Summary:
CircleCI runner-based benchmarking. A runner is a dedicated machine configured for CircleCI to perform work on. Our work is a repeatable benchmark, the `benchmark-linux` job in `config.yml`.
A runner, in CircleCI terminology, is a machine that is managed by the client (us) rather than running on CircleCI resources in the cloud. This means that we define and configure the hardware, and therefore the performance is repeatable and predictable, which is what we need for performance regression benchmarking.
On a schedule (or on commit, during branch development), benchmarks are kicked off on the runner, and a script, `benchmark_log_tool.py`, then parses the benchmark output and pushes it as documents into a pre-configured OpenSearch index connected to an OpenSearch dashboard. Members of the team can examine benchmark performance changes on the dashboard.
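For illustration, a minimal sketch of what the parse-and-push step can look like. The endpoint, index name, field names, and the assumed `db_bench`-style output format are placeholders; the actual `benchmark_log_tool.py` in this PR may differ.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: parse db_bench-style result lines from stdin and
push them to an OpenSearch endpoint over its REST API. Not the script from
this PR; endpoint, index, and fields are hypothetical."""

import datetime
import json
import re
import sys

import requests  # assumed available; the real tool may use a different client

# Hypothetical endpoint and index; the real ones are pre-configured.
OPENSEARCH_URL = "https://opensearch.example.com:9200"
INDEX = "rocksdb-benchmarks"

# Assumes result lines of the rough form:
#   fillseq      :       2.059 micros/op 485687 ops/sec; ...
RESULT_RE = re.compile(
    r"^(?P<test>\w+)\s*:\s*(?P<micros_per_op>[\d.]+) micros/op"
    r".*?(?P<ops_per_sec>[\d.]+) ops/sec"
)


def parse_line(line):
    """Return a benchmark document for a result line, or None otherwise."""
    m = RESULT_RE.search(line)
    if not m:
        return None
    return {
        "test": m.group("test"),
        "micros_per_op": float(m.group("micros_per_op")),
        "ops_per_sec": float(m.group("ops_per_sec")),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


def push(doc):
    """Index a single document via the OpenSearch REST _doc endpoint."""
    resp = requests.post(
        f"{OPENSEARCH_URL}/{INDEX}/_doc",
        data=json.dumps(doc),
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Usage: db_bench ... | python3 this_sketch.py
    for line in sys.stdin:
        doc = parse_line(line)
        if doc:
            push(doc)
```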
As time progresses we can add more benchmarks to the suite that gets run.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9723
Reviewed By: pdillinger
Differential Revision: D35555626
Pulled By: jay-zhuang
fbshipit-source-id: c6a905ca04494495c3784cfbb991f5ab90c807ee