RocksDB Trace Analyzer (#4091)

Summary:
A framework for analyzing RocksDB traces.

After collecting a trace with the tool from [PR #3837](https://github.com/facebook/rocksdb/pull/3837), users can run the Trace Analyzer to interpret, analyze, and characterize the collected workload.
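For reference, a minimal sketch of collecting such a trace through the tracing API (the same calls the unit test in this PR uses); the function name and path handling are illustrative, not part of this change, and the include set mirrors the unit test:

```cpp
#include <memory>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/status.h"
#include "rocksdb/trace_reader_writer.h"
#include "util/trace_replay.h"

// Start tracing on an open DB, run the workload to be captured, then stop.
rocksdb::Status CollectTrace(rocksdb::DB* db, rocksdb::Env* env,
                             const std::string& trace_path) {
  rocksdb::EnvOptions env_options;
  rocksdb::TraceOptions trace_opt;
  std::unique_ptr<rocksdb::TraceWriter> trace_writer;
  rocksdb::Status s =
      rocksdb::NewFileTraceWriter(env, env_options, trace_path, &trace_writer);
  if (!s.ok()) {
    return s;
  }
  s = db->StartTrace(trace_opt, std::move(trace_writer));
  if (!s.ok()) {
    return s;
  }
  // ... issue the Get/Put/Delete/SingleDelete/DeleteRange/Merge workload ...
  return db->EndTrace();
}
```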
**Input:**
1. Trace file
2. Whole key space file

**Statistics:**
1. Access count of each operation (Get, Put, Delete, SingleDelete, DeleteRange, Merge) in each column family
2. Key hotness (access count) of each key
3. Key space separation based on a given prefix
4. Key size distribution
5. Value size distribution, if applicable
6. Top K accessed keys
7. QPS statistics, including average QPS and peak QPS
8. Top K accessed prefixes
9. Query correlation analysis: the number of operations of type X that occur after type Y, and the corresponding average time intervals

**Output:**
1. Key access heat map (in either the accessed key space or the whole key space)
2. Trace sequence file (the raw trace interpreted into a line-based text file for future use)
3. Time series (the key space ID and its access time)
4. Key access count distribution
5. Key size distribution
6. Value size distribution (in each interval)
7. Whole key space separation by prefix
8. Accessed key space separation by prefix
9. QPS of each operation and each column family
10. Top K QPS and their accessed prefix ranges
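As a rough, illustrative sketch of driving the tool (the flag subset is the one exercised by the unit tests in this PR; the helper name and paths are made up):

```cpp
#include <string>
#include <vector>

#include "tools/trace_analyzer_tool.h"

// Build an argv the same way the unit tests do and call the tool entry point.
int RunTraceAnalyzerExample() {
  std::vector<std::string> args = {
      "./trace_analyzer", "-analyze_get", "-analyze_put",
      "-convert_to_human_readable_trace", "-output_key_stats",
      "-output_access_count_stats", "-output_qps_stats",
      "-output_time_series", "-output_prefix=test", "-output_prefix_cut=1",
      "-trace_path=/tmp/trace",         // collected trace file (illustrative)
      "-key_space_dir=/tmp/key_space",  // directory holding the whole key space file
      "-output_dir=/tmp/trace_output"}; // where result files are written
  std::vector<char*> argv;
  for (auto& arg : args) {
    argv.push_back(const_cast<char*>(arg.c_str()));
  }
  return rocksdb::trace_analyzer_tool(static_cast<int>(argv.size()),
                                      argv.data());
}
```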

**Test:**
1. Added unit tests analyzing Get, Put, Delete, SingleDelete, DeleteRange, and Merge
2. Generated a trace and analyzed it

**Implemented but not tested (due to limitations of trace_replay):**
1. Iterator analysis, supporting Seek() and SeekForPrev()
2. Analysis of the number of keys found by Get

**Future Work:**
1. Support execution time analysis of each request
2. Support cache hit and block read statistics for Get

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4091

Differential Revision: D9256157

Pulled By: zhichao-cao

fbshipit-source-id: f0ceacb7eedbc43a3eee6e85b76087d7832a8fe6
Commit 999d955e4f (parent 1b1d264342) on main, authored by Zhichao Cao, committed by Facebook GitHub Bot.
Changed files:
1. .gitignore (2 lines changed)
2. CMakeLists.txt (2 lines changed)
3. HISTORY.md (1 line changed)
4. Makefile (8 lines changed)
5. TARGETS (6 lines changed)
6. options/options_parser.cc (39 lines changed)
7. src.mk (2 lines changed)
8. tools/trace_analyzer.cc (25 lines changed)
9. tools/trace_analyzer_test.cc (689 lines changed)
10. tools/trace_analyzer_tool.cc (1798 lines changed)
11. tools/trace_analyzer_tool.h (271 lines changed)
12. util/file_reader_writer.cc (37 lines changed)
13. util/file_reader_writer.h (4 lines changed)
.gitignore

@@ -45,6 +45,8 @@ etags
rocksdb_dump
rocksdb_undump
db_test2
trace_analyzer
trace_analyzer_test
java/out
java/target

CMakeLists.txt

@@ -573,6 +573,7 @@ set(SOURCES
tools/ldb_cmd.cc
tools/ldb_tool.cc
tools/sst_dump_tool.cc
tools/trace_analyzer_tool.cc
util/arena.cc
util/auto_roll_logger.cc
util/bloom.cc
@@ -922,6 +923,7 @@ if(WITH_TESTS)
tools/ldb_cmd_test.cc
tools/reduce_levels_test.cc
tools/sst_dump_test.cc
tools/trace_analyzer_test.cc
util/arena_test.cc
util/auto_roll_logger_test.cc
util/autovector_test.cc

HISTORY.md

@@ -3,6 +3,7 @@
### Public API Change
### New Features
* Changes the format of index blocks by delta encoding the index values, which are the block handles. This saves the encoding of BlockHandle::offset of the non-head index entries in each restart interval. The feature is backward compatible but not forward compatible. It is disabled by default unless format_version 4 or above is used.
* Add a new tool: trace_analyzer. Trace_analyzer analyzes the trace file generated by the trace_replay API. It can convert the binary-format trace file to a human-readable txt file, output statistics of the analyzed query types (such as access statistics and size statistics), combine the dumped whole key space file into the analysis, support query correlation analysis, etc. Currently supported query types are: Get, Put, Delete, SingleDelete, DeleteRange, Merge, and Iterator (Seek, SeekForPrev only).
### Bug Fixes
* Fix a bug in misreporting the estimated partition index size in properties block.

Makefile

@@ -530,6 +530,7 @@ TESTS = \
write_prepared_transaction_test \
write_unprepared_transaction_test \
db_universal_compaction_test \
trace_analyzer_test \
PARALLEL_TEST = \
backupable_db_test \
@@ -573,6 +574,7 @@ TOOLS = \
rocksdb_dump \
rocksdb_undump \
blob_dump \
trace_analyzer \
TEST_LIBS = \
librocksdb_env_basic_test.a
@@ -1457,6 +1459,12 @@ options_util_test: utilities/options/options_util_test.o $(LIBOBJECTS) $(TESTHAR
db_bench_tool_test: tools/db_bench_tool_test.o $(BENCHTOOLOBJECTS) $(TESTHARNESS)
$(AM_LINK)
trace_analyzer: tools/trace_analyzer.o $(LIBOBJECTS)
$(AM_LINK)
trace_analyzer_test: tools/trace_analyzer_test.o $(BENCHTOOLOBJECTS) $(TESTHARNESS)
$(AM_LINK)
event_logger_test: util/event_logger_test.o $(LIBOBJECTS) $(TESTHARNESS)
$(AM_LINK)

TARGETS

@@ -194,6 +194,7 @@ cpp_library(
"tools/ldb_cmd.cc",
"tools/ldb_tool.cc",
"tools/sst_dump_tool.cc",
"tools/trace_analyzer_tool.cc",
"util/arena.cc",
"util/auto_roll_logger.cc",
"util/bloom.cc",
@@ -954,6 +955,11 @@ ROCKS_TESTS = [
"tools/sst_dump_test.cc",
"serial",
],
[
"trace_analyzer_test",
"tools/trace_analyzer_test.cc",
"serial",
],
[
"statistics_test",
"monitoring/statistics_test.cc",

options/options_parser.cc

@@ -200,45 +200,6 @@ Status RocksDBOptionsParser::ParseStatement(std::string* name,
return Status::OK();
}
namespace {
bool ReadOneLine(std::istringstream* iss, SequentialFile* seq_file,
std::string* output, bool* has_data, Status* result) {
const int kBufferSize = 8192;
char buffer[kBufferSize + 1];
Slice input_slice;
std::string line;
bool has_complete_line = false;
while (!has_complete_line) {
if (std::getline(*iss, line)) {
has_complete_line = !iss->eof();
} else {
has_complete_line = false;
}
if (!has_complete_line) {
// if we're not sure whether we have a complete line,
// further read from the file.
if (*has_data) {
*result = seq_file->Read(kBufferSize, &input_slice, buffer);
}
if (input_slice.size() == 0) {
// meaning we have read all the data
*has_data = false;
break;
} else {
iss->str(line + input_slice.ToString());
// reset the internal state of iss so that we can keep reading it.
iss->clear();
*has_data = (input_slice.size() == kBufferSize);
continue;
}
}
}
*output = line;
return *has_data || has_complete_line;
}
} // namespace
Status RocksDBOptionsParser::Parse(const std::string& file_name, Env* env,
bool ignore_unknown_options) {
Reset();

src.mk

@@ -231,6 +231,7 @@ TOOL_LIB_SOURCES = \
tools/ldb_tool.cc \
tools/sst_dump_tool.cc \
utilities/blob_db/blob_dump_tool.cc \
tools/trace_analyzer_tool.cc \
MOCK_LIB_SOURCES = \
table/mock_table.cc \
@@ -360,6 +361,7 @@ MAIN_SOURCES = \
tools/ldb_cmd_test.cc \
tools/reduce_levels_test.cc \
tools/sst_dump_test.cc \
tools/trace_analyzer_test.cc \
util/arena_test.cc \
util/auto_roll_logger_test.cc \
util/autovector_test.cc \

tools/trace_analyzer.cc

@@ -0,0 +1,25 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
#ifndef ROCKSDB_LITE
#ifndef GFLAGS
#include <cstdio>
int main() {
fprintf(stderr, "Please install gflags to run rocksdb tools\n");
return 1;
}
#else
#include "tools/trace_analyzer_tool.h"
int main(int argc, char** argv) {
return rocksdb::trace_analyzer_tool(argc, argv);
}
#endif
#else
#include <stdio.h>
int main(int /*argc*/, char** /*argv*/) {
fprintf(stderr, "Not supported in lite mode.\n");
return 1;
}
#endif // ROCKSDB_LITE

tools/trace_analyzer_test.cc

@@ -0,0 +1,689 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2012 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
#ifndef ROCKSDB_LITE
#ifndef GFLAGS
#include <cstdio>
int main() {
fprintf(stderr, "Please install gflags to run trace_analyzer test\n");
return 1;
}
#else
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <sstream>
#include <thread>
#include "db/db_test_util.h"
#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/status.h"
#include "rocksdb/trace_reader_writer.h"
#include "tools/trace_analyzer_tool.h"
#include "util/testharness.h"
#include "util/testutil.h"
#include "util/trace_replay.h"
namespace rocksdb {
namespace {
static const int kMaxArgCount = 100;
static const size_t kArgBufferSize = 100000;
}
// The helper functions for the test
class TraceAnalyzerTest : public testing::Test {
public:
TraceAnalyzerTest() : rnd_(0xFB) {
// test_path_ = test::TmpDir() + "trace_analyzer_test";
test_path_ = test::PerThreadDBPath("trace_analyzer_test");
env_ = rocksdb::Env::Default();
env_->CreateDir(test_path_);
dbname_ = test_path_ + "/db";
}
~TraceAnalyzerTest() {}
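// Open a DB with tracing enabled, write one operation of each supported type
// (Put, Merge, Delete, SingleDelete, DeleteRange) through a WriteBatch, issue
// two Gets (one existing key, one missing key), and dump a whole key space
// file ("0.txt") for the analyzer to consume.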
void GenerateTrace(std::string trace_path) {
Options options;
options.create_if_missing = true;
options.IncreaseParallelism();
options.OptimizeLevelStyleCompaction();
options.merge_operator = MergeOperators::CreatePutOperator();
ReadOptions ro;
WriteOptions wo;
TraceOptions trace_opt;
DB* db_ = nullptr;
std::string value;
std::unique_ptr<TraceWriter> trace_writer;
ASSERT_OK(
NewFileTraceWriter(env_, env_options_, trace_path, &trace_writer));
ASSERT_OK(DB::Open(options, dbname_, &db_));
ASSERT_OK(db_->StartTrace(trace_opt, std::move(trace_writer)));
WriteBatch batch;
ASSERT_OK(batch.Put("a", "aaaaaaaaa"));
ASSERT_OK(batch.Merge("b", "aaaaaaaaaaaaaaaaaaaa"));
ASSERT_OK(batch.Delete("c"));
ASSERT_OK(batch.SingleDelete("d"));
ASSERT_OK(batch.DeleteRange("e", "f"));
ASSERT_OK(db_->Write(wo, &batch));
ASSERT_OK(db_->Get(ro, "a", &value));
std::this_thread::sleep_for (std::chrono::seconds(1));
db_->Get(ro, "g", &value);
ASSERT_OK(db_->EndTrace());
ASSERT_OK(env_->FileExists(trace_path));
std::unique_ptr<WritableFile> whole_f;
std::string whole_path = test_path_ + "/0.txt";
ASSERT_OK(env_->NewWritableFile(whole_path, &whole_f, env_options_));
std::string whole_str = "0x61\n0x62\n0x63\n0x64\n0x65\n0x66\n";
ASSERT_OK(whole_f->Append(whole_str));
delete db_;
ASSERT_OK(DestroyDB(dbname_, options));
}
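// Pack the argument strings into a C-style argc/argv buffer and invoke the
// trace_analyzer entry point directly.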
void RunTraceAnalyzer(const std::vector<std::string>& args) {
char arg_buffer[kArgBufferSize];
char* argv[kMaxArgCount];
int argc = 0;
int cursor = 0;
for (const auto& arg : args) {
ASSERT_LE(cursor + arg.size() + 1, kArgBufferSize);
ASSERT_LE(argc + 1, kMaxArgCount);
snprintf(arg_buffer + cursor, arg.size() + 1, "%s", arg.c_str());
argv[argc++] = arg_buffer + cursor;
cursor += static_cast<int>(arg.size()) + 1;
}
ASSERT_EQ(0, rocksdb::trace_analyzer_tool(argc, argv));
}
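// Read an output file line by line via ReadOneLine() and compare it with the
// expected content; when full_content is false, only the first character of
// each line is checked.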
void CheckFileContent(const std::vector<std::string>& cnt,
std::string file_path, bool full_content) {
ASSERT_OK(env_->FileExists(file_path));
std::unique_ptr<SequentialFile> f_ptr;
ASSERT_OK(env_->NewSequentialFile(file_path, &f_ptr, env_options_));
std::string get_line;
std::istringstream iss;
bool has_data = true;
std::vector<std::string> result;
uint32_t count;
Status s;
for (count = 0; ReadOneLine(&iss, f_ptr.get(), &get_line, &has_data, &s);
++count) {
ASSERT_OK(s);
result.push_back(get_line);
}
ASSERT_EQ(cnt.size(), result.size());
for (int i = 0; i < static_cast<int>(result.size()); i++) {
if (full_content) {
ASSERT_EQ(result[i], cnt[i]);
} else {
ASSERT_EQ(result[i][0], cnt[i][0]);
}
}
return;
}
rocksdb::Env* env_;
EnvOptions env_options_;
std::string test_path_;
std::string dbname_;
Random rnd_;
};
TEST_F(TraceAnalyzerTest, Get) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/get";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 10 0 1 1.000000", "0 10 1 1 1.000000"};
file_path = output_path + "/test-get-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 2"};
file_path = output_path + "/test-get-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
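// Each value is the TraceOperationType of one traced operation, in the order
// issued by GenerateTrace(): Put(1), Merge(5), Delete(2), SingleDelete(3),
// DeleteRange(4), Get(0), Get(0).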
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30",
"1 1 1 1.000000 1.000000 0x61"};
file_path = output_path + "/test-get-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"0 1533000630 0", "0 1533000630 1"};
file_path = output_path + "/test-get-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"0 1"};
file_path = output_path + "/test-get-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-get-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 0 0 0 0 0 0 0 1"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"1"};
file_path = output_path + "/test-get-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 1",
"The prefix: 0x61 Access count: 1"};
file_path = output_path + "/test-get-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
}
// Test analyzing of Put
TEST_F(TraceAnalyzerTest, Put) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/put";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-analyze_put",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 9 0 1 1.000000"};
file_path = output_path + "/test-put-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 1"};
file_path = output_path + "/test-put-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30"};
file_path = output_path + "/test-put-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"1 1533056278 0"};
file_path = output_path + "/test-put-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"0 1"};
file_path = output_path + "/test-put-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-put-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 1 0 0 0 0 0 0 2"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"1"};
file_path = output_path + "/test-put-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 1",
"The prefix: 0x61 Access count: 1"};
file_path = output_path + "/test-put-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
// Check the value size distribution
std::vector<std::string> value_dist = {
"Number_of_value_size_between 0 and 16 is: 1"};
file_path = output_path + "/test-put-0-accessed_value_size_distribution.txt";
CheckFileContent(value_dist, file_path, true);
}
// Test analyzing of delete
TEST_F(TraceAnalyzerTest, Delete) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/delete";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-analyze_put",
"-analyze_delete",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 0 0 1 1.000000"};
file_path = output_path + "/test-delete-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 1"};
file_path =
output_path + "/test-delete-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30"};
file_path = output_path + "/test-delete-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"2 1533000630 0"};
file_path = output_path + "/test-delete-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"2 1"};
file_path = output_path + "/test-delete-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-delete-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 1 1 0 0 0 0 0 3"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"1"};
file_path = output_path + "/test-delete-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 1",
"The prefix: 0x63 Access count: 1"};
file_path = output_path + "/test-delete-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
}
// Test analyzing of Merge
TEST_F(TraceAnalyzerTest, Merge) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/merge";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-analyze_put",
"-analyze_delete",
"-analyze_merge",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 20 0 1 1.000000"};
file_path = output_path + "/test-merge-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 1"};
file_path = output_path + "/test-merge-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30"};
file_path = output_path + "/test-merge-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"5 1533000630 0"};
file_path = output_path + "/test-merge-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"1 1"};
file_path = output_path + "/test-merge-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-merge-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 1 1 0 0 1 0 0 4"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"1"};
file_path = output_path + "/test-merge-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 1",
"The prefix: 0x62 Access count: 1"};
file_path = output_path + "/test-merge-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
// Check the value size distribution
std::vector<std::string> value_dist = {
"Number_of_value_size_between 0 and 24 is: 1"};
file_path =
output_path + "/test-merge-0-accessed_value_size_distribution.txt";
CheckFileContent(value_dist, file_path, true);
}
// Test analyzing of SingleDelete
TEST_F(TraceAnalyzerTest, SingleDelete) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/single_delete";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-analyze_put",
"-analyze_delete",
"-analyze_merge",
"-analyze_single_delete",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 0 0 1 1.000000"};
file_path = output_path + "/test-single_delete-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 1"};
file_path =
output_path + "/test-single_delete-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30"};
file_path = output_path + "/test-single_delete-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"3 1533000630 0"};
file_path = output_path + "/test-single_delete-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"3 1"};
file_path = output_path + "/test-single_delete-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-single_delete-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 1 1 1 0 1 0 0 5"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"1"};
file_path = output_path + "/test-single_delete-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 1",
"The prefix: 0x64 Access count: 1"};
file_path =
output_path + "/test-single_delete-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
}
// Test analyzing of DeleteRange
TEST_F(TraceAnalyzerTest, DeleteRange) {
std::string trace_path = test_path_ + "/trace";
std::string output_path = test_path_ + "/range_delete";
std::string file_path;
std::vector<std::string> paras = {"./trace_analyzer",
"-analyze_get",
"-analyze_put",
"-analyze_delete",
"-analyze_merge",
"-analyze_single_delete",
"-analyze_range_delete",
"-convert_to_human_readable_trace",
"-output_key_stats",
"-output_access_count_stats",
"-output_prefix=test",
"-output_prefix_cut=1",
"-output_time_series",
"-output_value_distribution",
"-output_qps_stats",
"-no_key",
"-no_print"};
Status s = env_->FileExists(trace_path);
if (!s.ok()) {
GenerateTrace(trace_path);
}
paras.push_back("-output_dir=" + output_path);
paras.push_back("-trace_path=" + trace_path);
paras.push_back("-key_space_dir=" + test_path_);
env_->CreateDir(output_path);
RunTraceAnalyzer(paras);
// check the key_stats file
std::vector<std::string> k_stats = {"0 0 0 1 1.000000", "0 0 1 1 1.000000"};
file_path = output_path + "/test-range_delete-0-accessed_key_stats.txt";
CheckFileContent(k_stats, file_path, true);
// Check the access count distribution
std::vector<std::string> k_dist = {"access_count: 1 num: 2"};
file_path =
output_path + "/test-range_delete-0-accessed_key_count_distribution.txt";
CheckFileContent(k_dist, file_path, true);
// Check the trace sequence
std::vector<std::string> k_sequence = {"1", "5", "2", "3", "4", "0", "0"};
file_path = output_path + "/test-human_readable_trace.txt";
CheckFileContent(k_sequence, file_path, false);
// Check the prefix
std::vector<std::string> k_prefix = {"0 0 0 0.000000 0.000000 0x30",
"1 1 1 1.000000 1.000000 0x65"};
file_path = output_path + "/test-range_delete-0-accessed_key_prefix_cut.txt";
CheckFileContent(k_prefix, file_path, true);
// Check the time series
std::vector<std::string> k_series = {"4 1533000630 0", "4 1533060100 1"};
file_path = output_path + "/test-range_delete-0-time_series.txt";
CheckFileContent(k_series, file_path, false);
// Check the accessed key in whole key space
std::vector<std::string> k_whole_access = {"4 1", "5 1"};
file_path = output_path + "/test-range_delete-0-whole_key_stats.txt";
CheckFileContent(k_whole_access, file_path, true);
// Check the whole key prefix cut
std::vector<std::string> k_whole_prefix = {"0 0x61", "1 0x62", "2 0x63",
"3 0x64", "4 0x65", "5 0x66"};
file_path = output_path + "/test-range_delete-0-whole_key_prefix_cut.txt";
CheckFileContent(k_whole_prefix, file_path, true);
// Check the overall qps
std::vector<std::string> all_qps = {"1 1 1 1 2 1 0 0 7"};
file_path = output_path + "/test-qps_stats.txt";
CheckFileContent(all_qps, file_path, true);
// Check the qps of get
std::vector<std::string> get_qps = {"2"};
file_path = output_path + "/test-range_delete-0-qps_stats.txt";
CheckFileContent(get_qps, file_path, true);
// Check the top k qps prefix cut
std::vector<std::string> top_qps = {"At time: 0 with QPS: 2",
"The prefix: 0x65 Access count: 1",
"The prefix: 0x66 Access count: 1"};
file_path =
output_path + "/test-range_delete-0-accessed_top_k_qps_prefix_cut.txt";
CheckFileContent(top_qps, file_path, true);
}
} // namespace rocksdb
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
#endif  // GFLAGS
#else
#include <stdio.h>
int main(int /*argc*/, char** /*argv*/) {
fprintf(stderr, "Trace_analyzer test is not supported in ROCKSDB_LITE\n");
return 0;
}
#endif  // !ROCKSDB_LITE

tools/trace_analyzer_tool.cc

(File diff suppressed because it is too large.)

tools/trace_analyzer_tool.h

@@ -0,0 +1,271 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#pragma once
#ifndef ROCKSDB_LITE
#include <list>
#include <map>
#include <queue>
#include <set>
#include <utility>
#include <vector>
#include "rocksdb/env.h"
#include "rocksdb/trace_reader_writer.h"
#include "rocksdb/write_batch.h"
#include "util/trace_replay.h"
namespace rocksdb {
class DBImpl;
class WriteBatch;
enum TraceOperationType : int {
kGet = 0,
kPut = 1,
kDelete = 2,
kSingleDelete = 3,
kRangeDelete = 4,
kMerge = 5,
kIteratorSeek = 6,
kIteratorSeekForPrev = 7,
kTaTypeNum = 8
};
struct TraceUnit {
uint64_t ts;
uint32_t type;
uint32_t cf_id;
size_t value_size;
std::string key;
};
struct TypeCorrelation {
uint64_t count;
uint64_t total_ts;
};
struct StatsUnit {
uint64_t key_id;
uint64_t access_count;
uint64_t latest_ts;
uint64_t succ_count; // current only used to count Get if key found
uint32_t cf_id;
size_t value_size;
std::vector<TypeCorrelation> v_correlation;
};
class AnalyzerOptions {
public:
std::vector<std::vector<int>> correlation_map;
std::vector<std::pair<int, int>> correlation_list;
AnalyzerOptions();
~AnalyzerOptions();
void SparseCorrelationInput(const std::string& in_str);
};
// Note that, for the variable names in the trace_analyzer,
// Starting with 'a_' means the variable is used for 'accessed_keys'.
// Starting with 'w_' means it is used for 'the whole key space'.
// Ending with '_f' means a file write or reader pointer.
// For example, 'a_count' means 'accessed_keys_count',
// 'w_key_f' means 'whole_key_space_file'.
struct TraceStats {
uint32_t cf_id;
std::string cf_name;
uint64_t a_count;
uint64_t a_succ_count;
uint64_t a_key_id;
uint64_t a_key_size_sqsum;
uint64_t a_key_size_sum;
uint64_t a_key_mid;
uint64_t a_value_size_sqsum;
uint64_t a_value_size_sum;
uint64_t a_value_mid;
uint32_t a_peak_qps;
double a_ave_qps;
std::map<std::string, StatsUnit> a_key_stats;
std::map<uint64_t, uint64_t> a_count_stats;
std::map<uint64_t, uint64_t> a_key_size_stats;
std::map<uint64_t, uint64_t> a_value_size_stats;
std::map<uint32_t, uint32_t> a_qps_stats;
std::map<uint32_t, std::map<std::string, uint32_t>> a_qps_prefix_stats;
std::priority_queue<std::pair<uint64_t, std::string>,
std::vector<std::pair<uint64_t, std::string>>,
std::greater<std::pair<uint64_t, std::string>>>
top_k_queue;
std::priority_queue<std::pair<uint64_t, std::string>,
std::vector<std::pair<uint64_t, std::string>>,
std::greater<std::pair<uint64_t, std::string>>>
top_k_prefix_access;
std::priority_queue<std::pair<double, std::string>,
std::vector<std::pair<double, std::string>>,
std::greater<std::pair<double, std::string>>>
top_k_prefix_ave;
std::priority_queue<std::pair<uint32_t, uint32_t>,
std::vector<std::pair<uint32_t, uint32_t>>,
std::greater<std::pair<uint32_t, uint32_t>>>
top_k_qps_sec;
std::list<TraceUnit> time_series;
std::vector<std::pair<uint64_t, uint64_t>> correlation_output;
std::unique_ptr<rocksdb::WritableFile> time_series_f;
std::unique_ptr<rocksdb::WritableFile> a_key_f;
std::unique_ptr<rocksdb::WritableFile> a_count_dist_f;
std::unique_ptr<rocksdb::WritableFile> a_prefix_cut_f;
std::unique_ptr<rocksdb::WritableFile> a_value_size_f;
std::unique_ptr<rocksdb::WritableFile> a_qps_f;
std::unique_ptr<rocksdb::WritableFile> a_top_qps_prefix_f;
std::unique_ptr<rocksdb::WritableFile> w_key_f;
std::unique_ptr<rocksdb::WritableFile> w_prefix_cut_f;
TraceStats();
~TraceStats();
};
struct TypeUnit {
std::string type_name;
bool enabled;
uint64_t total_keys;
uint64_t total_access;
uint64_t total_succ_access;
std::map<uint32_t, TraceStats> stats;
};
struct CfUnit {
uint32_t cf_id;
uint64_t w_count; // total keys in this cf if we use the whole key space
uint64_t a_count; // the total keys in this cf that are accessed
std::map<uint64_t, uint64_t> w_key_size_stats; // whole key space key size
// statistics of this cf
};
class TraceAnalyzer {
public:
TraceAnalyzer(std::string& trace_path, std::string& output_path,
AnalyzerOptions _analyzer_opts);
~TraceAnalyzer();
Status PrepareProcessing();
Status StartProcessing();
Status MakeStatistics();
Status ReProcessing();
Status EndProcessing();
Status WriteTraceUnit(TraceUnit& unit);
// The trace processing functions for different type
Status HandleGet(uint32_t column_family_id, const std::string& key,
const uint64_t& ts, const uint32_t& get_ret);
Status HandlePut(uint32_t column_family_id, const Slice& key,
const Slice& value);
Status HandleDelete(uint32_t column_family_id, const Slice& key);
Status HandleSingleDelete(uint32_t column_family_id, const Slice& key);
Status HandleDeleteRange(uint32_t column_family_id, const Slice& begin_key,
const Slice& end_key);
Status HandleMerge(uint32_t column_family_id, const Slice& key,
const Slice& value);
Status HandleIter(uint32_t column_family_id, const std::string& key,
const uint64_t& ts, TraceType& trace_type);
std::vector<TypeUnit>& GetTaVector() { return ta_; }
private:
rocksdb::Env* env_;
EnvOptions env_options_;
std::unique_ptr<TraceReader> trace_reader_;
size_t offset_;
char buffer_[1024];
uint64_t c_time_;
std::string trace_name_;
std::string output_path_;
AnalyzerOptions analyzer_opts_;
uint64_t total_requests_;
uint64_t total_access_keys_;
uint64_t total_gets_;
uint64_t total_writes_;
uint64_t begin_time_;
uint64_t end_time_;
uint64_t time_series_start_;
std::unique_ptr<rocksdb::WritableFile> trace_sequence_f_; // readable trace
std::unique_ptr<rocksdb::WritableFile> qps_f_; // overall qps
std::unique_ptr<rocksdb::SequentialFile> wkey_input_f_;
std::vector<TypeUnit> ta_; // The main statistic collecting data structure
std::map<uint32_t, CfUnit> cfs_; // All the cf_id appears in this trace;
std::vector<uint32_t> qps_peak_;
std::vector<double> qps_ave_;
Status ReadTraceHeader(Trace* header);
Status ReadTraceFooter(Trace* footer);
Status ReadTraceRecord(Trace* trace);
Status KeyStatsInsertion(const uint32_t& type, const uint32_t& cf_id,
const std::string& key, const size_t value_size,
const uint64_t ts);
Status StatsUnitCorrelationUpdate(StatsUnit& unit, const uint32_t& type,
const uint64_t& ts, const std::string& key);
Status OpenStatsOutputFiles(const std::string& type, TraceStats& new_stats);
Status CreateOutputFile(const std::string& type, const std::string& cf_name,
const std::string& ending,
std::unique_ptr<rocksdb::WritableFile>* f_ptr);
void CloseOutputFiles();
void PrintStatistics();
Status TraceUnitWriter(std::unique_ptr<rocksdb::WritableFile>& f_ptr,
TraceUnit& unit);
Status WriteTraceSequence(const uint32_t& type, const uint32_t& cf_id,
const std::string& key, const size_t value_size,
const uint64_t ts);
Status MakeStatisticKeyStatsOrPrefix(TraceStats& stats);
Status MakeStatisticCorrelation(TraceStats& stats, StatsUnit& unit);
Status MakeStatisticQPS();
};
// Write batch handler to be used for the WriteBatch iterator
// when processing the write trace
class TraceWriteHandler : public WriteBatch::Handler {
public:
TraceWriteHandler() { ta_ptr = nullptr; }
explicit TraceWriteHandler(TraceAnalyzer* _ta_ptr) { ta_ptr = _ta_ptr; }
~TraceWriteHandler() {}
virtual Status PutCF(uint32_t column_family_id, const Slice& key,
const Slice& value) override {
return ta_ptr->HandlePut(column_family_id, key, value);
}
virtual Status DeleteCF(uint32_t column_family_id,
const Slice& key) override {
return ta_ptr->HandleDelete(column_family_id, key);
}
virtual Status SingleDeleteCF(uint32_t column_family_id,
const Slice& key) override {
return ta_ptr->HandleSingleDelete(column_family_id, key);
}
virtual Status DeleteRangeCF(uint32_t column_family_id,
const Slice& begin_key,
const Slice& end_key) override {
return ta_ptr->HandleDeleteRange(column_family_id, begin_key, end_key);
}
virtual Status MergeCF(uint32_t column_family_id, const Slice& key,
const Slice& value) override {
return ta_ptr->HandleMerge(column_family_id, key, value);
}
private:
TraceAnalyzer* ta_ptr;
};
int trace_analyzer_tool(int argc, char** argv);
} // namespace rocksdb
#endif // ROCKSDB_LITE
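A hedged usage sketch (not part of this diff): TraceWriteHandler is meant to be passed to WriteBatch::Iterate() so that each PutCF/DeleteCF/SingleDeleteCF/DeleteRangeCF/MergeCF callback of a captured write batch is forwarded to the corresponding TraceAnalyzer::Handle* method. The wrapper function below is illustrative:

```cpp
#include "rocksdb/write_batch.h"
#include "tools/trace_analyzer_tool.h"

// Feed the writes recorded in a WriteBatch to the analyzer.
rocksdb::Status AnalyzeWriteBatch(rocksdb::TraceAnalyzer* analyzer,
                                  const rocksdb::WriteBatch& batch) {
  rocksdb::TraceWriteHandler handler(analyzer);
  return batch.Iterate(&handler);
}
```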

util/file_reader_writer.cc

@@ -760,4 +760,41 @@ Status NewWritableFile(Env* env, const std::string& fname,
return s;
}
bool ReadOneLine(std::istringstream* iss, SequentialFile* seq_file,
std::string* output, bool* has_data, Status* result) {
const int kBufferSize = 8192;
char buffer[kBufferSize + 1];
Slice input_slice;
std::string line;
bool has_complete_line = false;
while (!has_complete_line) {
if (std::getline(*iss, line)) {
has_complete_line = !iss->eof();
} else {
has_complete_line = false;
}
if (!has_complete_line) {
// if we're not sure whether we have a complete line,
// further read from the file.
if (*has_data) {
*result = seq_file->Read(kBufferSize, &input_slice, buffer);
}
if (input_slice.size() == 0) {
// meaning we have read all the data
*has_data = false;
break;
} else {
iss->str(line + input_slice.ToString());
// reset the internal state of iss so that we can keep reading it.
iss->clear();
*has_data = (input_slice.size() == kBufferSize);
continue;
}
}
}
*output = line;
return *has_data || has_complete_line;
}
} // namespace rocksdb
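A hedged sketch of using the relocated ReadOneLine() helper to walk a text file line by line; this mirrors CheckFileContent() in trace_analyzer_test.cc above, and the function name and error handling here are illustrative:

```cpp
#include <cstdio>
#include <memory>
#include <sstream>
#include <string>

#include "rocksdb/env.h"
#include "util/file_reader_writer.h"

// Print every line of a file read through the Env's SequentialFile API.
rocksdb::Status PrintLines(rocksdb::Env* env, const std::string& path) {
  std::unique_ptr<rocksdb::SequentialFile> file;
  rocksdb::EnvOptions env_options;
  rocksdb::Status s = env->NewSequentialFile(path, &file, env_options);
  if (!s.ok()) {
    return s;
  }
  std::istringstream iss;
  std::string line;
  bool has_data = true;
  rocksdb::Status read_status;
  while (rocksdb::ReadOneLine(&iss, file.get(), &line, &has_data,
                              &read_status)) {
    if (!read_status.ok()) {
      return read_status;
    }
    std::fprintf(stdout, "%s\n", line.c_str());
  }
  return read_status;
}
```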

util/file_reader_writer.h

@@ -8,6 +8,7 @@
// found in the LICENSE file. See the AUTHORS file for names of contributors.
#pragma once
#include <atomic>
#include <sstream>
#include <string>
#include "port/port.h"
#include "rocksdb/env.h"
@@ -250,4 +251,7 @@ class FilePrefetchBuffer {
extern Status NewWritableFile(Env* env, const std::string& fname,
unique_ptr<WritableFile>* result,
const EnvOptions& options);
bool ReadOneLine(std::istringstream* iss, SequentialFile* seq_file,
std::string* output, bool* has_data, Status* result);
} // namespace rocksdb
