Summary:
This adds p99.9 and p99.99 response times to the benchmark report and
adds a second report, report2.txt, that lists tests in test order rather
than the order in which they were run, so all thread counts for the
overwrite test are listed together, then all for the update test, and so on.
Also changes fillseq to compress all levels, to avoid the write-amp from
rewriting uncompressed files when they reach the first level that compresses.
Increase max_write_buffer_number to avoid stalls during fillseq and make
max_background_flushes agree with max_write_buffer_number.
See https://gist.github.com/mdcallag/297ff4316a25cb2988f7 for an example
of the new report (report2.txt)
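The write-buffer settings above map to Options fields. A minimal sketch with illustrative values, not the script's exact numbers:
#include "rocksdb/options.h"
rocksdb::Options FillseqOptions() {
  rocksdb::Options opts;
  // More memtables so fillseq does not stall waiting on flushes.
  opts.max_write_buffer_number = 8;
  // Keep flush parallelism in line with the number of write buffers.
  opts.max_background_flushes = 7;
  return opts;
}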
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36537
Summary:
Currently users have no way to tell whether a key seen by the TablePropertiesCollector callback is a put, delete, or merge. Add a new callback that also passes the entry type; a sketch of its use follows below.
Also refactor the code so that
(1) the table properties collector and the internal table properties collector are two separate data structures, with the latter kept internal
(2) table builders only receive internal table properties collectors
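Here is a minimal sketch of a collector that uses the new entry-type callback. The signature follows the public header, though the exact parameters may differ across versions, and the property name is hypothetical:
#include <cstdint>
#include <string>
#include "rocksdb/table_properties.h"
class DeleteCounter : public rocksdb::TablePropertiesCollector {
 public:
  rocksdb::Status AddUserKey(const rocksdb::Slice& key,
                             const rocksdb::Slice& value,
                             rocksdb::EntryType type,
                             rocksdb::SequenceNumber seq,
                             uint64_t file_size) override {
    // The new callback tells us whether this entry is a put, delete,
    // or merge; count the deletes.
    if (type == rocksdb::kEntryDelete) {
      ++deletes_;
    }
    return rocksdb::Status::OK();
  }
  rocksdb::Status Finish(rocksdb::UserCollectedProperties* props) override {
    props->insert({"example.num-deletes", std::to_string(deletes_)});
    return rocksdb::Status::OK();
  }
  rocksdb::UserCollectedProperties GetReadableProperties() const override {
    return rocksdb::UserCollectedProperties{};
  }
  const char* Name() const override { return "DeleteCounter"; }
 private:
  uint64_t deletes_ = 0;
};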
Test Plan: Add cases in table_properties_collector_test to cover both of old and new ways of using TablePropertiesCollector.
Reviewers: yhchiang, igor.sugak, rven, igor
Reviewed By: rven, igor
Subscribers: meyering, yoshinorim, maykov, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D35373
Summary:
If accumulated_num_non_deletions_ were ever smaller than
accumulated_num_deletions_, the computation of
"accumulated_num_non_deletions_ - accumulated_num_deletions_"
would result in a logically "negative" value, but since
the two operands are unsigned (uint64_t), the result corresponding
to e.g. -1 would be 2^64 - 1.
Instead, return 0 in that case.
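A minimal sketch of the guarded subtraction, using the member names from the summary (not the exact RocksDB code):
#include <cstdint>
uint64_t NumNonDeletionEntries(uint64_t accumulated_num_non_deletions,
                               uint64_t accumulated_num_deletions) {
  // Unsigned subtraction wraps around, so 3 - 5 would be 2^64 - 2;
  // clamp to 0 instead.
  if (accumulated_num_non_deletions < accumulated_num_deletions) {
    return 0;
  }
  return accumulated_num_non_deletions - accumulated_num_deletions;
}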
Test Plan:
- ensure "make check" still passes
- temporarily add an "abort();" call in the new "if"-block, and
observe that it fails in some test cases. However, note that
this case is triggered only when the two numbers are equal.
Thus, no test case triggers the erroneous behavior this
change is designed to avoid. If anyone can construct a
scenario in which that bug would be triggered, I'll be
happy to add a test case.
Reviewers: ljin, igor, rven, igor.sugak, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36489
Summary: An int is used for level size targets when options_.level_compaction_dynamic_level_bytes=true, which will overflow when the database grows big. Fix it.
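A minimal sketch of the overflow pattern being fixed, with hypothetical values (not the actual RocksDB code):
#include <cstdint>
#include <cstdio>
int main() {
  // Dynamic level sizes grow multiplicatively; with 32-bit math the
  // target exceeds INT_MAX once levels reach a few GB.
  int base_bytes = 256 * 1024 * 1024;  // 256MB base target fits in int
  // Widening before the multiply avoids the 32-bit overflow.
  uint64_t target = static_cast<uint64_t>(base_bytes) * 10;  // 2.5GB
  printf("level target: %llu bytes\n", (unsigned long long)target);
  return 0;
}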
Test Plan: Add a new unit test which fails without the fix.
Reviewers: rven, yhchiang, MarkCallaghan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba, yoshinorim
Differential Revision: https://reviews.facebook.net/D36453
Summary: In some db_test tests, sync points are not cleared, which will cause unexpected results in the tests that follow. Clean them up during test cleanup.
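A minimal sketch of the cleanup, assuming the internal SyncPoint test helper from util/sync_point.h (the header location may vary by version):
#include "util/sync_point.h"
void ClearSyncPoints() {
  // Stop processing and drop any callbacks left over from a test so
  // they cannot fire in later tests.
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  rocksdb::SyncPoint::GetInstance()->ClearAllCallBacks();
}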
Test Plan:
Run the same tests that used to fail:
build using USE_CLANG=1 and run
./db_test --gtest_filter="DBTest.CompressLevelCompaction:*DBTestUniversalCompactionParallel*"
Reviewers: rven, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36429
Summary:
This diff fixes a crash found when an empty database is opened in readonly mode.
We now check the number of levels before we open the DB as a compacted DB.
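A minimal sketch of the affected path, assuming a hypothetical database directory:
#include "rocksdb/db.h"
rocksdb::Status OpenEmptyReadOnly() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = false;
  // Opening an empty database read-only used to crash when it was
  // treated as a compacted DB; the level count is now checked first.
  return rocksdb::DB::OpenForReadOnly(options, "/tmp/empty_db", &db);
}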
Test Plan: DBTest.EmptyCompactedDB
Reviewers: igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36327
Summary:
Fix the make unity build. The local stats variable name was shadowing a
global stats variable.
Test Plan:
Run the build
OPT=-DTRAVIS V=1 make unity
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36285
Summary: After you run `arc diff`, just run `build_tools/trigger_jenkins_test.sh` and Jenkins will test your diff!
Test Plan: Triggered a build on Jenkins
Reviewers: sdong, rven, IslamAbdelRahman, anthony, yhchiang, meyering
Reviewed By: meyering
Subscribers: meyering, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36021
Summary:
After the recent change to DBTest.DynamicCompactionOptions, we occasionally hit another non-deterministic case where an L0 slowdown is triggered while the timeout for the hard limit should not be triggered.
Fix it by increasing the L0 slowdown trigger at the same time.
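The relevant knobs, as a minimal sketch with illustrative values (not the test's exact numbers):
#include "rocksdb/options.h"
rocksdb::Options SlowdownOptions() {
  rocksdb::Options opts;
  opts.level0_file_num_compaction_trigger = 4;
  // Raising the slowdown trigger keeps an L0 build-up in the test
  // from causing a slowdown before the hard limit is exercised.
  opts.level0_slowdown_writes_trigger = 12;
  opts.level0_stop_writes_trigger = 20;
  return opts;
}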
Test Plan: Run the failed test.
Reviewers: igor, rven
Reviewed By: rven
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36219
Summary:
With this change, universal compaction stores its outputs in L1 and up.
The compaction pick logic stays the same; outputs are stored in the largest "level" possible.
If options.num_levels=1, it behaves exactly as it does now.
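A minimal sketch of the option combination this enables (values illustrative):
#include "rocksdb/options.h"
rocksdb::Options UniversalMultiLevel() {
  rocksdb::Options opts;
  opts.compaction_style = rocksdb::kCompactionStyleUniversal;
  // With num_levels > 1, compaction outputs are placed in the largest
  // possible level; num_levels = 1 keeps the old behavior.
  opts.num_levels = 4;
  return opts;
}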
Test Plan:
1) convert most of the existing unit tests for universal compaction to cover both one level and multiple levels.
2) add a unit test to cover parallel compaction in universal compaction, run with one level and with multiple levels.
3) add a unit test migrating from a multi-level setting back to a one-level setting.
4) add a unit test that inserts keys to trigger multiple rounds of compactions and verifies the results.
Reviewers: rven, kradhakrishnan, yhchiang, igor
Reviewed By: igor
Subscribers: meyering, leveldb, MarkCallaghan, dhruba
Differential Revision: https://reviews.facebook.net/D34539
Summary:
Just couple of small changes:
1. removed signal_test, since it doesn't seem useful and we don't even run it as part of `make check`
2. moved perf_context_test to TESTS instead of PROGRAMS
3. `make release` probably shouldn't compile benchmarks. We currently rely on `make release` building db_bench (via Jenkins), so I left db_bench there.
This is just a minor cleanup. We need to rethink our targets since they are a bit messy right now. We can do this during our tech debt week.
Test Plan: make release
Reviewers: anthony, rven, yhchiang, sdong, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36171
Summary:
The --stats_interval_seconds option determines the interval for stats
reporting and overrides --stats_interval when set. I also changed
tools/benchmark.sh to report stats every 60 seconds, so I can avoid trying
to figure out a good value for --stats_interval per test and per storage device.
Task ID: #6631621
Test Plan:
run tools/run_flash_bench.sh and look at the output
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36189
Summary:
Cleaning up log files can do heavy IO, since we call ftruncate() in the destructor. We don't want to call ftruncate() in user threads.
This diff moves cleaning to background threads (flush and compaction)
Test Plan: make check, will also run valgrind
Reviewers: yhchiang, rven, MarkCallaghan, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36177
Summary:
This makes run_flash_bench.sh configurable. Previously it was hardwired for 1B keys and tests
ran for 12 hours each, which kept me from using it. This makes it configurable, adds more tests,
makes the per-test duration configurable, and refactors the test scripts.
Adds the seekrandomwhilemerging test to db_bench which is the same as seekrandomwhilewriting except
the writer thread does Merge rather than Put.
Forces the stall-time column in compaction IO stats to use a fixed format (H:M:S), which makes
it easier to scrape and parse. Also adds an option to AppendHumanMicros to force a fixed format,
since automation and humans sometimes want different formats.
Calls thread->stats.AddBytes(bytes); in db_bench for more tests to get the MB/sec summary
stats in the output at test end.
Adds the average ingest rate to compaction IO stats. Output now looks like:
https://gist.github.com/mdcallag/2bd64d18be1b93adc494
More information on the benchmark output is at https://gist.github.com/mdcallag/db43a58bd5ac624f01e1
For benchmark.sh, changes the default RocksDB configuration to reduce stalls:
* min_level_to_compress from 2 to 3
* hard_rate_limit from 2 to 3
* max_grandparent_overlap_factor and max_bytes_for_level_multiplier from 10 to 8
* L0 file count triggers from 4,8,12 to 4,12,20 for (start,stall,stop)
Task ID: #6596829
Test Plan:
run tools/run_flash_bench.sh
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36075
Summary: Most of the approach is copied from WebSQL's MySQL branch. It's nice that we can do this without touching core RocksDB code.
Test Plan: Compiles and runs. Didn't test the flashback code, as I don't have a flashback device and most of it is copy/paste.
Reviewers: MarkCallaghan, sdong
Reviewed By: sdong
Subscribers: rven, lgalanis, kradhakrishnan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35391
Summary:
This fixes two bugs: "make clean" would never remove the generated
file, util/build_version.cc, and since D33591, that file would be
regenerated only if it were absent.
* Makefile (clean): Remove the generated file.
(util/build_version.cc): Depend on the no-prereq FORCE target,
so that this target's rules are always run.
Since this is a generated file, make it read-only.
Also, be sure to remove the temporary file when it is the same
as the original.
Test Plan:
Ensure that we attempt regeneration every time.
Make it empty with an up-to-date time stamp and demonstrate
that it is rebuilt with the expected content:
$ : > util/build_version.cc
$ make util/build_version.o
GEN util/build_version.cc
GEN util/build_version.d
GEN util/build_version.cc
CC util/build_version.o
$ cat util/build_version.cc
#include "build_version.h"
const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:v3.10-2-gb30e72a";
const char* rocksdb_build_git_date = "rocksdb_build_git_date:2015-03-27";
const char* rocksdb_build_compile_date = __DATE__;
Reviewers: igor.sugak, sdong, ljin, igor, rven
Reviewed By: rven
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36087
Summary:
Assign the string properties to const string variables under the
DB::Properties namespace. This helps catch typos during compilation and
also consolidates the property definition in one place.
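For example, a property lookup can now use the named constant, so a typo fails to compile instead of silently returning no data (sketch, assuming an open DB):
#include <string>
#include "rocksdb/db.h"
std::string GetStats(rocksdb::DB* db) {
  std::string stats;
  // DB::Properties::kStats replaces the raw string "rocksdb.stats".
  db->GetProperty(rocksdb::DB::Properties::kStats, &stats);
  return stats;
}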
Test Plan: Run rocksdb unit tests
Reviewers: sdong, yoshinorim, igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35991
Summary:
Whenever we add new tests to db_sanity_test.cc, the verification test
will fail since the old version of db_sanity_test.cc does not have the
newly added test. This patch makes auto_sanity_test.sh always use
the db_sanity_test.cc of the newer commit.
As a result, a macro guard is added to allow db_sanity_test.cc to be
backward compatible.
Test Plan: tools/auto_sanity_test.sh
Reviewers: sdong, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35997
The error was:
util/logging.cc: In function 'int rocksdb::AppendHumanMicros(uint64_t, char*, int)':
error: util/logging.cc:41:39: integer overflow in expression [-Werror=overflow]
} else if (micros < 1000000l * 60 * 60) {
^
error: util/logging.cc:41:39: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
Summary: It's useful to know whether we have compression support or not
Test Plan:
Observed this in my LOG:
2015/03/26-10:34:35.460681 7f5b322b7840 Snappy supported
2015/03/26-10:34:35.460682 7f5b322b7840 Zlib supported
2015/03/26-10:34:35.460686 7f5b322b7840 Bzip supported
2015/03/26-10:34:35.460687 7f5b322b7840 LZ4 NOT supported
Reviewers: sdong, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35955
[maa@srv2-nskb-devg2 rocksdb-master]$ CXX=/usr/local/CC/gcc-4.7.4/bin/g++ EXTRA_CXXFLAGS=-std=c++11 DISABLE_WARNING_AS_ERROR=1 make db_bench
CC db/db_bench.o
db/db_bench.cc: In member function 'rocksdb::Slice rocksdb::Benchmark::AllocateKey(std::unique_ptr<const char []>*)':
db/db_bench.cc:1434:41: error: use of deleted function 'void std::unique_ptr<_Tp [], _Dp>::reset(_Up) [with _Up = char*; _Tp = const char; _Dp = std::default_delete<const char []>]'
In file included from /usr/local/CC/gcc-4.7.4/lib/gcc/x86_64-unknown-linux-gnu/4.7.4/../../../../include/c++/4.7.4/memory:86:0,
from ./include/rocksdb/db.h:14,
from ./db/dbformat.h:14,
from ./db/db_impl.h:21,
from db/db_bench.cc:33:
Summary:
* Makefile (COMPILE_WITH_TSAN): Avoid a link failure by disabling
-pg when building with TSAN enabled.
Now that "make check" builds all $(PROGRAMS), it is linking
a few programs that were not normally linked before.
For example, this would fail to link with the following diagnostic:
COMPILE_WITH_TSAN=1 make -j40 log_and_apply_bench
CCLD log_and_apply_bench
ld: /usr/lib/../lib64/gcrt1.o: relocation R_X86_64_32S against `__libc_csu_fini' can not be used when making a shared object; recompile with -fPIC
/usr/lib/../lib64/gcrt1.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
Makefile:511: recipe for target 'log_and_apply_bench' failed
make: *** [log_and_apply_bench] Error 1
Since removing -pg is sufficient to get past this link
failure, and no one cares about profiling TSAN-enabled
binaries anyway, we will refrain from linking with -pg
when TSAN testing is enabled. Use a new variable, "pg",
which is set to "-pg" in most cases but is made empty
when COMPILE_WITH_TSAN is set.
Test Plan:
Now, this succeeds:
rm -f log_and_apply_bench
COMPILE_WITH_TSAN=1 make -j40 log_and_apply_bench
Reviewers: igor.sugak, rven, sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35943
Summary:
* Makefile (check): Cause "make check" to build all $(PROGRAMS),
so that it verifies that the few benchmark-only source files not
already built via "make check" do compile and link successfully.
Test Plan:
run "make clean; make check", and verify that
table/table_reader_bench.cc is now compiled.
Before, it was not, which led to an incomplete fix
for a build break.
Reviewers: ljin, igor.sugak, rven, igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35883
Summary: Update HISTORY.md for 3.10.0
Test Plan: No code change.
Reviewers: sdong, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35871
Summary: Fixing issues with the get context function.
Test Plan: Run make commit-prereq
Reviewers: sdong, meyering, yhchiang
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35853
Summary: Fix the compile error that appears when thread status is disabled (built with NROCKSDB_THREAD_STATUS defined).
Test Plan: make dbg OPT=-DNROCKSDB_THREAD_STATUS -j32
Reviewers: sdong, igor, rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35847
Summary:
We have added new stats and perf_context counters for measuring the time consumed by merge and filter operations.
We have bounded all the merge operations within the GUARD statement and collected the total time for these operations in the DB. A sketch of reading the new counter follows.
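A minimal sketch, assuming the thread-local perf_context of this era and the merge_operator_time_nanos counter this diff adds:
#include <cstdint>
#include "rocksdb/perf_context.h"
uint64_t MeasureMergeTime() {
  // Timing counters are only populated at kEnableTime.
  rocksdb::SetPerfLevel(rocksdb::kEnableTime);
  rocksdb::perf_context.Reset();
  // ... perform reads/writes that invoke the merge operator ...
  return rocksdb::perf_context.merge_operator_time_nanos;
}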
Test Plan: WIP
Reviewers: rven, yhchiang, kradhakrishnan, igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D34377
Summary:
Report elapsed time of a thread operation in micros in ThreadStatus
instead of start time of a thread operation in seconds since the
Epoch, 1970-01-01 00:00:00 (UTC).
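A minimal sketch of how a caller sees the new field, assuming the GetThreadList API and the op_elapsed_micros member:
#include <cstdio>
#include <vector>
#include "rocksdb/env.h"
#include "rocksdb/thread_status.h"
void PrintElapsed() {
  std::vector<rocksdb::ThreadStatus> threads;
  rocksdb::Env::Default()->GetThreadList(&threads);
  for (const auto& t : threads) {
    // Elapsed micros of the in-flight operation, not its start time.
    printf("%s: %llu us\n", t.cf_name.c_str(),
           static_cast<unsigned long long>(t.op_elapsed_micros));
  }
}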
Test Plan:
./db_bench --benchmarks=fillrandom --num=100000 --threads=40 \
--max_background_compactions=10 --max_background_flushes=3 \
--thread_status_per_interval=1000 --key_size=16 --value_size=1000 \
--num_column_families=10
Sample Output:
ThreadID ThreadType cfName Operation ElapsedTime Stage State
140667724562496 High Pri column_family_name_000002 Flush 772.419 ms FlushJob::WriteLevel0Table
140667728756800 High Pri default Flush 617.845 ms FlushJob::WriteLevel0Table
140667732951104 High Pri column_family_name_000005 Flush 772.078 ms FlushJob::WriteLevel0Table
140667875557440 Low Pri column_family_name_000008 Compaction 1409.216 ms CompactionJob::Install
140667737145408 Low Pri
140667749728320 Low Pri
140667816837184 Low Pri column_family_name_000007 Compaction 1071.815 ms CompactionJob::ProcessKeyValueCompaction
140667787477056 Low Pri column_family_name_000009 Compaction 772.516 ms CompactionJob::ProcessKeyValueCompaction
140667741339712 Low Pri
140667758116928 Low Pri column_family_name_000004 Compaction 620.739 ms CompactionJob::ProcessKeyValueCompaction
140667753922624 Low Pri
140667842003008 Low Pri column_family_name_000006 Compaction 1260.079 ms CompactionJob::ProcessKeyValueCompaction
140667745534016 Low Pri
Reviewers: sdong, igor, rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35769
Summary: There is no alternative to the GetLiveFiles() function
Test Plan: none
Reviewers: sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35805
Summary:
Improve ThreadStatusSingleCompaction in two ways:
1. Use a SYNC_POINT dependency instead of a sleep to ensure compaction
won't happen before the test finishes its "Put Phase".
2. In the Put Phase, continue until we have a sufficient number of
L0 files. Note that during the Put Phase there won't be any
compaction that consumes L0 files, because of item 1. A sketch of
the sync-point dependency follows.
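A minimal sketch of item 1's ordering, assuming the internal SyncPoint helper and illustrative sync-point names:
#include "util/sync_point.h"
void HoldCompactionUntilPutsFinish() {
  // The second point cannot proceed until the first has been reached,
  // so compaction waits for the test to finish its Put Phase.
  rocksdb::SyncPoint::GetInstance()->LoadDependency(
      {{"DBTest::PutPhase:Done", "CompactionJob::Run():Start"}});
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();
}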
Test Plan: ./db_test --gtest_filter="*ThreadStatusSingleCompaction*"
Reviewers: sdong, igor, rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35727
Summary:
Limiting verbose printing to "command=scan"
Test Plan:
Run make check and manual testing of sst_dump_test
Reviewers: sdong
CC: leveldb
Task ID: #6575982