Summary:
In some places we create a NotFound status with an empty message, yet it still incurs a malloc. With this patch, the malloc is avoided in that case.
The motivation is that in the db_bench readrandom test, when none of the keys exist, about 4% of the total running time is spent on mallocs for Status, plus a similar amount of CPU on freeing them, which is unnecessary.
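As an illustration of the idea (a minimal sketch, not the actual rocksdb::Status layout), the message buffer is heap-allocated only when a message is actually supplied, so a message-less NotFound() is just an enum copy:

    #include <cstring>
    #include <memory>
    #include <string>

    // Simplified sketch: state_ stays null (no malloc) when no message is given.
    class Status {
     public:
      Status() : code_(kOk) {}

      static Status NotFound() { return Status(kNotFound); }  // no allocation
      static Status NotFound(const std::string& msg) {        // allocates
        Status s(kNotFound);
        s.state_.reset(new char[msg.size() + 1]);
        std::memcpy(s.state_.get(), msg.c_str(), msg.size() + 1);
        return s;
      }

      bool IsNotFound() const { return code_ == kNotFound; }
      const char* message() const { return state_ ? state_.get() : ""; }

     private:
      enum Code { kOk, kNotFound };
      explicit Status(Code code) : code_(code) {}
      Code code_;
      std::unique_ptr<char[]> state_;  // null when the message is empty
    };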
Test Plan: make all check
Reviewers: dhruba, haobo, igor
Reviewed By: haobo
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14691
Summary:
A vector that leverages a pre-allocated stack-based array to achieve better
performance for arrays with a small number of items.
Test Plan:
Added tests for both correctness and performance
Here is the performance benchmark between vector and autovector
Please note that in the test "Creation and Insertion Test", the test cases were designed with the motivation described below:
* no element inserted: the internal array of std::vector may not actually get initialized.
* one element inserted: the internal array of std::vector must have been initialized.
* kSize elements inserted: this shows the most time we'll spend if we keep everything on the stack.
* 2 * kSize elements inserted: the internal vector of autovector must have been initialized.
Note: kSize is the capacity of autovector
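For reference, the core idea can be sketched like this (a simplified illustration, not the actual autovector implementation):

    #include <cstddef>
    #include <vector>

    // Simplified illustration of the idea behind autovector: the first kSize
    // elements live in a stack-allocated array; only after that do we fall
    // back to a heap-backed std::vector.
    template <typename T, size_t kSize = 8>
    class AutovectorSketch {
     public:
      void push_back(const T& item) {
        if (num_stack_items_ < kSize) {
          values_[num_stack_items_++] = item;   // no malloc for small sizes
        } else {
          vect_.push_back(item);                // heap kicks in past kSize
        }
      }

      size_t size() const { return num_stack_items_ + vect_.size(); }

      const T& operator[](size_t n) const {
        return n < num_stack_items_ ? values_[n] : vect_[n - num_stack_items_];
      }

     private:
      size_t num_stack_items_ = 0;
      T values_[kSize];        // pre-allocated inline, no heap allocation
      std::vector<T> vect_;    // used only when more than kSize items
    };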
=====================================================
Creation and Insertion Test
=====================================================
created 100000 vectors:
each was inserted with 0 elements
total time elapsed: 128000 (ns)
created 100000 autovectors:
each was inserted with 0 elements
total time elapsed: 3641000 (ns)
created 100000 VectorWithReserveSizes:
each was inserted with 0 elements
total time elapsed: 9896000 (ns)
-----------------------------------
created 100000 vectors:
each was inserted with 1 elements
total time elapsed: 11089000 (ns)
created 100000 autovectors:
each was inserted with 1 elements
total time elapsed: 5008000 (ns)
created 100000 VectorWithReserveSizes:
each was inserted with 1 elements
total time elapsed: 24271000 (ns)
-----------------------------------
created 100000 vectors:
each was inserted with 4 elements
total time elapsed: 39369000 (ns)
created 100000 autovectors:
each was inserted with 4 elements
total time elapsed: 10121000 (ns)
created 100000 VectorWithReserveSizes:
each was inserted with 4 elements
total time elapsed: 28473000 (ns)
-----------------------------------
created 100000 vectors:
each was inserted with 8 elements
total time elapsed: 75013000 (ns)
created 100000 autovectors:
each was inserted with 8 elements
total time elapsed: 18237000 (ns)
created 100000 VectorWithReserveSizes:
each was inserted with 8 elements
total time elapsed: 42464000 (ns)
-----------------------------------
created 100000 vectors:
each was inserted with 16 elements
total time elapsed: 102319000 (ns)
created 100000 autovectors:
each was inserted with 16 elements
total time elapsed: 76724000 (ns)
created 100000 VectorWithReserveSizes:
each was inserted with 16 elements
total time elapsed: 68285000 (ns)
-----------------------------------
=====================================================
Sequence Access Test
=====================================================
performed 100000 sequence access against vector
size: 4
total time elapsed: 198000 (ns)
performed 100000 sequence access against autovector
size: 4
total time elapsed: 306000 (ns)
-----------------------------------
performed 100000 sequence access against vector
size: 8
total time elapsed: 565000 (ns)
performed 100000 sequence access against autovector
size: 8
total time elapsed: 512000 (ns)
-----------------------------------
performed 100000 sequence access against vector
size: 16
total time elapsed: 1076000 (ns)
performed 100000 sequence access against autovector
size: 16
total time elapsed: 1070000 (ns)
-----------------------------------
Reviewers: dhruba, haobo, sdong, chip
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14655
Summary:
Instead of locking and saving the DB state on every read, we can cache the state and update the cached copy only when it changes. This change reduces lock contention and speeds up read operations on the DB.
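The general shape of the change, as a hedged sketch (the names and layout below are illustrative, not necessarily what the patch uses): readers grab a ref-counted, immutable copy of the read state instead of assembling it under the mutex on every operation, and the cached state is replaced only when something actually changes.

    #include <atomic>
    #include <mutex>

    // Illustrative only: an immutable, ref-counted read state. Readers bump a
    // refcount inside a short critical section; the state is rebuilt only on
    // changes (flush, compaction, ...), not on every read.
    struct ReadState {
      std::atomic<int> refs{1};
      // ... pointers to memtable, immutable memtables and current Version ...
      void Ref() { refs.fetch_add(1, std::memory_order_relaxed); }
      void Unref() {
        if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1) delete this;
      }
    };

    class DBSketch {
     public:
      ~DBSketch() { current_->Unref(); }

      // Called on each read; the caller must Unref() the result when done.
      ReadState* AcquireReadState() {
        std::lock_guard<std::mutex> l(mu_);
        current_->Ref();
        return current_;
      }

      // Called only when the read state actually changes.
      void InstallReadState(ReadState* new_state) {
        std::lock_guard<std::mutex> l(mu_);
        ReadState* old = current_;
        current_ = new_state;
        old->Unref();
      }

     private:
      std::mutex mu_;
      ReadState* current_ = new ReadState();
    };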
Performance improvements are substantial, although there is some cost in no-read workloads. I ran the regression tests on my devserver and here are the numbers:
overwrite 56345 -> 63001
fillseq 193730 -> 185296
readrandom 771301 -> 1219803 (58% improvement!)
readrandom_smallblockcache 677609 -> 862850
readrandom_memtable_sst 710440 -> 1109223
readrandom_fillunique_random 221589 -> 247869
memtablefillrandom 105286 -> 92643
memtablereadrandom 763033 -> 1288862
Test Plan:
make asan_check
I am also running db_stress
Reviewers: dhruba, haobo, sdong, kailiu
Reviewed By: haobo
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14679
Summary:
For some tests I want to cache the database prior to running other tests on the same invocation
of db_bench. The readtocache test ignores --threads and --reads so those can be used by other tests
and it will still do a full read of --num rows with one thread. It might be invoked like:
db_bench --benchmarks=readtocache,readrandom --reads 100 --num 10000 --threads 8
Test Plan:
run db_bench
Reviewers: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14739
Summary:
db_test should be the first to execute because it finds the most bugs.
Also, when third parties report issues, we don't want an ldb error message; we prefer a db_test error message. For example, see this thread: https://github.com/facebook/rocksdb/issues/25
Test Plan: make check
Reviewers: dhruba, haobo, kailiu
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14715
Summary: I realized that "D14409 Avoid sorting in Version::Get() by presorting them in VersionSet::Builder::SaveTo()" does the work in a suboptimal place. SaveTo() usually runs inside the mutex. Move the sorting to Finalize(), which is called outside the mutex.
Test Plan: make all check
Reviewers: dhruba, haobo, kailiu
Reviewed By: dhruba
CC: igor, leveldb
Differential Revision: https://reviews.facebook.net/D14607
Summary: There seems to be a tradeoff in the current code: we do a malloc for every Get() in order to save one malloc for a flush inside the mutex. That malloc takes about 5% of CPU time in readrandom tests. We might want to make the tradeoff the other way around.
Test Plan: make all check
Reviewers: dhruba, haobo, igor
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14697
Summary:
I realized that the manifest will get deleted by PurgeObsoleteFiles in DBImpl, but it is still cleaner to delete
files before we restore the backup
Test Plan: backupable_db_test
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14619
Summary: @MarkCallaghan's tests indicate that performance with 8k rows in the memtable is much worse than with an empty memtable. I wanted to add a regression test that measures this effect, so we can optimize it. However, the current config shows 634461 QPS on my devbox. Mark, any idea why this is so much faster than your measurements?
Test Plan: Ran the regression test.
Reviewers: MarkCallaghan, dhruba, haobo
Reviewed By: MarkCallaghan
CC: leveldb, MarkCallaghan
Differential Revision: https://reviews.facebook.net/D14511
Summary: In get operations, merge_operands is only used in a few cases. Lazily initializing it can reduce average latency in some cases.
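As a rough sketch of the lazy-initialization pattern (the type and member names are assumptions for illustration, not the exact ones in the patch):

    #include <cstddef>
    #include <deque>
    #include <memory>
    #include <string>

    // The operand container is only allocated the first time a merge operand
    // shows up; most Get() calls never pay for it.
    class MergeContextSketch {
     public:
      void PushOperand(const std::string& operand) {
        if (!operands_) {
          operands_.reset(new std::deque<std::string>());  // first merge only
        }
        operands_->push_back(operand);
      }

      size_t NumOperands() const { return operands_ ? operands_->size() : 0; }

     private:
      std::unique_ptr<std::deque<std::string>> operands_;  // null until needed
    };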
Test Plan: make all check
Reviewers: haobo, kailiu, dhruba
Reviewed By: haobo
CC: igor, nkg-, leveldb
Differential Revision: https://reviews.facebook.net/D14415
Conflicts:
db/db_impl.cc
db/memtable.cc
Summary: Pre-sort files in VersionSet::Builder::SaveTo() so that Version::Get() no longer needs to sort them when looking up a value. This avoids the cost of the vector operations and sorting in Version::Get().
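Conceptually (a simplified sketch, not the actual Version code), once a level's files are sorted by key range at SaveTo() time, the read path can locate the candidate file with a binary search:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Each file covers the key range [smallest, largest].
    struct FileMetaSketch {
      std::string smallest;
      std::string largest;
    };

    // `files` must already be sorted by largest key and non-overlapping (done
    // once when the Version is built); lookup is then O(log n) with no sorting.
    const FileMetaSketch* FindFile(const std::vector<FileMetaSketch>& files,
                                   const std::string& key) {
      auto it = std::lower_bound(
          files.begin(), files.end(), key,
          [](const FileMetaSketch& f, const std::string& k) {
            return f.largest < k;
          });
      if (it != files.end() && key >= it->smallest) {
        return &*it;  // the only file in this level that may contain the key
      }
      return nullptr;
    }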
Test Plan: make all check
Reviewers: haobo, kailiu, dhruba
Reviewed By: dhruba
CC: nkg-, igor, leveldb
Differential Revision: https://reviews.facebook.net/D14409
Summary:
Creating new iterators over memtables can be expensive, so move that work out of the mutex.
DBImpl::WriteLevel0Table()'s mems appears to be a local vector that is only used by flushing, and the memtables to flush are immutable, so it should be safe to do so.
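The pattern, in a hedged sketch (stand-in types, not the real MemTable API):

    #include <mutex>
    #include <vector>

    // Stand-ins just to show the locking pattern.
    struct IterSketch {};
    struct MemTableSketch {
      IterSketch* NewIterator() { return new IterSketch(); }
    };

    // Build the flush iterators without holding the DB mutex: `mems` is local
    // to the flush and the memtables in it are immutable, so no shared mutable
    // state is touched while unlocked.
    void BuildFlushIterators(std::mutex* db_mutex,
                             const std::vector<MemTableSketch*>& mems,
                             std::vector<IterSketch*>* out) {
      db_mutex->unlock();
      for (MemTableSketch* m : mems) {
        out->push_back(m->NewIterator());
      }
      db_mutex->lock();  // reacquire before touching shared DB state again
    }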
Test Plan: make all check
Reviewers: haobo, dhruba, kailiu
Reviewed By: dhruba
CC: igor, leveldb
Differential Revision: https://reviews.facebook.net/D14577
Conflicts:
db/db_impl.cc
Summary:
I ran a get benchmark where all the data is in the cache and observed that most of the time is spent waiting for the lock in LRUCache.
This is an effort to optimize LRUCache.
Test Plan:
The data was loaded with fillseq. Then, I ran a benchmark:
./db_bench --db=/tmp/rocksdb_stat_bench --num=1000000 --benchmarks=readrandom --statistics=1 --use_existing_db=1 --threads=16 --disable_seek_compaction=1 --cache_size=20000000000 --cache_numshardbits=8 --table_cache_numshardbits=8
I ran the benchmark three times. Here are the results:
AFTER THE PATCH: 798072, 803998, 811807
BEFORE THE PATCH: 782008, 815593, 763017
Reviewers: dhruba, haobo, kailiu
Reviewed By: haobo
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14571
Summary: We now delete backups with newer sequence number, so the clients don't have to handle confusing situations when they restore from backup.
Test Plan: added a unit test
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14547
Summary: fflush() takes a lock, which is heavyweight. I added flush_pending_, but more importantly, I removed LogFlush() from foreground threads.
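One way to picture it, as a hedged sketch (the real logger code differs): foreground threads only mark that a flush is wanted, and the fflush() itself runs off the hot path.

    #include <atomic>
    #include <cstdio>

    class LoggerSketch {
     public:
      explicit LoggerSketch(std::FILE* f) : file_(f) {}

      // Foreground threads: cheap flag set instead of a locking fflush().
      void MarkFlushPending() {
        flush_pending_.store(true, std::memory_order_release);
      }

      // Background thread: performs the actual flush when one is pending.
      void MaybeFlush() {
        if (flush_pending_.exchange(false, std::memory_order_acq_rel)) {
          std::fflush(file_);
        }
      }

     private:
      std::FILE* file_;
      std::atomic<bool> flush_pending_{false};
    };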
Test Plan: ./db_test
Reviewers: dhruba, haobo
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14535
Summary: Valgrind complained about BackupableDB. This fixes valgrind errors. Also, I cleaned up some code.
Test Plan: valgrind does not complain anymore
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14529
Summary:
In this diff I present BackupableDB v1. You can easily use it to back up your DB, and it will do incremental snapshots for you.
Let's first describe how you would use BackupableDB. It inherits the StackableDB interface, so you can easily construct it with your DB object -- it adds a method RollTheSnapshot() to the DB object. When you call RollTheSnapshot(), the current snapshot of the DB is stored in the backup dir. To restore, you can just call RestoreDBFromBackup() on a BackupableDB (which is a static method) and it will restore all files from the backup dir. In the next version, it will even support automatic backups every X minutes. A short usage sketch follows the directory layout below.
There are multiple things you can configure:
1. backup_env and db_env can be different, which is awesome because then you can easily backup to HDFS or wherever you feel like.
2. sync - if true, it *guarantees* backup consistency on machine reboot
3. number of snapshots to keep - this keeps the last N snapshots around in case you want, for some reason, to be able to restore from an earlier snapshot. All backups are done in an incremental fashion - if we already have 00010.sst, we will not copy it again. *IMPORTANT* -- this is based on the assumption that 00010.sst never changes - two files named 00010.sst from the same DB will always be exactly the same. Is this true? I always copy the manifest, current and log files.
4. You can decide if you want to flush the memtables before you back up, or you're fine with backing up the log files -- either way, you get a complete and consistent view of the database at the time of backup.
5. More settings can be found in BackupableDBOptions
Here is the directory structure I use:
backup_dir/CURRENT_SNAPSHOT - just 4 bytes holding the latest snapshot
0, 1, 2, ... - files containing a serialized version of each snapshot - i.e., a list of files
files/*.sst - sst files shared between snapshots - if one snapshot references 00010.sst and another one needs to back it up from the DB, it will just reference the same file
files/0/, 1/, 2/, ... - snapshot directories containing private snapshot files - current, manifest and log files
All the files are ref-counted and deleted immediately when they go out of scope.
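A usage sketch based on the method names above (the header path, constructor and argument lists are assumptions for illustration, not the verified v1 API):

    #include <string>
    #include "utilities/backupable_db.h"  // header path is an assumption

    void BackupAndRestore(rocksdb::DB* db) {
      // Option values here are illustrative.
      rocksdb::BackupableDBOptions backup_options("/path/to/backup_dir");

      // BackupableDB wraps the live DB (it is a StackableDB).
      rocksdb::BackupableDB backupable_db(db, backup_options);

      // Take an incremental snapshot of the current DB state.
      backupable_db.RollTheSnapshot();

      // Later: restore the latest snapshot into a DB directory (static method;
      // the exact signature is assumed here).
      rocksdb::BackupableDB::RestoreDBFromBackup(backup_options,
                                                 "/path/to/db_dir");
    }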
Some other stuff in this diff:
1. Added a GetEnv() method to the DB. Discussed with @haobo and we agreed that it seems the right thing to do.
2. Fixed the StackableDB interface. The way it was set up before, I was not able to implement BackupableDB.
Test Plan:
I have a unittest, but please don't look at this yet. I just hacked it up to help me with debugging. I will write a lot of good tests and update the diff.
Also, `make asan_check`
Reviewers: dhruba, haobo, emayanke
Reviewed By: dhruba
CC: leveldb, haobo
Differential Revision: https://reviews.facebook.net/D14295
Branch detection did not work in Jenkins. I realized that it sets the
GIT_BRANCH env variable to point to the current branch, so let's try
using that for branch detection.
Summary:
This will help me a lot! When we hit an assertion in a unittest, we now get the whole stack trace.
Also, I changed the stack trace a bit: we now include actual demangled C++ class::function symbols!
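For reference, demangled frames can be produced with backtrace() plus abi::__cxa_demangle along these lines (a generic sketch, not the exact code in this diff):

    #include <cxxabi.h>
    #include <execinfo.h>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Capture the current stack and print demangled C++ symbols where possible.
    void PrintStackTrace() {
      void* frames[64];
      int n = backtrace(frames, 64);
      char** symbols = backtrace_symbols(frames, n);
      if (symbols == nullptr) return;
      for (int i = 0; i < n; ++i) {
        // Lines look like: ./binary(_ZN3Foo3BarEv+0x1c) [0x4005d6]
        char* begin = std::strchr(symbols[i], '(');
        char* plus = begin ? std::strchr(begin, '+') : nullptr;
        if (begin != nullptr && plus != nullptr) {
          *plus = '\0';
          int status = 0;
          char* demangled =
              abi::__cxa_demangle(begin + 1, nullptr, nullptr, &status);
          std::printf("#%d %s\n", i, status == 0 ? demangled : begin + 1);
          std::free(demangled);
        } else {
          std::printf("#%d %s\n", i, symbols[i]);
        }
      }
      std::free(symbols);
    }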
Test Plan: Added ASSERT_TRUE(false) to a test, observed a stack trace
Reviewers: haobo, dhruba, kailiu
Reviewed By: kailiu
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14499
Summary: When running regression tests on other branches, this will push values to entity rocksdb_build.$git_branch
Test Plan: Ran the regression test on the regression branch, observed values sent to ODS in the entity rocksdb_build.regression
Reviewers: kailiu
Reviewed By: kailiu
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14493
Summary: Now DBWithTTL takes a DB* and can behave more like a StackableDB. This saves us a lot of duplicated work in defining interfaces.
Test Plan: ttl_test with ASAN - OK
Reviewers: emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14481
Summary: As title
Test Plan: make clean and make
Reviewers: igor
Reviewed By: igor
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14469
Summary: This enables rocksdb users to get the db identity without depending on implementation details (the fact that it is stored in the IDENTITY file).
Test Plan: db/db_test (has identity checks)
Reviewers: dhruba, haobo, igor, kailiu
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14463
Summary:
This adds 2 options for compression to db_bench:
* universal_compression_size_percent
* compression_level - to set zlib compression level
It also logs compression_size_percent at startup in LOG
Test Plan:
make check, run db_bench
Reviewers: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14439
On Debian/testing and RHEL6, builds would fail due to undefined references to
google::FlagRegisterer::FlagRegisterer. It would seem that -lgflags was
missing from the build script.