Summary:
DBBlockCacheTest.TestWithCompressedBlockCache depends on snappy compression, so the test fails when snappy is not available.
Disable this test when we don't have snappy.
https://ci-builds.fb.com/view/rocksdb/job/rocksdb_no_compression/833/console
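A hedged sketch of guarding the test (the actual patch may use a different mechanism; the SNAPPY macro and the elided body are assumptions here):
```
#include "gtest/gtest.h"

// Sketch only: compile the compression-dependent test out when the build has
// no snappy support, so the no-compression CI job no longer fails on it.
#ifdef SNAPPY
TEST(DBBlockCacheTest, TestWithCompressedBlockCache) {
  // ... body that writes data with kSnappyCompression and reads it back
  //     through the compressed block cache (elided) ...
}
#endif  // SNAPPY
```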
Test Plan: run the test when compression libraries are not available
Reviewers: sdong, yiwu
Reviewed By: yiwu
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55413
Summary:
Using gcc 5.2.1 to compile RocksDB on Ubuntu 15.10 results in a warning about an unused variable. Warnings are treated as errors, so this results in a build break. Fix the issue and make sure that everything compiles in that particular configuration.
P.S. Lint complains about a non-ASCII character used in the source code ("Wagner-Fischer"). Fix this as well.
Test Plan:
# Everything compiles cleanly on the system that exhibited the problem.
# `make clean && make -j 16` on CentOS.
# `make clean && USE_CLANG=1 make -j 16` on CentOS.
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55503
Summary: Set of updates to the subcompaction benchmark script which are based on our internal discussions. The intent behind the changes is to make sure that the scripts will correctly reflect how we're doing the actual benchmarking.
Test Plan: Tested by exercising the full set of compaction benchmarks and validating the execution and consistency of results.
Reviewers: MarkCallaghan, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55461
Summary:
The pre-existing code tries to clamp between 65,536 and 0, so every value
ends up clamped to 65,536. This results in very small buffers, which makes
ShouldFlushNow() return true far too easily, which in turn leads to the
assertion failing and database performance being "not what it should be".
https://github.com/facebook/rocksdb/issues/1018
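A generic C++ illustration of the pitfall (not the actual RocksDB code; the bounds and helper names are only for demonstration):
```
#include <algorithm>
#include <cstddef>
#include <iostream>

// Illustration only, not the actual RocksDB code. If the clamp bounds are
// reversed ("between 65,536 and 0"), every input collapses to 65,536,
// regardless of the configured buffer size:
size_t BrokenClamp(size_t value) {
  return std::max<size_t>(std::min<size_t>(value, 0), 65536);
}

// A clamp with the bounds in the intended order lets in-range values pass
// through unchanged (low/high are placeholders for the real limits):
size_t Clamp(size_t value, size_t low, size_t high) {
  return std::min(std::max(value, low), high);
}

int main() {
  std::cout << BrokenClamp(64 << 20) << "\n";             // always 65536
  std::cout << Clamp(64 << 20, 65536, 1u << 30) << "\n";  // 67108864
}
```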
Test Plan: make check
Reviewers: sdong, andrewkr, IslamAbdelRahman, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55455
Summary:
In the block-based table reader, we now put the index reader into the block cache, where it can be retrieved after a DB restart. However, the index reader may reference the internal comparator, which can be destroyed after the DB restarts, causing problems.
Fix it by making the cache key identical per table reader.
Test Plan: Add a new test that failed without the commit but now passes.
Reviewers: IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: maro, yhchiang, kradhakrishnan, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55287
Summary: This will make it easier for admins and devs to use RepairDB.
Test Plan:
Tried deleting the manifest and verified it recovers:
$ ldb --create_if_missing --db=/tmp/test_db put ok ok
$ rm -f /tmp/test_db/MANIFEST-000001
$ ./ldb --db=/tmp/test_db repair
$ ldb --db=/tmp/test_db get ok
ok
Reviewers: yhchiang, sdong, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55359
Summary: Refactored db_bench transaction stress tests so that they can be called from unit tests as well.
Test Plan: run new unit test as well as db_bench
Reviewers: yhchiang, IslamAbdelRahman, sdong
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55203
Summary:
Fix for https://github.com/facebook/mysql-5.6/issues/174
When there are no old files to purge, the vector.at(i) call was crashing:
  if (old_info_log_file_count != 0 &&
      old_info_log_file_count >= db_options_.keep_log_file_num) {
    std::sort(old_info_log_files.begin(), old_info_log_files.end());
    size_t end = old_info_log_file_count - db_options_.keep_log_file_num;
    for (unsigned int i = 0; i <= end; i++) {
      std::string& to_delete = old_info_log_files.at(i);
Added a check that old_info_log_file_count is non-zero.
Test Plan: run existing tests
Reviewers: gunnarku, vasilep, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: andrewkr, webscalesql-eng, dhruba
Differential Revision: https://reviews.facebook.net/D55245
Summary:
- Keep track of obsolete manifests in VersionSet
- Updated FindObsoleteFiles() to put obsolete manifests in the JobContext for later use by PurgeObsoleteFiles()
- Added test case that verifies a stale manifest is deleted by a non-full purge
Test Plan:
$ ./backupable_db_test --gtest_filter=BackupableDBTest.ChangeManifestDuringBackupCreation
Reviewers: IslamAbdelRahman, yoshinorim, sdong
Reviewed By: sdong
Subscribers: andrewkr, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D55269
Summary:
Give Cache an option to fail Cache::Insert() when full. Update call sites to check the status and handle the error.
I honestly don't know the correct behavior for all of the call sites when they encounter an error. Please let me know if you see something wrong or if more unit tests are needed.
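A minimal usage sketch of an updated call site, assuming the post-change Insert() returns a Status and takes an optional Handle** out-parameter as described above:
```
#include <cstddef>
#include "rocksdb/cache.h"
#include "rocksdb/status.h"

// Minimal sketch of a call site updated for the new behavior; the exact
// Insert() signature is assumed from the description above.
rocksdb::Status CacheValue(rocksdb::Cache* cache, const rocksdb::Slice& key,
                           void* value, size_t charge,
                           void (*deleter)(const rocksdb::Slice&, void*)) {
  rocksdb::Cache::Handle* handle = nullptr;
  rocksdb::Status s = cache->Insert(key, value, charge, deleter, &handle);
  if (!s.ok()) {
    // With the new option, Insert() can fail when the cache is full;
    // callers must check the status instead of assuming success.
    return s;
  }
  cache->Release(handle);
  return rocksdb::Status::OK();
}
```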
Test Plan: make check -j32, see tests pass.
Reviewers: anthony, yhchiang, andrewkr, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D54705
Summary:
This is a follow-up patch of https://reviews.facebook.net/D54891.
As the information about files being compacted will also be used
when making compaction decisions, it is necessary to update the compaction
score when a compaction plan has been made but not yet executed.
This patch adds a missing call to update the compaction score in
CompactFiles().
Test Plan: compact_files_test
Reviewers: sdong, IslamAbdelRahman, kradhakrishnan, yiwu, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55227
Summary:
ReleaseCompactionFiles must be called while the DB mutex is held,
but the documentation is missing.
Test Plan: no code change
Reviewers: anthony, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D54987
Summary: Add db_test2 to .gitignore
Test Plan: make sure db_test2 doesn't show up in "git status"
Reviewers: sdong, yhchiang, andrewkr, anthony
Reviewed By: anthony
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D55191
Summary: Previously, reusing a transaction (by passing it as an argument to BeginTransaction) would not clear the transaction's snapshot. This was not a clear, well-defined behavior.
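A minimal sketch of the reuse pattern in question (error handling simplified); the third BeginTransaction() argument is the existing mechanism for reusing a transaction object:
```
#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

// Sketch only: reuse a Transaction object by passing it back to
// BeginTransaction(). After this change the reused transaction starts
// clean, without carrying over the previous snapshot.
void ReuseTransaction(rocksdb::TransactionDB* txn_db) {
  rocksdb::WriteOptions write_options;
  rocksdb::TransactionOptions txn_options;

  rocksdb::Transaction* txn =
      txn_db->BeginTransaction(write_options, txn_options);
  txn->SetSnapshot();
  // ... reads/writes against the snapshot ...
  rocksdb::Status s = txn->Commit();

  // Reuse the same object instead of allocating a new transaction;
  // it no longer sees the snapshot taken above.
  txn = txn_db->BeginTransaction(write_options, txn_options, txn);
  // ... use txn as a fresh transaction ...
  s = txn->Rollback();
  delete txn;
}
```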
Test Plan: improved test
Reviewers: sdong, IslamAbdelRahman, horuff, jkedgar
Reviewed By: jkedgar
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55053
Summary: I realized I was wrong about the naming convention again. Let me change it to the correct one.
Test Plan: Run unit tests.
Reviewers: IslamAbdelRahman, kradhakrishnan, yhchiang, andrewkr
Reviewed By: andrewkr
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D55041
Summary:
Use a plain if-then check instead of an assert in EraseColumnFamilyInfo
when the specified column family is not found in cf_info_map_.
That way a second deletion becomes a no-op instead of a crash.
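A simplified illustration of the pattern (the map and value type below are stand-ins, not the real ThreadStatus bookkeeping):
```
#include <map>
#include <string>

// Simplified stand-in for the real bookkeeping: the hard assert on the
// lookup becomes an early return, so erasing a missing entry is a no-op.
static std::map<const void*, std::string> cf_info_map_;

void EraseColumnFamilyInfo(const void* cf_key) {
  auto it = cf_info_map_.find(cf_key);
  if (it == cf_info_map_.end()) {
    // Previously an assert fired here, crashing on a second deletion.
    return;
  }
  cf_info_map_.erase(it);
}
```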
Test Plan: existing test.
Reviewers: sdong, anthony, kradhakrishnan, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55023
Summary:
Currently, when RocksDB tries to run a manual compaction to refit data into a level,
there's a ReFitLevel() process that requires that no bg work be currently running.
When RocksDB plans to ReFitLevel(), it will do the following:
1. pause scheduling new bg work.
2. wait until all bg work has finished.
3. do the ReFitLevel()
4. unpause scheduling new bg work.
However, because it pauses scheduling new bg work in step 1 and waits for all bg work
to finish in step 2, RocksDB will stop flushing until all bg work is done (which
could take a long time).
This patch fixes the issue by changing the way ReFitLevel() pauses the background work:
1. pause scheduling compaction.
2. wait until all bg work has finished.
3. pause scheduling flush
4. do ReFitLevel()
5. unpause both flush and compaction.
The major difference is that we only pause scheduling compactions in step 1 and wait
for all bg work to finish in step 2. This prevents flushes from being blocked for a long time.
There is a very rare case in which ReFitLevel() could starve in step 2, but it's
unlikely since flushes typically finish very fast.
Test Plan: existing test.
Reviewers: anthony, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55029
Summary: After working closely with Mark, Siying, and Yueh-Hsuan, this set of changes reflects the updates needed to measure RocksDB subcompaction performance correctly. The essence of the benchmark is executing `fillrandom` followed by `compact` with the correct set of options for various numbers of subcompactions.
Test Plan: Tested internally to verify correctness and reliability.
Reviewers: sdong, yhchiang, MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D55089
Summary:
The call to
```
CaptureCurrentFileNumberInPendingOutputs()
```
should be before
```
versions_->NewFileNumber()
```
Right now we are not actually protecting the file from being deleted.
Test Plan: make check
Reviewers: sdong, anthony, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D54645
Summary:
Now that we get sizes efficiently, we no longer need the workaround to
embed file size in filename.
Test Plan:
$ ./backupable_db_test
Reviewers: sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55035
Summary: We always add the compaction to level0_compactions_in_progress_ for universal compaction, so we should also remove it. The bug causes an assert failure when running manual compaction.
Test Plan:
TEST_TMPDIR=/dev/shm/ ./db_bench --benchmarks=fillrandom,compact --subcompactions=16 --compaction_style=1
always fails on my host. After the fix, it doesn't fail any more.
Reviewers: IslamAbdelRahman, andrewkr, kradhakrishnan, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D55017
Summary: We want to provide a way to detect whether an iterator is stale and needs to be recreated. Add an iterator property that returns a version number.
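A hedged usage sketch via Iterator::GetProperty(); the property name below is an assumption based on this description, not quoted from the patch:
```
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/iterator.h"

// Sketch: compare the iterator's version number against the one recorded
// when the iterator was created to decide whether to recreate it. The
// property name is an assumption, not quoted from the patch.
bool IteratorLooksStale(rocksdb::Iterator* iter,
                        const std::string& version_at_creation) {
  std::string current_version;
  rocksdb::Status s = iter->GetProperty(
      "rocksdb.iterator.super-version-number", &current_version);
  return s.ok() && current_version != version_at_creation;
}
```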
Test Plan: Add two unit tests for it.
Reviewers: IslamAbdelRahman, yhchiang, anthony, kradhakrishnan, andrewkr
Reviewed By: andrewkr
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D54921
Summary: Currently we skip adding boundary keys to subcompaction candidates when we see an empty level. This leaves subcompactions almost disabled for universal compaction. We should consider all files instead.
Test Plan: Run existing tests.
Reviewers: IslamAbdelRahman, andrewkr, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D55005
Summary: There are a few options in struct DBOptions that aren't handled by options_helper.cc. Add those missing options so they can be used by GetDBOptionsFromString() and friends.
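A minimal sketch of exercising the options-string parsing; the option names shown are just examples of DBOptions fields, not necessarily the ones added here:
```
#include <string>
#include "rocksdb/convenience.h"
#include "rocksdb/options.h"

// Sketch: parse DBOptions from a string on top of a base configuration.
// The option names are examples of DBOptions fields, not necessarily the
// specific ones added by this change.
rocksdb::Status ParseDbOptions(rocksdb::DBOptions* out) {
  rocksdb::DBOptions base;
  return rocksdb::GetDBOptionsFromString(
      base, "create_if_missing=true;max_open_files=5000", out);
}
```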
Test Plan: Updated options_test.cc, reran all tests.
Reviewers: sdong, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D54603
Summary:
For VerifyBackup(), backup files can be spread across "shared/",
"shared_checksum/", and "private/" subdirectories, so we have to
bulk get all three.
For CreateNewBackup(), we make two separate bulk calls: one for the
data files and one for WAL files.
There is also a new helper function, ExtendPathnameToSizeBytes(),
that translates the file attributes vector to a map. I decided to leave the
return type of GetChildrenFileAttributes() (from D53781) as a vector to keep
it consistent with GetChildren().
Depends on D53781.
Test Plan:
verified relevant unit tests
$ ./backupable_db_test
Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53919
Summary: BlockBasedTableOptions.format_version = 2 uses a better encoding format. Now it's time to make it the default.
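A small sketch of setting the option explicitly; with this change the default is already 2, so the assignment only matters on older releases or to pin a specific value:
```
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: pin format_version explicitly. After this change the default is
// already 2, so this is only needed on older releases or to force a value.
rocksdb::Options MakeOptions() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.format_version = 2;  // newer block encoding format
  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```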
Test Plan: Run all existing tests.
Reviewers: igor, yhchiang, anthony, kradhakrishnan, andrewkr, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: MarkCallaghan, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D54879
Summary: 4.5 is already cut, so we can now increase the version to 4.6.
Test Plan: Not needed.
Reviewers: anthony, kradhakrishnan, andrewkr, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D54963