Summary:
This diff allows compaction to reclaim storage more effectively.
In the current design, compactions are mainly triggered based on
file sizes. However, since deletion entries do not have values,
files with many deletion entries are less likely to be compacted.
As a result, it may take a while for deletion entries to be compacted.
This diff addresses the issue by compensating for the size of deletion
entries during the compaction process: the size of each deletion
entry is augmented by 2x the average value size. The diff applies
to both leveled and universal compactions.
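For illustration, a minimal sketch of the compensation arithmetic (the function and field names here are made up for the example, not the ones in the diff):

  #include <cstdint>

  // Deletion entries carry no value, so when sizes are computed for picking
  // compaction candidates, each deletion entry is counted as if it carried a
  // value of 2x the average value size observed so far.
  uint64_t CompensatedFileSize(uint64_t file_size, uint64_t num_deletions,
                               uint64_t average_value_size) {
    return file_size + num_deletions * 2 * average_value_size;
  }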
Test Plan:
develop CompactionDeletionTrigger
make db_test
./db_test
Reviewers: haobo, igor, ljin, sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19029
Summary:
RandomGenerator::Generate() currently has an assertion len < data_.size().
However, it is actually fine to have len == data_.size().
This diff changes the assertion to len <= data_.size().
Test Plan:
make db_bench
./db_bench
Reviewers: haobo, sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19269
Summary:
Enable the random values in Java DB Bench to be generated based
on the compression_ratio specified in the command-line arguments.
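For illustration, a sketch of one common way to produce values whose compressibility roughly matches compression_ratio (written in C++ for consistency with the rest of these notes; this is an assumed approach, not the actual Java implementation):

  #include <random>
  #include <string>

  // Generate a random fragment of length len * compression_ratio and repeat
  // it to fill the value, so a compressor can shrink the value to roughly
  // the requested ratio.
  std::string CompressibleValue(size_t len, double compression_ratio,
                                std::mt19937* rng) {
    size_t raw = static_cast<size_t>(len * compression_ratio);
    if (raw < 1) raw = 1;
    std::string fragment(raw, '\0');
    std::uniform_int_distribution<int> dist(' ', '~');  // printable ASCII
    for (char& c : fragment) {
      c = static_cast<char>(dist(*rng));
    }
    std::string value;
    while (value.size() < len) {
      value.append(fragment);
    }
    value.resize(len);
    return value;
  }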
Test Plan:
make rocksdbjava
java/jdb_bench.sh
Reviewers: sdong, ankgup87, haobo
Reviewed By: haobo
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19101
Summary:
Revert the default setting of InitFromCmdLineArgs() as all the callers
currently provide the full set of arguments.
Test Plan:
make reduce_levels_test
./reduce_levels_test
Reviewers: haobo, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19257
Summary:
Fixed the following compile error.
tools/reduce_levels_test.cc:89:31: error: no matching function for call to 'InitFromCmdLineArgs'
LDBCommand* level_reducer = LDBCommand::InitFromCmdLineArgs(args);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./util/ldb_cmd.h:56:22: note: candidate function not viable: requires 3 arguments, but 1 was provided
static LDBCommand* InitFromCmdLineArgs(
^
./util/ldb_cmd.h:62:22: note: candidate function not viable: requires 4 arguments, but 1 was provided
static LDBCommand* InitFromCmdLineArgs(
^
1 error generated.
Test Plan:
make reduce_levels_test
./reduce_levels_test
Reviewers: haobo, sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19251
Summary:
Fixed the following compile error.
tools/reduce_levels_test.cc:89:31: error: no matching function for call to 'InitFromCmdLineArgs'
LDBCommand* level_reducer = LDBCommand::InitFromCmdLineArgs(args);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./util/ldb_cmd.h:56:22: note: candidate function not viable: requires 3 arguments, but 1 was provided
static LDBCommand* InitFromCmdLineArgs(
^
./util/ldb_cmd.h:62:22: note: candidate function not viable: requires 4 arguments, but 1 was provided
static LDBCommand* InitFromCmdLineArgs(
^
1 error generated.
Test Plan:
make reduce_levels_test
./reduce_levels_test
Reviewers: haobo, ljin, sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19251
Summary:
This diff fixes the following compilation error on Mac.
./third-party/rapidjson/reader.h:422:31: error: comparison of constant 256 with expression of type 'Ch' (aka 'char') is always true
[-Werror,-Wtautological-constant-out-of-range-compare]
if ((sizeof(Ch) == 1 || e < 256) && escape[(unsigned char)e])
~ ^ ~~~
1 error generated.
Test Plan: make db_test
Reviewers: haobo, sdong, igor, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19245
Summary: Currently the ldb tool dumps keys either in ASCII or hex format; neither is ideal if the key has a binary structure and is not readable as ASCII. This diff adds a customizable key formatter and also allows the LDB tool to be customized in ways beyond DB options.
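For illustration, a possible shape for such a pluggable key formatter (a hypothetical interface; the actual one added by this diff may look different):

  #include <string>

  // Given the raw, possibly binary key bytes, produce a human-readable
  // string for ldb to print.
  class KeyFormatter {
   public:
    virtual ~KeyFormatter() {}
    virtual std::string Format(const std::string& raw_key) const = 0;
  };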
Test Plan: Verify that the key formatter works with a simple DB that has binary keys.
Reviewers: sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19209
Summary: as title
Test Plan: make release
Reviewers: haobo, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19227
Summary:
Add one entry to HISTORY.md that previously failed to be added.
Move two other entries to the right location.
Test Plan: Only document
Reviewers: haobo, ljin
Subscribers: dhruba, yhchiang, leveldb
Differential Revision: https://reviews.facebook.net/D19233
Summary: as requested by Mark
Test Plan: make release
Reviewers: sdong, haobo
Reviewed By: haobo
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19221
Summary:
Add HISTORY-JAVA.md, which describes important changes of RocksJava, and
include both the C++ and Java history files in the .jar file.
Test Plan: make rocksdbjava and make sure HISTORY.md is inside the .jar file
Reviewers: haobo, sdong, ankgup87
Reviewed By: ankgup87
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19185
Summary:
Improve the Javadoc of RocksObject to explain more about the life cycle
of a native handle.
Test Plan: no code change
Reviewers: haobo, sdong, ankgup87
Reviewed By: ankgup87
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19179
Summary:
After evaluating options for JSON storage, I decided to implement our own. The reason is that we'll be able to optimize it better and we get to reduce unnecessary dependencies (which is what we'd get with folly).
I also plan to write a serializer/deserializer for JSONDocument with our own binary format similar to BSON. That way we'll store binary JSON format in RocksDB instead of the plain-text JSON. This means less storage and faster deserialization.
There are still some inefficiencies left here. I plan to optimize them after we develop a functioning DocumentDB. That way we can move and iterate faster.
Test Plan: added a unit test
Reviewers: dhruba, haobo, sdong, ljin, yhchiang
Reviewed By: haobo
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D18831
Summary:
As discussed in our internal group, we don't get much use of seek compaction at the moment, while it's making code more complicated and slower in some cases.
This diff removes seek compaction and (hopefully) all code that was introduced to support seek compaction.
There is one test case that relied on didIO information. I'll try to find another way to implement it.
Test Plan: make check
Reviewers: sdong, haobo, yhchiang, ljin, dhruba
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19161
Summary:
We decided that one of the long term goals is to unify level and universal compaction.
As a small first step, I'm unifying level 0 sorting methods.
Previously, we sorted level 0 files in level compaction by file number and in universal compaction by sequence number.
But it turns out that in level compaction, sorting by file number is exactly the same as sorting by sequence number.
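A minimal sketch of the unified ordering (the type and names are illustrative only, not the ones in the patch):

  #include <cstdint>

  struct SimpleFileMeta {
    uint64_t file_number;
    uint64_t largest_seqno;
  };

  // In level-style compaction a newer level-0 file always has both a larger
  // file number and a larger sequence number, so ordering by the largest
  // sequence number (what universal compaction already does) yields exactly
  // the same order as ordering by file number.
  bool NewestFirst(const SimpleFileMeta& a, const SimpleFileMeta& b) {
    return a.largest_seqno > b.largest_seqno;
  }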
Test Plan:
Ran make check with a bunch of asserts to verify the sorting order is exactly the same.
Also, make check with this patch
Reviewers: haobo, yhchiang, ljin, dhruba, sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19131
Summary: Currently, RocksDB returns an error if a DB written with a prefix hash index is later opened without providing a prefix extractor. This is unnecessarily harsh. Without a prefix extractor, we can always fall back to the normal binary index.
Test Plan: unit test; also manually verified in the LOG that the fallback did occur.
Reviewers: sdong, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19191
Summary:
Currently, when something bad happens in DB::Write() while the write queue
contains more than one element, the current design forgets to clean up
the queue and wake up all the writers, which can potentially make RocksDB
hang on writes.
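A minimal sketch of the intended recovery path (the Writer and queue types here are simplified stand-ins, not the actual ones in db_impl):

  #include <condition_variable>
  #include <deque>

  struct Status { bool ok = true; };

  struct Writer {
    Status status;
    bool done = false;
    std::condition_variable cv;
  };

  // If a write fails while other writers are queued behind it, hand the
  // error to every queued writer and wake each one up, so none of them
  // blocks forever on a queue that will no longer make progress.
  void FailQueuedWriters(std::deque<Writer*>* writers, const Status& error) {
    while (!writers->empty()) {
      Writer* w = writers->front();
      writers->pop_front();
      w->status = error;
      w->done = true;
      w->cv.notify_one();
    }
  }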
Test Plan: make all check
Reviewers: sdong, ljin, igor, haobo
Reviewed By: haobo
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19167
Summary: asan_crash_test is failing with a segfault
Test Plan: running asan_crash_test
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19149
Summary: Fix a bug in D19047 that caused DBTest.RecoverDuringMemtableCompaction to fail.
Test Plan: unit test
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19155
Summary:
Add an encoding feature to PlainTable that encodes keys so that repeated prefixes take fewer bytes.
The data format is documented in table/plain_table_factory.h
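For intuition only, a toy prefix-sharing encoder (the real on-disk format is the one documented in table/plain_table_factory.h, not this):

  #include <string>
  #include <vector>

  // Within a run of keys that share a prefix, keep the full key once and
  // then store only the suffixes, so the repeated prefix bytes are not
  // written again.
  std::vector<std::string> EncodePrefixRun(const std::vector<std::string>& keys,
                                           size_t prefix_len) {
    std::vector<std::string> encoded;
    std::string current_prefix;
    bool have_prefix = false;
    for (const std::string& k : keys) {
      std::string p = k.substr(0, prefix_len);
      if (!have_prefix || p != current_prefix) {
        current_prefix = p;
        have_prefix = true;
        encoded.push_back(k);                     // full key starts a new run
      } else {
        encoded.push_back(k.substr(prefix_len));  // suffix only
      }
    }
    return encoded;
  }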
Test Plan: Add unit test coverage in plain_table_db_test
Reviewers: yhchiang, igor, dhruba, ljin, haobo
Reviewed By: haobo
Subscribers: nkg-, leveldb
Differential Revision: https://reviews.facebook.net/D18735
Summary:
Currently, the in-memory hash index of the block-based table uses a precise hash map to track the prefix-to-block-range mapping. In some use cases, especially when the prefix itself is big, the memory overhead becomes a problem. This diff introduces a fixed hash bucket array that does not store the prefix and allows prefix collisions, similar to the PlainTable hash index, in order to reduce memory consumption.
Just a quick draft, still testing and refining.
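To illustrate the idea (the names and layout here are invented for the example, not the ones in the diff):

  #include <cstdint>
  #include <functional>
  #include <string>
  #include <vector>

  // Instead of a precise map keyed by the full prefix, keep a fixed-size
  // array of buckets indexed by hash(prefix) % num_buckets. The prefix is
  // not stored, so colliding prefixes share a bucket and the reader falls
  // back to searching the wider block range the bucket points to.
  class FixedBucketPrefixIndex {
   public:
    explicit FixedBucketPrefixIndex(size_t num_buckets)
        : buckets_(num_buckets, UINT32_MAX) {}

    // Record the first data block that contains keys with this prefix.
    void Add(const std::string& prefix, uint32_t first_block_id) {
      uint32_t& slot = buckets_[Bucket(prefix)];
      if (first_block_id < slot) {
        slot = first_block_id;
      }
    }

    // Returns UINT32_MAX when no prefix hashed into this bucket.
    uint32_t Lookup(const std::string& prefix) const {
      return buckets_[Bucket(prefix)];
    }

   private:
    size_t Bucket(const std::string& prefix) const {
      return std::hash<std::string>()(prefix) % buckets_.size();
    }
    std::vector<uint32_t> buckets_;
  };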
Test Plan: unit test and shadow testing
Reviewers: dhruba, kailiu, sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19047
Summary:
This is minor, but if we keep the writing table factory as the third parameter, then when we add a new table format we'll have this situation:
1) block based factory
2) plain table factory
3) output factory
4) new format factory
I think it makes more sense to have output as the first parameter.
Also, fixed a NewAdaptiveTableFactory() call in a unit test.
Test Plan: unit test
Reviewers: sdong
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19119
Summary: Add two parameters to hash linked list to log the distribution of the number of entries across all buckets, and a sample row when there are too many entries in a single bucket.
Test Plan: Turn it on in plain_table_db_test and see the logs.
Reviewers: haobo, ljin
Reviewed By: ljin
Subscribers: leveldb, nkg-, dhruba, yhchiang
Differential Revision: https://reviews.facebook.net/D19095
Summary: The new table factory is used when users want to convert a DB from one table format to another. A user can use this table factory to open a DB written in one table format and write new files in another table format.
Test Plan: add a unit test
Reviewers: haobo, igor
Reviewed By: igor
Subscribers: dhruba, ljin, yhchiang, leveldb
Differential Revision: https://reviews.facebook.net/D19017
Summary: Add a check that max_write_buffer_number >= 2 to SanitizeOptions.
Test Plan: make all check
Reviewers: haobo, sdong, igor, ljin
Reviewed By: ljin
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19077
Summary:
We added multiple fields to FileMetaData recently and are planning to add more.
This refactoring separates out the minimum information needed for accessing a file. The new object is copyable (FileMetaData is not copyable because of its ref counter). I hope this refactoring can enable further improvements:
(1) use it to design a more efficient data structure to speed up read queries.
(2) in the future, when we add storage-level information, we can easily do the encoding instead of enlarging this structure, which might expand the memory working set for file metadata.
The definition is the same as the current EncodedFileMetaData used in the two-level iterator, so the logic in the two-level iterator is now easier to understand.
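A rough sketch of the split (field names are placeholders, not the exact ones in the diff):

  #include <cstdint>

  // The small, copyable descriptor holds only what is needed to locate and
  // read the file; the full metadata keeps the ref counter, key range, and
  // other bookkeeping and embeds a descriptor.
  struct FileDescriptor {
    uint64_t file_number = 0;
    uint64_t file_size = 0;
  };

  struct FileMetaData {
    FileDescriptor fd;  // copyable part, safe to hand to iterators
    int refs = 0;       // plus key range, seqno info, etc. in the real struct
  };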
Test Plan: make all check
Reviewers: haobo, igor, ljin
Reviewed By: ljin
Subscribers: leveldb, dhruba, yhchiang
Differential Revision: https://reviews.facebook.net/D18933
Some platforms, particularly Windows, do not have a single method that can
release both a held reader lock and a held writer lock; instead, a
separate method (ReleaseSRWLockShared or ReleaseSRWLockExclusive) must be
called in each case.
This may also be necessary to back MutexRW with a shared_mutex in C++14;
the current language proposal includes both an unlock() and an
unlock_shared() method.
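A sketch of the interface shape this implies, backed here by C++14's std::shared_timed_mutex (the actual RocksDB port class and method names may differ):

  #include <shared_mutex>

  // Separate unlock methods let each call map to the matching platform
  // primitive, e.g. ReleaseSRWLockShared vs. ReleaseSRWLockExclusive on
  // Windows, or unlock_shared() vs. unlock() on a shared mutex.
  class RWMutexSketch {
   public:
    void ReadLock()    { mu_.lock_shared(); }
    void ReadUnlock()  { mu_.unlock_shared(); }
    void WriteLock()   { mu_.lock(); }
    void WriteUnlock() { mu_.unlock(); }

   private:
    std::shared_timed_mutex mu_;
  };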
Summary:
Fix a bug causing the LOG file not to be created when max_log_file_size is set.
This bug is reported in issue #174.
Test Plan:
Add TEST(AutoRollLoggerTest, LogFileExistence).
make auto_roll_logger_test
./auto_roll_logger_test
Reviewers: haobo, sdong, ljin, igor, igor2
Reviewed By: igor2
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D19053
Summary:
https://reviews.facebook.net/D17205 removed the logic that skips the file key range check when there are fewer than 3 level 0 files. This patch brings it back.
Other than that, add another small optimization to avoid checking all the levels when most higher levels don't have any files.
Test Plan: make all check
Reviewers: ljin
Reviewed By: ljin
Subscribers: yhchiang, igor, haobo, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D19035
Summary: sst_dump currently doesn't work well with PlainTable. Not sure when this started, but this diff should fix it.
Test Plan: Run sst_dump against a file that used to fail.
Reviewers: yhchiang, haobo, igor
Reviewed By: igor
Subscribers: dhruba, ljin, leveldb
Differential Revision: https://reviews.facebook.net/D19023
Summary: as title
Test Plan:
db_bench
The initial result is very promising. I will post results of complete runs.
Reviewers: dhruba, haobo, sdong, igor
Reviewed By: sdong
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D18867