Summary:
it doesn't take nullptr according to its declaration in glibc, and calling it with nullptr causes our sanitizers (UBSan, Clang analyzer) to fail.
Closes https://github.com/facebook/rocksdb/pull/2776
Differential Revision: D5683260
Pulled By: ajkr
fbshipit-source-id: 114b137ee188172f96eedc43139255cae7bee80a
Summary:
The goal is to reduce the number of histogram buckets, particularly now that we print these histograms for each column family. I chose 1.5 as the factor. We can adjust it later to either make buckets more granular or make fewer buckets.
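For intuition, here is a standalone sketch (not RocksDB's actual bucket map) of geometric bucket growth with factor 1.5; covering values up to 1e9 takes only about 50 buckets:
```
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  // Bucket upper bounds grow geometrically by 1.5x; round up so small
  // buckets still advance by at least 1.
  std::vector<uint64_t> limits{1};
  while (limits.back() < 1000000000ULL) {
    limits.push_back(std::max(limits.back() + 1,
                              static_cast<uint64_t>(limits.back() * 1.5)));
  }
  std::cout << limits.size() << " buckets cover values up to 1e9\n";
  return 0;
}
```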
Closes https://github.com/facebook/rocksdb/pull/2139
Differential Revision: D4872076
Pulled By: ajkr
fbshipit-source-id: 87790d782a605506c3d24190a028cecbd7aa564a
Summary:
Changes:
* checks if ASAN mode is on, and uses malloc and free in the constructor and destructor
Closes https://github.com/facebook/rocksdb/pull/2767
Differential Revision: D5671243
Pulled By: armishra
fbshipit-source-id: 8e4ad0f7f163400c4effa8617d3b30134119d802
Summary:
If GC kicks in between:
* a Get() reading the index entry from the base DB, and
* the same Get() reading from the blob file,
the GC can delete the corresponding blob file, making the key not found. Fortunately, we have existing logic to avoid deleting a blob file if it is referenced by a snapshot. So the fix is to explicitly create a snapshot before reading the index entry from the base DB.
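A minimal sketch of the pattern at the public API level (blob DB's internal read path differs, but the snapshot lifecycle is the same idea):
```
#include <string>

#include "rocksdb/db.h"

// Sketch only: hold a snapshot across both reads so GC, which skips blob
// files referenced by live snapshots, cannot delete the file in between.
rocksdb::Status GetUnderSnapshot(rocksdb::DB* db, const rocksdb::Slice& key,
                                 std::string* value) {
  const rocksdb::Snapshot* snap = db->GetSnapshot();
  rocksdb::ReadOptions read_options;
  read_options.snapshot = snap;
  rocksdb::Status s = db->Get(read_options, key, value);  // index + blob read
  db->ReleaseSnapshot(snap);
  return s;
}
```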
Closes https://github.com/facebook/rocksdb/pull/2754
Differential Revision: D5655956
Pulled By: yiwu-arbug
fbshipit-source-id: e4ccbc51331362542e7343175bbcbdea5830f544
Summary:
When out of space, blob DB should GC the oldest file. The current implementation GCs the newest one instead. Fixing it.
Closes https://github.com/facebook/rocksdb/pull/2757
Differential Revision: D5657611
Pulled By: yiwu-arbug
fbshipit-source-id: 56c30a4c52e6ab04551dda8c5c46006d4070b28d
Summary:
Add this counter stat to track usage of the deletion-dropping optimization. If usage is low, we can delete it to prevent bugs like #2726.
Closes https://github.com/facebook/rocksdb/pull/2761
Differential Revision: D5665421
Pulled By: ajkr
fbshipit-source-id: 881befa2d199838dac88709e7b376a43d304e3d4
Summary:
it's a new feature that'll be released in 5.8, introduced by PR #2498.
Closes https://github.com/facebook/rocksdb/pull/2759
Differential Revision: D5661923
Pulled By: ajkr
fbshipit-source-id: 9ba9f0d146c453715358ef2dd298aa7765649d7c
Summary:
With this PR, we can measure read-amp for queries where perf_context is enabled as follows:
```
SetPerfLevel(kEnableCount);
Get(1, "foo");
double read_amp = static_cast<double>(get_perf_context()->block_read_byte) / get_perf_context()->get_read_bytes;
SetPerfLevel(kDisable);
```
Our internal infra enables perf_context for a sampling of queries. So we'll be able to compute the read-amp for the sample set, which can give us a good estimate of read-amp.
Closes https://github.com/facebook/rocksdb/pull/2749
Differential Revision: D5647240
Pulled By: ajkr
fbshipit-source-id: ad73550b06990cf040cc4528fa885360f308ec12
Summary:
This fixes the existing logic for pinning L0 index partitions. The patch preloads the partitions into the block cache and pins them if they belong to level 0 and pin_l0 is set.
The drawback is that preloading all the partitions into the cache does many small IOs when direct I/O is enabled. Working on a solution for that.
Closes https://github.com/facebook/rocksdb/pull/2661
Differential Revision: D5554010
Pulled By: maysamyabandeh
fbshipit-source-id: 1e6f32a3524d71355c77d4138516dcfb601ca7b2
Summary:
Changes:
* extended the wait_txn_map to track additional information
* designed a circular buffer to store the n latest deadlocks' information (see the sketch after this list)
* added test coverage to verify the additional information tracked is accurately stored in the buffer
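A hedged sketch of the circular-buffer idea (class and method names hypothetical, not the transaction library's actual types):
```
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Fixed-capacity ring that keeps only the n most recent deadlock reports,
// overwriting the oldest. Capacity must be > 0.
template <typename DeadlockInfo>
class LatestDeadlockBuffer {
 public:
  explicit LatestDeadlockBuffer(size_t capacity) : ring_(capacity) {}

  void Add(DeadlockInfo info) {
    std::lock_guard<std::mutex> lock(mu_);
    ring_[next_ % ring_.size()] = std::move(info);
    ++next_;
  }

  // Returns the retained reports, oldest first.
  std::vector<DeadlockInfo> Snapshot() const {
    std::lock_guard<std::mutex> lock(mu_);
    size_t count = next_ < ring_.size() ? next_ : ring_.size();
    std::vector<DeadlockInfo> out;
    for (size_t i = 0; i < count; ++i) {
      out.push_back(ring_[(next_ - count + i) % ring_.size()]);
    }
    return out;
  }

 private:
  mutable std::mutex mu_;
  std::vector<DeadlockInfo> ring_;
  size_t next_ = 0;
};
```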
Closes https://github.com/facebook/rocksdb/pull/2630
Differential Revision: D5478025
Pulled By: armishra
fbshipit-source-id: 2b138de7b5a73f5ca554fc3ff8220a3be49f39e7
Summary:
Clang complains about a cast from uint64_t to int32 in db_stress. Fixing it.
Closes https://github.com/facebook/rocksdb/pull/2755
Differential Revision: D5655947
Pulled By: yiwu-arbug
fbshipit-source-id: cfac10e796e0adfef4727090b50975b0d6e2c9be
Summary:
On the initial call to BlobDBImpl::WaStats(), `all_periods_write_` would be empty, so it will crash when we call pop_front() at line 1627. Apparently it is meant to pop only when `all_periods_write_.size() > kWriteAmplificationStatsPeriods`.
The whole write amp calculation doesn't seem to be correct, and it is not being exposed. Will work on it later.
Test Plan
Change kWriteAmplificationStatsPeriodMillisecs to 1000 (1 second) and run db_bench --use_blob_db for 5 minutes.
Closes https://github.com/facebook/rocksdb/pull/2751
Differential Revision: D5648269
Pulled By: yiwu-arbug
fbshipit-source-id: b843d9a09bb5f9e1b713d101ec7b87e54b5115a4
Summary:
Updating Cassandra merge operator to make use of a single merge operand when needed. Single merge operand support has been introduced in #2721.
Closes https://github.com/facebook/rocksdb/pull/2753
Differential Revision: D5652867
Pulled By: sagar0
fbshipit-source-id: b9fbd3196d3ebd0b752626dbf9bec9aa53e3e26a
Summary:
Added a function `MergeOperator::DoesAllowSingleMergeOperand()` to allow invoking a merge operator even with a single merge operand, if overridden.
This is needed for Cassandra-on-RocksDB work. All Cassandra writes are through merges and this will allow a single merge-value to be updated in the merge-operator invoked via a compaction, if needed, due to an expired TTL.
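A hedged sketch of an operator opting in (the override name follows the summary above; check merge_operator.h in your RocksDB version for the exact virtual, as the released name may differ):
```
#include "rocksdb/merge_operator.h"

// Toy operator that keeps only the newest operand; with single-operand
// merges allowed, compaction can rewrite (e.g. TTL-expire) a lone value.
class LatestValueMergeOperator : public rocksdb::MergeOperator {
 public:
  const char* Name() const override { return "LatestValueMergeOperator"; }

  bool FullMergeV2(const MergeOperationInput& merge_in,
                   MergeOperationOutput* merge_out) const override {
    merge_out->new_value = merge_in.operand_list.back().ToString();
    return true;
  }

  // Opt in: invoked even when only one merge operand is visible. Name taken
  // from the summary above; it may differ in the shipped header.
  bool DoesAllowSingleMergeOperand() const { return true; }
};
```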
Closes https://github.com/facebook/rocksdb/pull/2721
Differential Revision: D5608706
Pulled By: sagar0
fbshipit-source-id: f299f9f91c4d1ac26e48bd5906e122c1c5e5f3fc
Summary:
fix some things that made this command hard to use from CLI:
- use default values for `target_file_size_base` and `max_bytes_for_level_base`. Previously we used small values for these but the default value of `write_buffer_size`, which led to an enormous number of L1 files.
- fail with a clear message when `value_size_mult` is too big. Previously there was just an assert, so in non-debug mode it'd overrun the value buffer and crash mysteriously.
- only print verification success if there's no failure. Before, it'd print both in the failure case.
- support `memtable_prefix_bloom_size_ratio`
- support `num_bottom_pri_threads` (universal compaction)
Closes https://github.com/facebook/rocksdb/pull/2741
Differential Revision: D5629495
Pulled By: ajkr
fbshipit-source-id: ddad97d6d4ba0884e7c0f933b0a359712514fc1d
Summary:
The range delete tombstones in the memtable should be added to the aggregator even when the memtable's prefix bloom filter tells us the lookup key's not there. This bug could cause deleted data to temporarily reappear until the memtable containing the range deletions is flushed.
Reported in #2743.
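A toy sketch of the corrected lookup order (all names hypothetical, compressed from the real memtable code):
```
#include <string>
#include <vector>

struct RangeTombstone { std::string start, end; };

struct MemtableSketch {
  std::vector<RangeTombstone> range_tombstones;
  bool BloomMayContain(const std::string&) const { return false; }
  bool PointLookup(const std::string&, std::string*) const { return false; }
};

bool GetSketch(const MemtableSketch& mem, const std::string& key,
               std::string* value, std::vector<RangeTombstone>* aggregator) {
  // The fix: hand tombstones to the aggregator unconditionally, not only
  // when the prefix bloom filter says the key may be present.
  aggregator->insert(aggregator->end(), mem.range_tombstones.begin(),
                     mem.range_tombstones.end());
  if (!mem.BloomMayContain(key)) {
    return false;  // safe to skip the point lookup; tombstones already added
  }
  return mem.PointLookup(key, value);
}
```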
Closes https://github.com/facebook/rocksdb/pull/2745
Differential Revision: D5639007
Pulled By: ajkr
fbshipit-source-id: 04fc6facb6f978340a3f639536f4ca7c0d73dfc9
Summary:
We forgot to recompute compaction scores after picking a universal compaction like we do in level compaction (a34b2e388e/db/compaction_picker.cc (L691-L695)). This leads to a fairness issue where we waste compactions on CFs/DB instances that don't need it while others can starve.
Previously, ccecf3f4fb fixed the issue for the read-amp-based compaction case; this PR avoids the issue earlier and also for size-ratio-based compactions.
Closes https://github.com/facebook/rocksdb/pull/2688
Differential Revision: D5566191
Pulled By: ajkr
fbshipit-source-id: 010bccb2a107f6a76f3d3022b90aadce5cc48feb
Summary:
Implement the main body of the WritePrepared pseudocode. This includes PrepareInternal and CommitInternal, as well as AddCommitted, which updates the commit map. It also provides an IsInSnapshot method that could later be called from the read path to decide whether a version is in the read snapshot or should otherwise be skipped.
This patch lacks unit tests and does not attempt to offer an efficient implementation. The idea is to have the API specified so that we can work on related tasks in parallel.
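A minimal sketch of the visibility rule, with a plain std::map standing in for the real commit map (which is a fixed-size, concurrent structure):
```
#include <cstdint>
#include <map>

struct CommitMapSketch {
  std::map<uint64_t, uint64_t> prepare_to_commit;  // prepare_seq -> commit_seq

  void AddCommitted(uint64_t prepare_seq, uint64_t commit_seq) {
    prepare_to_commit[prepare_seq] = commit_seq;
  }

  // A version written at prepare_seq is visible to a snapshot iff it was
  // committed at or before the snapshot's sequence number.
  bool IsInSnapshot(uint64_t prepare_seq, uint64_t snapshot_seq) const {
    if (prepare_seq > snapshot_seq) return false;  // prepared after snapshot
    auto it = prepare_to_commit.find(prepare_seq);
    if (it == prepare_to_commit.end()) return false;  // not yet committed
    return it->second <= snapshot_seq;
  }
};
```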
Closes https://github.com/facebook/rocksdb/pull/2713
Differential Revision: D5640021
Pulled By: maysamyabandeh
fbshipit-source-id: bfa7a05e8d8498811fab714ce4b9c21530514e1c
Summary:
`PartialMergeMulti` implementation is enough for Cassandra, and `PartialMerge` is not required. Implementing both will just duplicate the code.
As per https://github.com/facebook/rocksdb/blob/master/include/rocksdb/merge_operator.h#L130-L135:
```
// The default implementation of PartialMergeMulti will use this function
// as a helper, for backward compatibility. Any successor class of
// MergeOperator should either implement PartialMerge or PartialMergeMulti,
// although implementing PartialMergeMulti is suggested as it is in general
// more effective to merge multiple operands at a time instead of two
// operands at a time.
```
Closes https://github.com/facebook/rocksdb/pull/2737
Reviewed By: scv119
Differential Revision: D5633073
Pulled By: sagar0
fbshipit-source-id: ef4fa102c22fec6a0175ed12f5c44c15afe3c8ca
Summary:
Similar to the bug fixed by https://github.com/facebook/rocksdb/pull/2726, FIFO with compaction enabled and kCompactionStyleNone can suffer from the same problem during a user-customized CompactFiles() with output level 0. Fix it by leveraging the bottommost_level_ flag.
Closes https://github.com/facebook/rocksdb/pull/2735
Differential Revision: D5626906
Pulled By: siying
fbshipit-source-id: 2b148d0461c61dbd986d74655e384419ae442158
Summary:
Support a window of `active_width` keys that rolls through `[0, max_key)` over the duration of the test. Operations only affect keys inside the window. This gives us the ability to detect the L0->L0 deletion bug (#2722).
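A sketch of how such a window can advance (names and the linear schedule are hypothetical):
```
#include <cstdint>

// Returns the window's first key; operations stay in
// [start, start + active_width). Assumes
// (max_key - active_width) * elapsed_ops fits in 64 bits.
uint64_t WindowStart(uint64_t elapsed_ops, uint64_t total_ops,
                     uint64_t max_key, uint64_t active_width) {
  // Slide linearly with test progress across [0, max_key - active_width].
  return (max_key - active_width) * elapsed_ops / total_ops;
}
```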
Closes https://github.com/facebook/rocksdb/pull/2739
Differential Revision: D5628555
Pulled By: ajkr
fbshipit-source-id: 9cb2d8f4ab1a7c73f7797b8e19f7094970ea8749
Summary:
Most of the data used here in shell commands is not generated directly from user input, but some data (e.g., from environment variables) may have been externally influenced. It is good practice to escape this data before using it in a shell command.
Originally D4800264 but we never quite got it merged.
Reviewed By: yiwu-arbug
Differential Revision: D5595052
fbshipit-source-id: c09d8b47fe35fc6a47afb4933ccad9d56ca8d7be
Summary:
Some compilers require `-std=c++11` for the `cstdint` header to be available. We already have logic to add `-std=c++11` to `CXXFLAGS` when the compiler is not MSVC; simply reorder CMakeLists.txt so that logic happens before the calls to `CHECK_CXX_SOURCE_COMPILES`.
Additionally add a missing `set(CMAKE_REQUIRED_FLAGS, ...)` before a call to `CHECK_C_SOURCE_COMPILES`.
Closes https://github.com/facebook/rocksdb/pull/2535
Differential Revision: D5384244
Pulled By: yiwu-arbug
fbshipit-source-id: 2dbae4297c5d8ab4636e08b1457ffb2d3e37aef4
Summary:
With some compilers, `-std=c++11` is necessary for `<cstdint>` to be available. Pass this flag via $PLATFORM_CXXFLAGS. Fixes #2488.
Closes https://github.com/facebook/rocksdb/pull/2545
Differential Revision: D5620610
Pulled By: yiwu-arbug
fbshipit-source-id: 2f975b8c1ad52e283e677d9a33543abd064f13ce
Summary:
`KeyNotExistsBeyondOutputLevel` didn't consider L0 files' key-ranges. So if a key was only covered by older L0 files' key-ranges, we would incorrectly drop deletions of that key. This PR just skips the deletion-dropping optimization when the output level is L0.
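A sketch of the guard (function name hypothetical):
```
// KeyNotExistsBeyondOutputLevel() ignores older L0 files' key ranges, so
// deletions must be kept whenever the compaction outputs to L0.
bool CanDropDeletion(int output_level, bool key_not_exists_beyond_output) {
  if (output_level == 0) {
    return false;  // an older L0 file may still contain this key
  }
  return key_not_exists_beyond_output;
}
```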
Closes https://github.com/facebook/rocksdb/pull/2726
Differential Revision: D5617286
Pulled By: ajkr
fbshipit-source-id: 4bff1396b06d49a828ba4542f249191052915bce
Summary:
- like other subcommands, reporting compression sizes should be specified with the `--command` CLI arg.
- also added `--compression_types` arg as it's useful to restrict the types of compression used, at least in my dictionary compression experiments.
Closes https://github.com/facebook/rocksdb/pull/2706
Differential Revision: D5589520
Pulled By: ajkr
fbshipit-source-id: 305bb4ebcc95eecc8a85523cd3b1050619c9ddc5
Summary:
Previously we could only select the CF on which to operate uniformly at random. This is a limitation, e.g., when testing universal compaction as all CFs would need to run full compaction at roughly the same time, which isn't realistic.
This PR allows the user to specify the probability distribution for selecting CFs via the `--column_family_distribution` argument.
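A sketch of weighted selection (names hypothetical): draw a CF index with probability proportional to its weight instead of uniformly:
```
#include <cstddef>
#include <random>
#include <vector>

size_t PickColumnFamily(const std::vector<double>& weights, std::mt19937& rng) {
  // Picks index i with probability weights[i] / sum(weights).
  std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
  return dist(rng);
}
```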
Closes https://github.com/facebook/rocksdb/pull/2677
Differential Revision: D5544436
Pulled By: ajkr
fbshipit-source-id: 478d56260995236ae90895ce5bd51f38882e185a
Summary:
Sounds like we're willing to trade off minor inaccuracy in stats for speed. Start with histogram stats. Ticker stats will be harder (and, IMO, we shouldn't change them in this manner) as many test cases rely on them being exactly correct.
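A hedged sketch of the tradeoff (not the library's actual histogram code): bump counters with relaxed atomics and no lock, accepting small races for speed:
```
#include <atomic>
#include <cstdint>

struct HistogramBucketSketch {
  std::atomic<uint64_t> count{0};
  std::atomic<uint64_t> sum{0};

  void Add(uint64_t value) {
    // Two independent relaxed updates: a reader may observe one without the
    // other, which is the minor inaccuracy accepted here.
    count.fetch_add(1, std::memory_order_relaxed);
    sum.fetch_add(value, std::memory_order_relaxed);
  }
};
```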
Closes https://github.com/facebook/rocksdb/pull/2720
Differential Revision: D5607884
Pulled By: ajkr
fbshipit-source-id: 1b754cda35ea6b252d1fdd5aa3cfb58866506372
Summary:
During GC, blob DB uses an optimistic transaction to delete or replace the index entry in the LSM, to guarantee correctness if there's a normal write writing to the same key. However, the previous implementation called neither SetSnapshot() nor GetForUpdate() of the transaction API; instead it did its own sequence number check before beginning the transaction. A normal write can sneak in after the sequence number check and overwrite the key, and the GC will delete or relocate the old version of the key by mistake. Update the code to properly use GetForUpdate() to check the existing index entry.
After the patch, the sequence number stored with each blob record is useless, so I'm considering removing the sequence number from blob records in another patch.
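A sketch of the GC pattern described above at the transaction API level (not blob DB's exact code):
```
#include <string>

#include "rocksdb/utilities/optimistic_transaction_db.h"
#include "rocksdb/utilities/transaction.h"

// GetForUpdate() puts `key` in the transaction's read set, so a concurrent
// write to the same key makes Commit() fail instead of letting GC clobber
// the newer version.
rocksdb::Status RelocateIndexEntry(rocksdb::OptimisticTransactionDB* txn_db,
                                   const rocksdb::Slice& key,
                                   const rocksdb::Slice& new_index_entry) {
  rocksdb::Transaction* txn =
      txn_db->BeginTransaction(rocksdb::WriteOptions());
  std::string existing;
  rocksdb::Status s =
      txn->GetForUpdate(rocksdb::ReadOptions(), key, &existing);
  if (s.ok()) {
    s = txn->Put(key, new_index_entry);
  }
  if (s.ok()) {
    s = txn->Commit();  // fails if another writer touched `key`
  }
  delete txn;
  return s;
}
```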
Closes https://github.com/facebook/rocksdb/pull/2703
Differential Revision: D5589178
Pulled By: yiwu-arbug
fbshipit-source-id: 8dc960cd5f4e61b36024ba7c32d05584ce149c24
Summary:
It would be super helpful to not have to recompile rocksdb to get this performance tweak for mechanical disks.
I have signed the CLA.
Closes https://github.com/facebook/rocksdb/pull/2718
Differential Revision: D5606994
Pulled By: yiwu-arbug
fbshipit-source-id: c05e92bad0d03bd38211af1e1ced0d0d1e02f634
Summary:
Right now, if direct I/O is enabled, prefetching the last 512KB cannot be applied, except for compaction inputs or when readahead is enabled for iterators. This can create a lot of I/O for HDD cases. To solve the problem, the 512KB is prefetched in block-based table if direct I/O is enabled. The prefetched buffer is passed in together with the random access file reader, so that we try to read from the buffer before reading from the file. This can be extended in the future to support flexible user iterator readahead too.
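A sketch of the buffer-first read (types hypothetical): serve the read from the prefetched tail when fully covered, else fall back to the file:
```
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>

struct PrefetchedTail {
  uint64_t file_offset;  // where the buffered bytes start within the file
  std::string data;      // the prefetched bytes, e.g. the file's last 512KB

  // Copies n bytes at `offset` into `scratch` if the range is fully
  // buffered; otherwise the caller must issue a real (direct) file read.
  bool TryRead(uint64_t offset, size_t n, char* scratch) const {
    if (offset < file_offset || offset + n > file_offset + data.size()) {
      return false;
    }
    std::memcpy(scratch, data.data() + (offset - file_offset), n);
    return true;
  }
};
```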
Closes https://github.com/facebook/rocksdb/pull/2708
Differential Revision: D5593091
Pulled By: siying
fbshipit-source-id: ee36ff6d8af11c312a2622272b21957a7b5c81e7
Summary:
TSAN shows an error when we grab too many locks at the same time. In the TSAN crash test, make one shard key cover 2^22 keys so that not too many locks are held at the same time.
Closes https://github.com/facebook/rocksdb/pull/2719
Differential Revision: D5609035
Pulled By: siying
fbshipit-source-id: 930e5d63fff92dbc193dc154c4c615efbdf06c6a
Summary:
- moved the max call for numeric limits into parentheses so that max won't be expanded as a macro when including <Windows.h> (see the sketch below)
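For reference, the pattern (a known workaround, shown standalone):
```
#include <cstdint>
#include <limits>

// Without the extra parentheses, <Windows.h>'s max() function-like macro
// would expand here; (std::numeric_limits<T>::max)() prevents that because
// the macro name is not followed directly by '('.
const uint64_t kMaxUint64 = (std::numeric_limits<uint64_t>::max)();
```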
Closes https://github.com/facebook/rocksdb/pull/2709
Differential Revision: D5600773
Pulled By: yiwu-arbug
fbshipit-source-id: fd28b6f7c10ddce21bad4030f2db06f965bb08da