Summary:
RocksDB supports a two-level cache hierarchy (see https://rocksdb.org/blog/2021/05/27/rocksdb-secondary-cache.html), where items evicted from the primary cache can be spilled over to the secondary cache, or items from the secondary cache can be promoted to the primary one. We have a CacheLib-based non-volatile secondary cache implementation that can be used to improve read latencies and reduce network bandwidth usage when using distributed file systems. In addition, we have recently implemented a compressed secondary cache that can be used as a replacement for the OS page cache when e.g. direct I/O is used. The goals of this task are to add support for using a secondary cache with the blob cache and to measure the potential performance gains using `db_bench`.
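To make the intended setup concrete, here is a minimal sketch, assuming the RocksDB 7.x public cache API (`CompressedSecondaryCacheOptions`, `LRUCacheOptions::secondary_cache`, `Options::blob_cache`); the capacities are illustrative only:
```cpp
#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/options.h"

// Wire a compressed secondary cache underneath the blob cache's
// primary LRU tier.
rocksdb::Options MakeBlobCacheOptions() {
  rocksdb::CompressedSecondaryCacheOptions secondary_opts;
  secondary_opts.capacity = 256 << 20;  // 256 MiB secondary tier

  rocksdb::LRUCacheOptions primary_opts;
  primary_opts.capacity = 64 << 20;  // 64 MiB primary tier
  primary_opts.secondary_cache =
      rocksdb::NewCompressedSecondaryCache(secondary_opts);

  rocksdb::Options options;
  options.enable_blob_files = true;
  options.blob_cache = rocksdb::NewLRUCache(primary_opts);
  return options;
}
```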
This task is a part of https://github.com/facebook/rocksdb/issues/10156
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10349
Reviewed By: ltamasi
Differential Revision: D37896773
Pulled By: gangliao
fbshipit-source-id: 7804619ce4a44b73d9e11ad606640f9385969c84
Summary:
This is a prototype of a partially lock-free version of ClockCache. Roughly speaking, reads are lock-free and writes are lock-based:
- Lookup is lock-free.
- Release is lock-free, unless (i) no references to the element are left and (ii) it was marked for deletion or ``erase_if_last_ref`` is set.
- Insert and Erase still use a per-shard lock.
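For illustration, a minimal sketch of the lock-free Release fast path described above, using an atomic reference count; the struct and function names are hypothetical, not the actual ClockCache internals:
```cpp
#include <atomic>
#include <cstdint>

// Hypothetical handle with an atomic reference count.
struct Handle {
  std::atomic<uint32_t> refs{1};
  std::atomic<bool> marked_for_deletion{false};
};

// Returns true if the caller must fall back to the slow (locked) path,
// i.e. this was the last reference and the entry must be destroyed.
bool ReleaseFast(Handle* h, bool erase_if_last_ref) {
  const uint32_t prev = h->refs.fetch_sub(1, std::memory_order_acq_rel);
  const bool last_ref = (prev == 1);
  return last_ref && (h->marked_for_deletion.load() || erase_if_last_ref);
}
```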
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10347
Test Plan:
- ``make -j24 check``
- ``make -j24 CRASH_TEST_EXT_ARGS="--duration=960 --cache_type=clock_cache --cache_size=1073741824 --block_size=16384" blackbox_crash_test_with_atomic_flush``
Reviewed By: pdillinger
Differential Revision: D37898776
Pulled By: guidotag
fbshipit-source-id: 6418fd980f786d69b871bf2fe959398e44cd3d80
Summary:
Use the sequence number to time mapping to decide whether a key is hot in
compaction and place it in the corresponding level.
Note: the feature is not complete; level compaction will run indefinitely until
all penultimate level data is cold and small enough to not trigger compaction.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10370
Test Plan:
CI
* Run basic db_bench for universal compaction manually
Reviewed By: siying
Differential Revision: D37892338
Pulled By: jay-zhuang
fbshipit-source-id: 792bbd91b1ccc2f62b5d14c53118434bcaac4bbe
Summary:
Sometimes we may not want to include extra computation in our cache_bench experiments. Here we add a flag to avoid any extra work. We also moved the timer start to after key generation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10363
Test Plan: Run cache_bench with and without the new flag and check that the appropriate code is being executed.
Reviewed By: pdillinger
Differential Revision: D37870416
Pulled By: guidotag
fbshipit-source-id: f853207b6643b9328e774251c3f679b1fd78a11a
Summary:
DB::PutEntity() is declared pure virtual (`= 0`), yet it is actually implemented in db/db_impl/db_impl_write.cc. This is incorrect, and might cause problems when users implement the DB class themselves.
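The shape of the fix, sketched on a stand-in class: give PutEntity a real default body instead of declaring it `= 0`. The body shown here is an assumption, not the committed code; see include/rocksdb/db.h for the real declaration.
```cpp
#include "rocksdb/db.h"
#include "rocksdb/wide_columns.h"

class DbSketch {
 public:
  virtual ~DbSketch() = default;
  // Before: virtual Status PutEntity(...) = 0;  // pure, yet implemented
  // After: a real default body, so user subclasses keep working.
  virtual rocksdb::Status PutEntity(const rocksdb::WriteOptions& /*options*/,
                                    rocksdb::ColumnFamilyHandle* /*cf*/,
                                    const rocksdb::Slice& /*key*/,
                                    const rocksdb::WideColumns& /*columns*/) {
    return rocksdb::Status::NotSupported("PutEntity not implemented");
  }
};
```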
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10364
Test Plan: See existing tests pass
Reviewed By: riversand963
Differential Revision: D37874886
fbshipit-source-id: b81713ddb707720b52d57a15de56a59414c24f66
Summary:
This will be used for tiered storage to preclude hot data from
compacting to the cold tier (the last level).
Internally, this adds a seqno to time mapping. A periodic_task is scheduled
to record the current_seqno -> current_time mapping at a certain cadence. When a
memtable is flushed, the mapping information is stored as an SST file property.
During compaction, the mapping information is merged to obtain the
approximate time of a sequence number, which is used to determine whether a key
was recently inserted and to preclude it from the last level if it was
recently inserted (within `preclude_last_level_data_seconds`).
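A minimal sketch of turning this on, using the `preclude_last_level_data_seconds` option named above; the 600-second threshold is illustrative:
```cpp
#include "rocksdb/options.h"

// Keep data written within the last 10 minutes off the last (cold) level.
void ConfigurePreclude(rocksdb::Options& options) {
  options.preclude_last_level_data_seconds = 600;
}
```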
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10338
Test Plan: CI
Reviewed By: siying
Differential Revision: D37810187
Pulled By: jay-zhuang
fbshipit-source-id: 6953be7a18a99de8b1cb3b162d712f79c2b4899f
Summary:
Some items are misplaced under 7.4 even though they are unreleased.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10362
Reviewed By: jay-zhuang
Differential Revision: D37859426
fbshipit-source-id: e2ad099227309ed2e0f3ca450a9a43986d681c7c
Summary:
InternalKeyComparator is an internal class which is a simple wrapper around Comparator. https://github.com/facebook/rocksdb/pull/8336 made Comparator customizable. As a side effect, the internal key comparator was made configurable too. This introduces overhead to this simple wrapper. For example, every InternalKeyComparator will have an std::vector attached to it, which consumes memory and possibly incurs allocation overhead too.
We stop InternalKeyComparator from being customizable by making it no longer a subclass of Comparator.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10342
Test Plan: Run existing CI tests and make sure they don't fail
Reviewed By: riversand963
Differential Revision: D37771351
fbshipit-source-id: 917256ee04b2796ed82974549c734fb6c4d8ccee
Summary:
Support per_key_placement for last level compaction, which will
be used for tiered compaction.
* the compaction iterator reports which level a key should be output to;
* the compaction gets the output level information and checks whether it is safe to
output the data to the penultimate level;
* all compaction output files will be installed;
* extra internal compaction stats are added for the penultimate level.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9964
Test Plan:
* Unittest
* db_bench, no significant difference: https://gist.github.com/jay-zhuang/3645f8fb97ec0ab47c10704bb39fd6e4
* microbench manual compaction, no significant difference: https://gist.github.com/jay-zhuang/ba679b3e89e24992615ee9eef310e6dd
* run the db_stress multiple times (not covering the new feature) looks good (internal: https://fburl.com/sandcastle/9w84pp2m)
Reviewed By: ajkr
Differential Revision: D36249494
Pulled By: jay-zhuang
fbshipit-source-id: a96da57c8031c1df83e4a7a8567b657a112b80a3
Summary:
This complements https://github.com/facebook/rocksdb/issues/10351. This PR reverts NewClockCache's signature to an older version, expected by the users of the old (buggy) ClockCache.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10358
Test Plan: ``make -j24 check`` and re-run the pre-release tests.
Reviewed By: siying
Differential Revision: D37832601
Pulled By: guidotag
fbshipit-source-id: 32a91d3da4119be187935003b7b897272ceb1950
Summary:
Enabled the zstd checksum flag in StreamingCompress so that WAL (de)compression is protected by a checksum per compression frame.
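For illustration, the zstd flag in question, shown via the raw zstd API rather than RocksDB's internal StreamingCompress wrapper:
```cpp
#include <zstd.h>

// Each compressed frame will carry a frame checksum that zstd verifies
// automatically during decompression.
void EnableFrameChecksum(ZSTD_CCtx* cctx) {
  ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
}
```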
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10319
Test Plan:
- `make check`
- WAL perf: average ops/sec over 10 runs is 161226 pre PR and 159635 post PR (1% drop).
```
sudo TEST_TMPDIR=/dev/shm/memtable_write ./db_bench_checksum -benchmarks=fillseq -max_write_buffer_number=100 -num=1000000 -min_write_buffer_number_to_merge=10 -wal_compression=zstd
```
Reviewed By: ajkr
Differential Revision: D37673311
Pulled By: cbi42
fbshipit-source-id: 9f34a3bfc2a82e5c80b1ec63bb339a7465108ec9
Summary:
If the primary cache is an LRU cache and there is a secondary cache, add the secondary cache's printable options to LRUCache::GetPrintableOptions.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10346
Test Plan:
1. Current Unit Tests should pass.
2. Use db_bench (with compressed_secondary_cache) and check that the LOG includes the new printable options from the secondary cache.
Reviewed By: jay-zhuang
Differential Revision: D37779310
Pulled By: gitbw95
fbshipit-source-id: 88ce1f7df6b5f25740e598d9e7fa91e4c414cb8f
Summary:
ClockCache is still in experimental stage, and currently fails some pre-release fbcode tests. See https://www.internalfb.com/diff/D37772011. API calls to construct ClockCache are done via the function NewClockCache. For now, NewClockCache calls will return an LRUCache (with appropriate arguments), which is stable.
The idea of having NewClockCache return nullptr was also floated, but this would be interpreted as an unsupported cache, and a default LRUCache would be constructed instead, potentially causing a performance regression that is harder to identify.
A new version of the NewClockCache function was created for our internal tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10351
Test Plan: ``make -j24 check`` and re-run the pre-release tests.
Reviewed By: pdillinger
Differential Revision: D37802685
Pulled By: guidotag
fbshipit-source-id: 0a8d10612ff21e576f7360cb13e20bc36e244972
Summary:
InternalKeyComparator is a thin wrapper around the user comparator. Storing a string for the name is relatively expensive for this small wrapper in both CPU and memory usage, so we remove it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10343
Test Plan: Run existing tests
Reviewed By: ajkr
Differential Revision: D37772469
fbshipit-source-id: d2d106a8d022193058fd7f6b220108e3d94aca34
Summary:
as title.
Test plan
- make check
- CI on PR
- TEST_TMPDIR=/dev/shm make crash_test_with_multiops_wp_txn (tested with successful run)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10350
Reviewed By: ajkr
Differential Revision: D37792872
Pulled By: riversand963
fbshipit-source-id: ff064093b7f715d0acf387af2e3ae87b1278b52b
Summary:
Add `ReserveThreads` and `ReleaseThreads` functions in the thread pool to support reservations for a specific thread pool. With this feature, a thread will be blocked if the number of waiting threads (noted by `num_waiting_threads_`) equals the number of reserved threads (noted by `reserved_threads_`). Normally, `reserved_threads_` is upper bounded by `num_waiting_threads_`; in rare cases (e.g. `SetBackgroundThreadsInternal` is called when some threads are already reserved), `num_waiting_threads_` can be less than `reserved_threads_`.
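A hedged sketch of how callers might use the new reservation API; the exact signatures live in include/rocksdb/env.h, and the assumption here is that both calls return the number of threads actually reserved or released:
```cpp
#include "rocksdb/env.h"

void RunWithReservedThreads(rocksdb::Env* env) {
  const int reserved = env->ReserveThreads(2, rocksdb::Env::Priority::LOW);
  // ... schedule work that relies on the reserved low-priority threads ...
  env->ReleaseThreads(reserved, rocksdb::Env::Priority::LOW);
}
```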
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10278
Test Plan: Add a `ReserveThreads` unit test in `env_test`. Update the unit test `SimpleColumnFamilyInfoTest` in `thread_list_test` by adding `ReserveThreads`-related assertions.
Reviewed By: hx235
Differential Revision: D37640946
Pulled By: littlepig2013
fbshipit-source-id: 4d691f6b9a433569f96ab52d52c3defe5b065367
Summary:
The blob cache enables an optimization on the read path: when a blob is found in the cache, we can avoid copying it into the buffer provided by the application. Instead, we can simply transfer ownership of the cache handle to the target `PinnableSlice`. (Note: this relies on the `Cleanable` interface, which is implemented by `PinnableSlice`.)
This has the potential to save a lot of CPU, especially with large blob values.
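A sketch of the ownership-transfer pattern described above, built on the public `Cleanable`/`PinnableSlice` API; the helper names are hypothetical, not the actual BlobSource code:
```cpp
#include "rocksdb/cache.h"
#include "rocksdb/slice.h"

// Cleanup callback: drop the cache reference when the slice goes away.
void ReleaseCacheHandle(void* cache, void* handle) {
  static_cast<rocksdb::Cache*>(cache)->Release(
      static_cast<rocksdb::Cache::Handle*>(handle));
}

// No memcpy: the slice points into the cached buffer, and the cleanup
// callback releases the cache handle when the slice is reset/destroyed.
void PinCachedBlob(rocksdb::Cache* cache, rocksdb::Cache::Handle* handle,
                   const rocksdb::Slice& blob_value,
                   rocksdb::PinnableSlice* out) {
  out->PinSlice(blob_value, &ReleaseCacheHandle, cache, handle);
}
```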
This task is a part of https://github.com/facebook/rocksdb/issues/10156
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10297
Reviewed By: riversand963
Differential Revision: D37640311
Pulled By: gangliao
fbshipit-source-id: 92de0e35cc703d06c87c5c1861cc2899ec52234a
Summary:
When an element is first inserted into the ClockCache, it is now assigned either medium or high clock priority, depending on whether its cache priority is low or high, respectively. This is a variant of LRUCache's midpoint insertions. The main difference is that LRUCache can specify the capacity allocated to high-priority elements via the ``high_pri_pool_ratio`` parameter. By contrast, in ClockCache, low- and high-priority elements compete for all cache slots, and one group can take over the other (of course, it takes more low-priority insertions to push out high-priority elements). However, just like LRUCache, ClockCache provides the following guarantee: a high-priority element will not be evicted before a low-priority element that was inserted earlier in time.
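An illustrative sketch of the insertion-priority mapping described above; the enum and function are hypothetical, not the actual ClockCache internals:
```cpp
#include "rocksdb/cache.h"

enum class ClockPriority { kLow, kMedium, kHigh };

ClockPriority InitialClockPriority(rocksdb::Cache::Priority cache_pri) {
  // High cache priority starts at high clock priority; low cache priority
  // starts at medium, so a fresh low-priority entry still outranks entries
  // already demoted to low.
  return cache_pri == rocksdb::Cache::Priority::HIGH ? ClockPriority::kHigh
                                                     : ClockPriority::kMedium;
}
```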
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10305
Test Plan: ``make -j24 check``
Reviewed By: pdillinger
Differential Revision: D37607787
Pulled By: guidotag
fbshipit-source-id: 24d9f2523d2f4e6415e7f0029cc061fa275c2040
Summary:
The earlier implementation of cutting the output files with a compact cursor under Round-Robin priority used `Valid()` to determine whether the `output_split_key` is valid in `ShouldStopBefore`. This contributed to excessive CPU computation, as pointed out by [this issue](https://github.com/facebook/rocksdb/issues/10315). In this PR, we change the type of `output_split_key` to `InternalKey*` and set it to `nullptr` if it is not going to be used in `ShouldStopBefore`, so the `Valid()` check can be replaced by a cheap null check.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10316
Reviewed By: ajkr
Differential Revision: D37661492
Pulled By: littlepig2013
fbshipit-source-id: 66ff1105f3378e5573d3a126fdaff9bb23b5498f
Summary:
I noticed it would clean up some things to have Cache::Insert()
return our MemoryLimit Status instead of Incomplete for the case in
which the capacity limit is reached. I suspect this fixes some existing but
unknown bugs where this Incomplete could be confused with other uses
of Incomplete, especially no_io cases. This is the most suspicious case I
noticed, but was not able to reproduce a bug, in part because the existing
code is not covered by unit tests (FIXME added): 57adbf0e91/table/get_context.cc (L397)
I audited all the existing uses of IsIncomplete and updated those that
seemed relevant.
HISTORY is updated with a clear warning to users of strict_capacity_limit=true
to update their uses of `IsIncomplete()`.
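A sketch of the caller-side change that warning asks for: with strict_capacity_limit=true, a full cache now reports MemoryLimit where callers previously checked IsIncomplete(). The helper here is illustrative, not code from this PR:
```cpp
#include "rocksdb/cache.h"

bool TryCacheInsert(rocksdb::Cache* cache, const rocksdb::Slice& key,
                    void* value, size_t charge,
                    void (*deleter)(const rocksdb::Slice&, void*)) {
  rocksdb::Status s = cache->Insert(key, value, charge, deleter);
  if (s.IsMemoryLimit()) {  // formerly: s.IsIncomplete()
    return false;  // capacity limit reached; fall back to the uncached path
  }
  return s.ok();
}
```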
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10262
Test Plan: updated unit tests
Reviewed By: hx235
Differential Revision: D37473155
Pulled By: pdillinger
fbshipit-source-id: 4bd9d9353ccddfe286b03ebd0652df8ce20f99cb
Summary:
This allows users to pass their git command with extra options if necessary.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10318
Reviewed By: ajkr
Differential Revision: D37661175
Pulled By: lth
fbshipit-source-id: 2a7cf27626c74f167471e6ec57e3870630a582b0
Summary:
…ta blocks
During MyShadow testing, ajkr helped me find out that with partitioned index and dictionary compression enabled, `PartitionedIndexIterator::InitPartitionedIndexBlock()` spent a considerable amount of time (1-2% CPU) on fetching the uncompression dictionary. Fetching the uncompression dictionary was not needed since the index blocks were not compressed (and even if they were, they would use an empty dictionary). This should only affect use cases with partitioned index, dictionary compression, and without the uncompression dictionary pinned. This PR updates NewDataBlockIterator to not fetch the uncompression dictionary when the block is not a data block.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10310
Test Plan:
1. `make check`
2. Perf benchmark: 1.5% (143950 -> 146176) improvement in op/sec for partitioned index + dict compression benchmark.
For default config without partitioned index and without dict compression, there is no regression in readrandom perf from multiple runs of db_bench.
```
# Set up for partitioned index with dictionary compression
TEST_TMPDIR=/dev/shm ./db_bench_main -benchmarks=filluniquerandom,compact -max_background_jobs=24 -memtablerep=vector -allow_concurrent_memtable_write=false -partition_index=true -compression_max_dict_bytes=16384 -compression_zstd_max_train_bytes=1638400
# Pre PR
TEST_TMPDIR=/dev/shm ./db_bench_main -use_existing_db=true -benchmarks=readrandom[-X50] -partition_index=true
readrandom [AVG 50 runs] : 143950 (± 1108) ops/sec; 15.9 (± 0.1) MB/sec
readrandom [MEDIAN 50 runs] : 144406 ops/sec; 16.0 MB/sec
# Post PR
TEST_TMPDIR=/dev/shm ./db_bench_opt -use_existing_db=true -benchmarks=readrandom[-X50] -partition_index=true
readrandom [AVG 50 runs] : 146176 (± 1121) ops/sec; 16.2 (± 0.1) MB/sec
readrandom [MEDIAN 50 runs] : 146014 ops/sec; 16.2 MB/sec
# Set up for no partitioned index and no dictionary compression
TEST_TMPDIR=/dev/shm/baseline ./db_bench_main -benchmarks=filluniquerandom,compact -max_background_jobs=24 -memtablerep=vector -allow_concurrent_memtable_write=false
# Pre PR
TEST_TMPDIR=/dev/shm/baseline/ ./db_bench_main --use_existing_db=true "--benchmarks=readrandom[-X50]"
readrandom [AVG 50 runs] : 158546 (± 1000) ops/sec; 17.5 (± 0.1) MB/sec
readrandom [MEDIAN 50 runs] : 158280 ops/sec; 17.5 MB/sec
# Post PR
TEST_TMPDIR=/dev/shm/baseline/ ./db_bench_opt --use_existing_db=true "--benchmarks=readrandom[-X50]"
readrandom [AVG 50 runs] : 161061 (± 1520) ops/sec; 17.8 (± 0.2) MB/sec
readrandom [MEDIAN 50 runs] : 161596 ops/sec; 17.9 MB/sec
```
Reviewed By: ajkr
Differential Revision: D37631358
Pulled By: cbi42
fbshipit-source-id: 6ca2665e270e63871968e061ba4a99d3136785d9
Summary:
Before this PR, we call `now()` to get the wall time before performing point-lookup and range
scans when user-defined timestamp is enabled.
With this PR, we expand the coverage to:
- read with an older timestamp which is larger than the wall time when the process starts but potentially smaller than now()
- add coverage for `ReadOptions::iter_start_ts != nullptr`
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10280
Test Plan:
```bash
make check
```
Also,
```bash
TEST_TMPDIR=/dev/shm/rocksdb make crash_test_with_ts
```
So far, we have had four successful runs of the above.
In addition,
```bash
TEST_TMPDIR=/dev/shm/rocksdb make crash_test
```
Succeeded twice, showing no regression.
Reviewed By: ltamasi
Differential Revision: D37539805
Pulled By: riversand963
fbshipit-source-id: f2d9887ad95245945ce17a014d55bb93f00e1cb5
Summary:
Previously the version was displayed as $major.$minor.
This changes it to $major.$minor.$patch.
This also adds the git hash of the commit from which RocksDB was built to the end of report.tsv. I confirmed that benchmark_log_tool.py still parses it and that the people
who consume/graph these results are OK with it.
Example output:
```
ops_sec mb_sec lsm_sz blob_sz c_wgb w_amp c_mbps c_wsecs c_csecs b_rgb b_wgb usec_op p50 p99 p99.9 p99.99 pmax uptime stall% Nstall u_cpu s_cpu rss test date version job_id githash
609488 244.1 1GB 0.0GB, 1.4 0.7 93.3 39 38 0 0 1.6 1.0 4 15 26 5365 15 0.0 0 0.1 0.0 0.5 fillseq.wal_disabled.v400 2022-06-29T13:36:05 7.5.0 6115254416
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10277
Test Plan: Run it
Reviewed By: jay-zhuang
Differential Revision: D37532418
Pulled By: mdcallag
fbshipit-source-id: 55e472640d51265819b228d3373c9fa9b62b660d
Summary:
In leveled compaction, try to trivially move more than one file if possible, up to 4 files or max_compaction_bytes. This allows higher write throughput for some use cases where data is loaded in sequential order, where applying compaction results is the bottleneck.
When picking a file to compact, if it doesn't have overlapping files in the next level, try to expand to the next file if there is still no overlap.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10190
Test Plan:
Add some unit tests.
For performance, try running
./db_bench_multi_move --benchmarks=fillseq --compression_type=lz4 --write_buffer_size=5000000 --num=100000000 --value_size=1000 -level_compaction_dynamic_level_bytes
Together with https://github.com/facebook/rocksdb/pull/10188 , stalling will be eliminated in this benchmark.
Reviewed By: jay-zhuang
Differential Revision: D37230647
fbshipit-source-id: 42b260f545c46abc5d90335ac2bbfcd09602b549
Summary:
Before this PR, it is required that application open RocksDB secondary
instance with `max_open_files = -1`. This is a hacky workaround that
prevents IOErrors on the secondary instance during point-lookup or range
scan caused by primary instance deleting the table files. This is not
necessary if the application can coordinate the primary and secondaries
so that primary does not delete files that are still being used by the
secondaries. Or users can provide a custom Env/FS implementation that
deletes the files only after all primary and secondary instances
indicate files are obsolete and deleted.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10260
Test Plan: make check
Reviewed By: jay-zhuang
Differential Revision: D37462633
Pulled By: riversand963
fbshipit-source-id: 9c2fc939f49663efa61e3d60c8f1e01d64b9d72c
Summary:
In some cases, GetFileSize may fail in copy_file_cb. If it fails, we can
return immediately, since the subsequent code is meaningless, and add a
log entry to let the user know that a problem happened here.
Signed-off-by: Yite Gu <ess_gyt@qq.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10176
Reviewed By: cbi42
Differential Revision: D37510888
Pulled By: ajkr
fbshipit-source-id: 044ad8c45852fd19b8cd564b11f65d40c39e296f
Summary:
We fix two bugs in CalcHashBits. The first one is an off-by-one error: the desired number of table slots is the real number ``capacity / (kLoadFactor * handle_charge)``, which should not be rounded down. The second one is that we should disallow inputs that set the element charge to 0, namely ``estimated_value_size == 0 && metadata_charge_policy == kDontChargeCacheMetadata``.
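A sketch of the corrected computation: round the desired slot count up, never down, before sizing the table. Names are illustrative, not the actual ClockCache code, and per the second fix above, callers must reject a zero element charge:
```cpp
#include <cmath>
#include <cstddef>

int CalcHashBitsSketch(size_t capacity, size_t handle_charge,
                       double load_factor) {
  // Desired slot count may be fractional; take the ceiling, not the floor.
  const double num_slots = static_cast<double>(capacity) /
                           (load_factor * static_cast<double>(handle_charge));
  const size_t wanted = static_cast<size_t>(std::ceil(num_slots));
  int hash_bits = 0;
  while ((size_t{1} << hash_bits) < wanted) {
    ++hash_bits;
  }
  return hash_bits;
}
```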
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10295
Test Plan: CalcHashBits is tested by CalcHashBitsTest (in lru_cache_test.cc). The test now iterates over many more inputs; it covers, in particular, the rounding error edge case. Overall, the test is now more robust. Run ``make -j24 check``.
Reviewed By: pdillinger
Differential Revision: D37573797
Pulled By: guidotag
fbshipit-source-id: ea4f4439f7196ab1c1afb88f566fe92850537262
Summary:
If dbname and db_log_dir are on different file systems (one
local and one remote), creation of dbname will fail because that path
doesn't exist with respect to db_log_dir.
This patch ignores the error returned on creation of dbname. If they
are on the same file system, db_log_dir creation will automatically surface
any error in the creation of dbname.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10292
Test Plan: Existing unit tests
Reviewed By: riversand963
Differential Revision: D37567773
Pulled By: akankshamahajan15
fbshipit-source-id: 005d28c536208d4c126c8cb8e196d1d85b881100
Summary:
In leveled compaction, an L0->L1 trivial move will allow more than one file to be moved in one compaction. This allows L0 files to be moved down faster when data is loaded in sequential order, making the slowdown or stop condition harder to hit. Also, seek an L0->L1 trivial move when only some files qualify:
1. We always try to find an L0->L1 trivial move starting from the oldest files. Keep including newer files until adding a new file would no longer constitute a trivial move.
2. Modify the trivial move condition so that this compaction is tagged as a trivial move.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10188
Test Plan:
See throughput improvements with db_bench with fast fillseq benchmark and small L0 files:
./db_bench_l0_move --benchmarks=fillseq --compression_type=lz4 --write_buffer_size=5000000 --num=100000000 --value_size=1000 -level_compaction_dynamic_level_bytes
The throughput improved by about 50%. Stalling still happens though.
Reviewed By: jay-zhuang
Differential Revision: D37224743
fbshipit-source-id: 8958d97f22e12bdfc14d2e85930f6fa0070e9659
Summary:
In round-robin compaction priority, when splitting the compaction into sub-compactions, the earlier implementation takes the compact cursor into account to make full use of available sub-compactions. But this may result in unbalanced sub-compactions, so we remove that logic here. The removal does not affect the cursor-based splitting mechanism within a sub-compaction, so the output files are still ensured to be split according to the cursor.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10289
Reviewed By: ajkr
Differential Revision: D37559091
Pulled By: littlepig2013
fbshipit-source-id: b8b45b99f63b09cf873f7f049bcb4ab13871fffc
Summary:
This adds --undefok to support use of this script with BlobDB for db_bench versions prior
to 7.5, when the options land in a release.
While there is a limit to how far back this script can go WRT backwards compatibility,
this is an easy change to support early 7.x releases.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10276
Test Plan: Run it with versions of db_bench that do not and then do support these options
Reviewed By: gangliao
Differential Revision: D37529299
Pulled By: mdcallag
fbshipit-source-id: 7bb1feec5c68760e6d64792c585bfbde4f5e52d8
Summary:
The current level targets for dynamic leveling have a problem: the target level size will dramatically change after an L0->L1 compaction. When there are many L0 bytes, lower level compactions are delayed, but they will be resumed after the L0->L1 compaction finishes, so the expected write amplification benefits might not be realized. The proposal here is to revert the level target size adjustment, and instead rely on adjusting the score for each level to prioritize the levels that most need to compact.
Basic idea:
(1) Target level sizes aren't adjusted, but scores are. The reasoning is that with parallel compactions, holding compactions back might not be desirable, but we would like compactions to be scheduled from the level we feel needs them most. For example, if we have an extra-large L2, we would like all compactions to be scheduled for L2->L3, rather than L4->L5. This gets complicated when a large L0->L1 compaction is going on: should we compact L2->L3 or L4->L5? So the proposal for that is:
(2) The score is calculated as actual level size / (target size + estimated upper bytes coming down). The reasoning is that if we have a large amount of pending L0/L1 bytes coming down, compacting L2->L3 might be more expensive, as once the L0 bytes are compacted down to L2, the actual L2->L3 fanout would change dramatically. On the other hand, when the same amount of bytes comes down to L5, the impact on the L5->L6 fanout is much smaller. So when calculating the target score, we can adjust it by adding the estimated downward bytes to the target level size.
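The formula in (2) is small enough to state directly; a sketch with illustrative names only, not the actual VersionStorageInfo code:
```cpp
#include <cstdint>

double AdjustedCompactionScore(uint64_t actual_level_bytes,
                               uint64_t target_bytes,
                               uint64_t estimated_bytes_coming_down) {
  // Pending bytes from upper levels inflate the denominator, deferring
  // this level's compaction until the incoming data has settled.
  return static_cast<double>(actual_level_bytes) /
         static_cast<double>(target_bytes + estimated_bytes_coming_down);
}
```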
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10057
Test Plan: Repurpose tests VersionStorageInfoTest.MaxBytesForLevelDynamicWithLargeL0_* tests to cover this scenario.
Reviewed By: ajkr
Differential Revision: D37539742
fbshipit-source-id: 9c154cbfe92023f918cf5d80875d8776ad4831a4
Summary:
- [x] Enabled blob caching for MultiGetBlob in RocksDB
- [x] Refactored MultiGetBlob logic and interface in RocksDB
- [x] Cleaned up Version::MultiGetBlob() and moved 'blob'-related code snippets into BlobSource
- [x] Added end-to-end test cases in db_blob_basic_test and unit tests in blob_source_test
This task is a part of https://github.com/facebook/rocksdb/issues/10156
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10272
Reviewed By: ltamasi
Differential Revision: D37558112
Pulled By: gangliao
fbshipit-source-id: a73a6a94ffdee0024d5b2a39e6d1c1a7d38664db
Summary:
Add load_latest_options() to the C API.
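A hedged sketch of calling the new function; the parameter list below is an assumption based on this PR's description, so consult include/rocksdb/c.h for the authoritative signature:
```cpp
#include <cstddef>

#include "rocksdb/c.h"

void LoadLatestOptionsExample(const char* db_path) {
  rocksdb_env_t* env = rocksdb_create_default_env();
  char* err = nullptr;
  rocksdb_options_t* db_options = nullptr;
  size_t num_cfs = 0;
  char** cf_names = nullptr;
  rocksdb_options_t** cf_options = nullptr;
  rocksdb_load_latest_options(db_path, env, /*ignore_unknown_options=*/false,
                              /*cache=*/nullptr, &db_options, &num_cfs,
                              &cf_names, &cf_options, &err);
  // ... open the DB with the loaded options, then free the options,
  // names, err, and env with the matching rocksdb_* destroy functions ...
}
```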
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10152
Test Plan:
Extend the existing c_test by reopening db using the latest options file
at different parts of the test.
Reviewed By: akankshamahajan15
Differential Revision: D37305225
Pulled By: ajkr
fbshipit-source-id: 8b3bab73f56fa6fcbdba45aae393145d007b3962
Summary:
If the internal iterator is not valid, `SeekToLast` with `iter_start_ts` should set `valid_` to false instead of triggering an assertion failure.
Test plan
make check
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10279
Reviewed By: ltamasi
Differential Revision: D37539393
Pulled By: riversand963
fbshipit-source-id: 8e94057838f8a05144fad5768f4d62f1893ec315