Summary:
**Context:**
Previous PRs https://github.com/facebook/rocksdb/pull/9748, https://github.com/facebook/rocksdb/pull/9073, and https://github.com/facebook/rocksdb/pull/8428 added a separate flag for each charged memory area. Such an API design is not scalable as we charge more and more memory areas. Also, we foresee an opportunity to consolidate this feature with other cache usage related features such as `cache_index_and_filter_blocks` using `CacheEntryRole`.
Therefore we decided to consolidate all these flags into `CacheUsageOptions cache_usage_options`, and this PR serves as the first step by consolidating the memory-charging related flags.
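A minimal sketch of the consolidated API, assuming the post-PR `BlockBasedTableOptions`/`CacheUsageOptions` member names (verify exact spellings against your release's headers):
```cpp
#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/table.h"

using namespace ROCKSDB_NAMESPACE;

BlockBasedTableOptions MakeChargedTableOptions(
    std::shared_ptr<Cache> block_cache) {
  BlockBasedTableOptions table_options;
  table_options.block_cache = block_cache;
  // Opt one memory area out of charging via a per-role override; roles
  // without an override keep their defaults (e.g.
  // kCompressionDictionaryBuildingBuffer is charged unless disabled).
  CacheEntryRoleOptions role_options;
  role_options.charged = CacheEntryRoleOptions::Decision::kDisabled;
  table_options.cache_usage_options.options_overrides.emplace(
      CacheEntryRole::kCompressionDictionaryBuildingBuffer, role_options);
  return table_options;
}
```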
**Summary:**
- Replaced old API references with new ones, including making `kCompressionDictionaryBuildingBuffer` opt-out, and added a unit test for that
- Added missing db_bench/db_stress coverage for some memory charging features
- Renamed related test suites to indicate they fall under the same theme of memory charging
- Refactored a commonly used mocked cache component in memory-charging-related tests to reduce code duplication
- Replaced the phrases "memory tracking" / "cache reservation" (other than CacheReservationManager-related ones) with "memory charging" as the standard description of this feature.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9926
Test Plan:
- New unit test for opt-out `kCompressionDictionaryBuildingBuffer` `TEST_F(ChargeCompressionDictionaryBuildingBufferTest, Basic)`
- New unit test for option validation/sanitization `TEST_F(CacheUsageOptionsOverridesTest, SanitizeAndValidateOptions)`
- CI
- db bench (in case querying new options introduces regression) **+0.5% micros/op**: `TEST_TMPDIR=/dev/shm/testdb ./db_bench -benchmarks=fillseq -db=$TEST_TMPDIR -charge_compression_dictionary_building_buffer=1 -compression_max_dict_bytes=10000 -disable_auto_compactions=1 -write_buffer_size=100000 -num=4000000 | egrep 'fillseq'` (remove `-charge_compression_dictionary_building_buffer=1` for the pre-PR comparison)
#-run | pre-PR avg micros/op | pre-PR std micros/op | post-PR avg micros/op | post-PR std micros/op | change (%)
-- | -- | -- | -- | -- | --
10 | 3.9711 | 0.264408 | 3.9914 | 0.254563 | 0.5111933721
20 | 3.83905 | 0.0664488 | 3.8251 | 0.0695456 | **-0.3633711465**
40 | 3.86625 | 0.136669 | 3.8867 | 0.143765 | **0.5289363078**
- db_stress: `python3 tools/db_crashtest.py blackbox -charge_compression_dictionary_building_buffer=1 -charge_filter_construction=1 -charge_table_reader=1 -cache_size=1` killed as normal
Reviewed By: ajkr
Differential Revision: D36054712
Pulled By: hx235
fbshipit-source-id: d406e90f5e0c5ea4dbcb585a484ad9302d4302af
Summary:
**Context/Summary:**
The logic related to PositionedRead in SequentialFileReader::Read confused me a bit, as discussed in https://github.com/facebook/rocksdb/pull/9973#discussion_r872869256. Therefore I added a drawing with help from cbi42.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10002
Test Plan: - no code change
Reviewed By: anand1976, cbi42
Differential Revision: D36422632
Pulled By: hx235
fbshipit-source-id: 9a8311d2365564f90d216c430f542fc11b2d9cde
Summary:
Changed the static objects that had non-trivial destructors to use the STATIC_AVOID_DESTRUCTION construct.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9958
Reviewed By: pdillinger
Differential Revision: D36442982
Pulled By: mrambacher
fbshipit-source-id: 029d47b1374d30d198bfede369a4c0ae7a4eb519
Summary:
PR https://github.com/facebook/rocksdb/issues/9888 started to enforce the contract of single delete described in https://github.com/facebook/rocksdb/wiki/Single-Delete.
For some existing use cases, it is desirable to have a transition period during which compaction will not fail
if the contract is violated. Therefore, we add a temporary option `enforce_single_del_contracts` to allow
applications to opt out of this new strict behavior. Once the transition completes, the flag can be set to `true` again.
In a future release, the option will be removed.
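A minimal sketch of opting out during the transition (`enforce_single_del_contracts` is the new option; the rest is standard open boilerplate):
```cpp
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

Status OpenDuringTransition(const std::string& path, DB** db) {
  Options options;
  options.create_if_missing = true;
  // Temporary opt-out: keep compactions from failing on single-delete
  // contract violations while the application migrates. Restore the
  // default (true) once the transition is complete.
  options.enforce_single_del_contracts = false;
  return DB::Open(options, path, db);
}
```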
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9983
Test Plan: make check
Reviewed By: ajkr
Differential Revision: D36333672
Pulled By: riversand963
fbshipit-source-id: dcb703ea0ed08076a1422f1bfb9914afe3c2caa2
Summary:
**Context:**
`BackupEngineRateLimitingTestWithParam.RateLimiting` and `BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup` involve creating a backup and restoring a big database with rate limiting. Using the normal env with a normal clock requires real elapsed time (13702 - 19848 ms per test). As suggested in https://github.com/facebook/rocksdb/pull/8722#discussion_r703698603, this PR speeds them up with SpecialEnv (`time_elapse_only_sleep=true`), whose clock accepts fake elapsed time during rate limiting (100 - 600 ms per test).
**Summary:**
- Added a TEST_ function to set the clock of the default rate limiters in the backup engine
- Shrunk the test db by 10x while keeping it big enough for testing
- Renamed some test variables and reorganized some if-else branches for clarity without changing the test logic
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9974
Test Plan:
- Ran the tests pre/post PR to verify they are sped up by 90 - 95%
`BackupEngineRateLimitingTestWithParam.RateLimiting`
Pre:
```
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/0
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/0 (11123 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/1
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/1 (9441 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/2
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/2 (11096 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/3
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/3 (9339 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/4
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/4 (11121 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/5
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/5 (9413 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/6
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/6 (11185 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/7
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/7 (9511 ms)
[----------] 8 tests from RateLimiting/BackupEngineRateLimitingTestWithParam (82230 ms total)
```
Post:
```
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/0
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/0 (395 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/1
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/1 (564 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/2
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/2 (358 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/3
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/3 (567 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/4
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/4 (173 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/5
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/5 (176 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/6
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/6 (191 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/7
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimiting/7 (177 ms)
[----------] 8 tests from RateLimiting/BackupEngineRateLimitingTestWithParam (2601 ms total)
```
`BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup`
Pre:
```
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/0
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/0 (7275 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/1
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/1 (3961 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/2
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/2 (7117 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/3
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/3 (3921 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/4
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/4 (19862 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/5
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/5 (10231 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/6
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/6 (19848 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/7
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/7 (10372 ms)
[----------] 8 tests from RateLimiting/BackupEngineRateLimitingTestWithParam (82587 ms total)
```
Post:
```
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/0
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/0 (157 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/1
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/1 (152 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/2
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/2 (160 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/3
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/3 (158 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/4
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/4 (155 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/5
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/5 (151 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/6
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/6 (146 ms)
[ RUN ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/7
[ OK ] RateLimiting/BackupEngineRateLimitingTestWithParam.RateLimitingVerifyBackup/7 (153 ms)
[----------] 8 tests from RateLimiting/BackupEngineRateLimitingTestWithParam (1232 ms total)
```
Reviewed By: pdillinger
Differential Revision: D36336345
Pulled By: hx235
fbshipit-source-id: 724c6ba745f95f56d4440a6d2f1e4512a2987589
Summary:
These methods allow for more thorough testing of the ObjectRegistry and Customizable infrastructure in a simpler manner. With this change, the Customizable tests can now check what factories are registered and attempt to create each of them in a systematic fashion.
With this change, I think all of the factories registered with the ObjectRegistry/CreateFromString are now tested via the customizable_test classes.
Note that there were a few other minor changes. There was a "posix://*" entry registered with the ObjectRegistry which was missed during the PatternEntry conversion -- these changes found that. The nickname and default names for the FileSystem classes were also inverted.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9358
Reviewed By: pdillinger
Differential Revision: D33433542
Pulled By: mrambacher
fbshipit-source-id: 9a32da74e6620745b4eeffb2712be70eeeadfa7e
Summary:
In one path of BlockBasedTable::MultiGet(), Next() is called directly after calling Seek() on the index iterator. This might cause a crash if an I/O error happens in Seek().
The bug was discovered in the crash test.
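For illustration, a sketch of the safe pattern with the public Iterator API (not the internal index iterator the fix touches): only advance while the iterator is Valid(), and surface any error, such as an I/O error during Seek(), through status().
```cpp
#include <memory>

#include "rocksdb/db.h"
#include "rocksdb/iterator.h"

using namespace ROCKSDB_NAMESPACE;

Status ScanFrom(DB* db, const Slice& start) {
  std::unique_ptr<Iterator> it(db->NewIterator(ReadOptions()));
  // Next() may only be called while Valid() is true; an I/O error during
  // Seek() leaves the iterator invalid and is reported by status().
  for (it->Seek(start); it->Valid(); it->Next()) {
    // ... consume it->key() / it->value() ...
  }
  return it->status();
}
```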
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9993
Test Plan: See existing CI tests pass.
Reviewed By: anand1976
Differential Revision: D36381758
fbshipit-source-id: a11e0aa48dcee168c2554c33b532646ffdb68877
Summary:
This PR
- sets the column family count to 1, since we are testing with disable_wal = true and best-efforts recovery, as required by the `ExpectedState` tracking and replay logic.
- disables best-efforts recovery during backup and checkpoint restore. This does not matter now because db_crashtest.py always disables the WAL when testing best-efforts recovery. In the future, if we enable the WAL, leaving `restore_options.best_efforts_recovery` unset would cause the backup db not to recover the WALs and thus differ from the db (which enables the WAL).
- prints the key where an inconsistency exists between the expected state and the db during verification of backup and checkpoint restore.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9986
Test Plan: TEST_TMPDIR=/dev/shm/rocksdb make crash_test_with_best_efforts_recovery
Reviewed By: siying
Differential Revision: D36353105
Pulled By: riversand963
fbshipit-source-id: a484da161273e6216a1f7e245bac15a349693917
Summary:
Add methods to set the various functions (Parse, Serialize, Equals) on the OptionTypeInfo. These methods reduce the number of constructors required for OptionTypeInfo and make the code a little clearer.
Add functions to the OptionTypeInfo for Prepare and Validate. These methods allow types other than Configurable and Customizable to have Prepare and Validate logic. These methods could be used by an option to guarantee that its settings are in a range or that a value is initialized.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9411
Reviewed By: pdillinger
Differential Revision: D36174849
Pulled By: mrambacher
fbshipit-source-id: 72517d8c6bab4723788a4c1a9e16590bff870125
Summary:
... for better maintainability, in case of Makefile changes /
refactoring. This is lightly modified from rocksdb-lego-determinator, and
will be used by Meta-internal CI with a custom REPORT_BUILD_STATISTIC
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9989
Test Plan: some manual stuff
Reviewed By: jay-zhuang
Differential Revision: D36362362
Pulled By: pdillinger
fbshipit-source-id: 52b65b6282fe839dc6d906ff95a3ed66ca1574ba
Summary:
This change adds a pmem-rocksdb-plugin link in PLUGINS.md. The link is: https://github.com/pmem/pmem-rocksdb-plugin. It provides a collection of plugins to enable Persistent Memory (PMEM) on RocksDB.
The pmem-rocksdb-plugin repo contains RocksDB plugins that fit the LSM-tree based KV store onto PMEM by effectively utilizing its characteristics. The first two basic plugins are:
1) Providing a filesystem API wrapper to write RocksDB's WAL (Write Ahead Log) files on PMEM to optimize write performance. 2) Using PMEM as a secondary cache to optimize read performance.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9934
Reviewed By: jay-zhuang
Differential Revision: D36366893
Pulled By: riversand963
fbshipit-source-id: d58a39365e9b5d6a3249d4e9b377c7fb2c79badb
Summary:
The batched version of MultiGet() is not available in RocksDB's C API.
This PR implements rocksdb_batched_multi_get_cf, a C wrapper function
that invokes the batched version of MultiGet(), which takes a single column family.
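For reference, a sketch of how the underlying batched C++ MultiGet() overload (the one the new C wrapper invokes) is typically used:
```cpp
#include <vector>

#include "rocksdb/db.h"

using namespace ROCKSDB_NAMESPACE;

size_t BatchedLookup(DB* db, ColumnFamilyHandle* cf,
                     const std::vector<Slice>& keys) {
  std::vector<PinnableSlice> values(keys.size());
  std::vector<Status> statuses(keys.size());
  // Batched overload: one column family, parallel arrays of keys,
  // values, and statuses.
  db->MultiGet(ReadOptions(), cf, keys.size(), keys.data(), values.data(),
               statuses.data(), /*sorted_input=*/false);
  size_t found = 0;
  for (size_t i = 0; i < keys.size(); ++i) {
    // values[i] is valid only when statuses[i].ok(); PinnableSlice avoids
    // an extra copy of the value.
    if (statuses[i].ok()) {
      ++found;
    }
  }
  return found;
}
```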
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9952
Test Plan: Added a new test case under the "columnfamilies" test in c_test.cc
Reviewed By: riversand963
Differential Revision: D36302888
Pulled By: ajkr
fbshipit-source-id: fa134c4a1c8e7d72dd4ae8649a74e3797b5cf4e6
Summary:
In the case of a non-TransactionDB with avoid_flush_during_recovery = true, RocksDB won't
flush the data from the WAL to L0 for all column families if possible. As a
result, not all column families can increase their log_numbers, and
min_log_number_to_keep won't change.
For a TransactionDB (allow_2pc), even with the flush, there may be old WAL files that it must not delete because they can contain data of uncommitted transactions, so min_log_number_to_keep won't change.
If we persist a new MANIFEST with
advanced log_numbers for some column families, then during a second
crash after persisting the MANIFEST, RocksDB will see some column
families' log_numbers larger than the corrupted WAL, and the "column family inconsistency" error will be hit, causing recovery to fail.
This PR updates unit tests to emulate the errors; the tests fail without a fix.
Error:
```
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/0
db/corruption_test.cc:1190: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF test_cf
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/0, where GetParam() = (true, false) (91 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/1
db/corruption_test.cc:1190: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF test_cf
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/1, where GetParam() = (false, false) (92 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/2
db/corruption_test.cc:1190: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF test_cf
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/2, where GetParam() = (true, true) (95 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/3
db/corruption_test.cc:1190: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF test_cf
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecovery/3, where GetParam() = (false, true) (92 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/0
db/corruption_test.cc:1354: Failure
TransactionDB::Open(options, txn_db_opts, dbname_, cf_descs, &handles, &txn_db)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/0, where GetParam() = (true, false) (94 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/1
db/corruption_test.cc:1354: Failure
TransactionDB::Open(options, txn_db_opts, dbname_, cf_descs, &handles, &txn_db)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/1, where GetParam() = (false, false) (97 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/2
db/corruption_test.cc:1354: Failure
TransactionDB::Open(options, txn_db_opts, dbname_, cf_descs, &handles, &txn_db)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/2, where GetParam() = (true, true) (94 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/3
db/corruption_test.cc:1354: Failure
TransactionDB::Open(options, txn_db_opts, dbname_, cf_descs, &handles, &txn_db)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.TxnDbCrashDuringRecovery/3, where GetParam() = (false, true) (91 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/0
db/corruption_test.cc:1483: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/0, where GetParam() = (true, false) (93 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/1
db/corruption_test.cc:1483: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/1, where GetParam() = (false, false) (94 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/2
db/corruption_test.cc:1483: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/2, where GetParam() = (true, true) (90 ms)
[ RUN ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/3
db/corruption_test.cc:1483: Failure
DB::Open(options, dbname_, cf_descs, &handles, &db_)
Corruption: SST file is ahead of WALs in CF default
[ FAILED ] CorruptionTest/CrashDuringRecoveryWithCorruptionTest.CrashDuringRecoveryWithFlush/3, where GetParam() = (false, true) (93 ms)
[----------] 12 tests from CorruptionTest/CrashDuringRecoveryWithCorruptionTest (1116 ms total)
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9942
Test Plan: Not needed
Reviewed By: riversand963
Differential Revision: D36324112
Pulled By: akankshamahajan15
fbshipit-source-id: cab2075ac4ebe48f5ef93a6ea162558aa4fc334d
Summary:
Condense down to 8 groups rather than 20+ for ease of browsing
pages like
https://app.circleci.com/pipelines/github/facebook/rocksdb?branch=main&filter=all
Also, run nightly builds at 1AM or 2AM Pacific (depending on daylight
time) rather than 4PM or 5PM Pacific, so that they actually use each
day's landed changes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9981
Test Plan:
CI
And manually inspected
```
grep -Eo 'build-[^: ]*' .circleci/config.yml | sort | uniq -c | less
```
to ensure I didn't orphan anything
Reviewed By: jay-zhuang
Differential Revision: D36317634
Pulled By: pdillinger
fbshipit-source-id: 1c10d29d6b5d60ce3dd1364cd91f175380075ff3
Summary:
Example failure when compiling on sufficiently new hardware and built-in headers:
```
In file included from /usr/local/lib/gcc/x86_64-linux-gnu/12.1.0/include/immintrin.h:49,
from ./util/bloom_impl.h:21,
from table/block_based/filter_policy.cc:31,
from unity.cc:167:
In function '__m512i _mm512_shuffle_epi32(__m512i, _MM_PERM_ENUM)',
inlined from 'void XXH3_accumulate_512_avx512(void*, const void*, const void*)' at util/xxhash.h:3605:58,
inlined from 'void XXH3_accumulate(xxh_u64*, const xxh_u8*, const xxh_u8*, size_t, XXH3_f_accumulate_512)' at util/xxhash.h:4229:17,
inlined from 'void XXH3_hashLong_internal_loop(xxh_u64*, const xxh_u8*, size_t, const xxh_u8*, size_t, XXH3_f_accumulate_512, XXH3_f_scrambleAcc)' at util/xxhash.h:4251:24,
inlined from 'XXH128_hash_t XXH3_hashLong_128b_internal(const void*, size_t, const xxh_u8*, size_t, XXH3_f_accumulate_512, XXH3_f_scrambleAcc)' at util/xxhash.h:5065:32,
inlined from 'XXH128_hash_t XXH3_hashLong_128b_withSecret(const void*, size_t, XXH64_hash_t, const void*, size_t)' at util/xxhash.h:5104:39:
/usr/local/lib/gcc/x86_64-linux-gnu/12.1.0/include/avx512fintrin.h:4459:50: error: '__Y' may be used uninitialized [-Werror=maybe-uninitialized]
```
https://app.circleci.com/pipelines/github/facebook/rocksdb/13295/workflows/1695fb5c-40c1-423b-96b4-45107dc3012d/jobs/360416
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9978
Test Plan:
I was able to re-run in CircleCI with ssh, see the failure, ssh in and
verify that adding -fno-avx512f fixed the failure. Will watch build-linux-unity-and-headers
Reviewed By: riversand963
Differential Revision: D36296028
Pulled By: pdillinger
fbshipit-source-id: ba5955cf2ac730f57d1d18c2f517e92f34be77a3
Summary:
CircleCI was using a soft open file limit of 1024 which would
frequently be exceeded during test runs. Now using
```
ulimit -S -n `ulimit -H -n`
```
to set the soft limit up to the hard limit (524288 in my test). I've also
applied this same idiom to existing applicable MacOS configurations to
reduce hard-coded numbers.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9972
Test Plan: CI
Reviewed By: riversand963
Differential Revision: D36262943
Pulled By: pdillinger
fbshipit-source-id: 86320cdf9b68a97fdb73531a7b4a59b4c2d2f73f
Summary:
The new microbenchmarks, DBGetMergeOperandsInMemtable and DBGetMergeOperandsInSstFile, correspond to the two different LSMs tested: all data in one memtable and all data in one SST file, respectively. Both cases are parameterized by thread count (1 or 8) and merge operands per key (1, 32, or 1024). The SST file case is additionally parameterized by whether data is in block cache or mmap'd memory.
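A sketch of the API these benchmarks exercise, assuming the public `DB::GetMergeOperands()` signature and `GetMergeOperandsOptions::expected_max_number_of_operands`:
```cpp
#include <vector>

#include "rocksdb/db.h"

using namespace ROCKSDB_NAMESPACE;

// Fetch a key's merge operands without merging them.
Status ReadOperands(DB* db, const Slice& key, int expected_max = 32) {
  GetMergeOperandsOptions opts;
  opts.expected_max_number_of_operands = expected_max;
  std::vector<PinnableSlice> operands(expected_max);
  int num_operands = 0;
  Status s = db->GetMergeOperands(ReadOptions(), db->DefaultColumnFamily(),
                                  key, operands.data(), &opts, &num_operands);
  // On success, operands[0..num_operands) hold the unmerged values.
  return s;
}
```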
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9971
Test Plan:
```
$ TEST_TMPDIR=/dev/shm/db_basic_bench/ ./db_basic_bench --benchmark_filter=DBGetMergeOperands
The number of inputs is very large. DBGet will be repeated at least 192 times.
The number of inputs is very large. DBGet will be repeated at least 192 times.
2022-05-09T13:15:40-07:00
Running ./db_basic_bench
Run on (36 X 2570.91 MHz CPU s)
CPU Caches:
L1 Data 32 KiB (x18)
L1 Instruction 32 KiB (x18)
L2 Unified 1024 KiB (x18)
L3 Unified 25344 KiB (x1)
Load Average: 4.50, 4.33, 4.37
----------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
----------------------------------------------------------------------------------------------------------------------------
DBGetMergeOperandsInMemtable/entries_per_key:1/threads:1 846 ns 846 ns 849893 db_size=0
DBGetMergeOperandsInMemtable/entries_per_key:32/threads:1 2436 ns 2436 ns 305779 db_size=0
DBGetMergeOperandsInMemtable/entries_per_key:1024/threads:1 77226 ns 77224 ns 8152 db_size=0
DBGetMergeOperandsInMemtable/entries_per_key:1/threads:8 116 ns 929 ns 779368 db_size=0
DBGetMergeOperandsInMemtable/entries_per_key:32/threads:8 330 ns 2644 ns 280824 db_size=0
DBGetMergeOperandsInMemtable/entries_per_key:1024/threads:8 12466 ns 99718 ns 7200 db_size=0
DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:0/threads:1 1640 ns 1640 ns 461262 db_size=21.7826M
DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:1/threads:1 1693 ns 1693 ns 439936 db_size=21.7826M
DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:0/threads:1 3999 ns 3999 ns 172881 db_size=19.6981M
DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:1/threads:1 5544 ns 5543 ns 135657 db_size=19.6981M
DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:0/threads:1 78767 ns 78761 ns 8395 db_size=19.6389M
DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:1/threads:1 157242 ns 157238 ns 4495 db_size=19.6389M
DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:0/threads:8 231 ns 1848 ns 347768 db_size=21.7826M
DBGetMergeOperandsInSstFile/entries_per_key:1/mmap:1/threads:8 214 ns 1715 ns 393312 db_size=21.7826M
DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:0/threads:8 596 ns 4767 ns 142088 db_size=19.6981M
DBGetMergeOperandsInSstFile/entries_per_key:32/mmap:1/threads:8 720 ns 5757 ns 118200 db_size=19.6981M
DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:0/threads:8 11613 ns 92460 ns 7344 db_size=19.6389M
DBGetMergeOperandsInSstFile/entries_per_key:1024/mmap:1/threads:8 19989 ns 159908 ns 4440 db_size=19.6389M
```
Reviewed By: jay-zhuang
Differential Revision: D36258861
Pulled By: ajkr
fbshipit-source-id: 04b733e1cc3a4a70ed9baa894c50fdf96c0d6064
Summary:
https://github.com/facebook/rocksdb/issues/9961 broke format_compatible check because of `make clean`
referencing TEST_TMPDIR. The Makefile behavior seems reasonable to me,
so here's a fix in check_format_compatible.sh
Apparently I also included removing a redundant part of our CircleCI config.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9970
Test Plan: manual run: SHORT_TEST=1 ./tools/check_format_compatible.sh
Reviewed By: riversand963
Differential Revision: D36258172
Pulled By: pdillinger
fbshipit-source-id: d46507f04614e888b414ff23b88d040ae2b5c294
Summary:
Removing unnecessary checks around conversion from int/long to double as it does not lose information (see https://docs.oracle.com/javase/specs/jls/se9/html/jls-5.html#jls-5.1.2).
For example, `value > Double.MAX_VALUE` is always false when value is long or int.
Can you please have a look adamretter? Also fixed some other minor issues (do you prefer a separate PR?)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9194
Reviewed By: ajkr
Differential Revision: D36221694
fbshipit-source-id: bf327c07386560b87ddc0c98039e8d6e8f2f1e82
Summary:
- For the entry charge, we should only count the value size instead of including the key size in LRUCache
- Using the string's capacity shows the memory usage more precisely
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9337
Reviewed By: ajkr
Differential Revision: D36219855
fbshipit-source-id: 393e48ca419d230dc552ae62dd0eb1cc9f45961d
Summary:
Improve memkind library detection in build_detect_platform:
- The current position of -lmemkind does not work with all versions of gcc
- LDFLAGS allows specifying non-standard library path through EXTRA_LDFLAGS
After the change, the options match TBB detection.
This is a follow-up to https://github.com/facebook/rocksdb/issues/6214.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9134
Reviewed By: ajkr, mrambacher
Differential Revision: D32192028
fbshipit-source-id: 115fafe8d93f1fe6aaf80afb32b2cb67aad074c7
Summary:
Just fixing a very minor typo in the doc block :) Hope it will help anyway 😊
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9331
Reviewed By: riversand963
Differential Revision: D34339823
fbshipit-source-id: b76104bc3efbc9d1f38cbf5c6dd7648dc909ced3
Summary:
Having all of TMPD, TMPDIR and TEST_TMPDIR as configuration
parameters is confusing. This change simplifies a number of things by
standardizing on TEST_TMPDIR, while still recognizing the old names
also. In detail:
* crash_test.mk also needs to use TEST_TMPDIR for crash test, so put in
shared common.mk (an upgrade of python.mk)
* Always exporting TEST_TMPDIR eliminates the need to propagate TMPD or
export TEST_TMPDIR in selective places.
* Use --tmpdir option to gnu_parallel so that it doesn't need TMPDIR
environment variable
* Remove obsolete parloop and parallel_check Makefile targets
* Remove undefined, unused function ResetTmpDirForDirectIO()
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9961
Test Plan: manual + CI
Reviewed By: riversand963
Differential Revision: D36212178
Pulled By: pdillinger
fbshipit-source-id: b76c1876c4f4d38b37789c2779eaa7c3026824dd
Summary:
ToString() was created because some platforms didn't support std::to_string(). However, we've already been using std::to_string() by mistake for 16 months (in db/db_info_dumper.cc). This commit just removes ToString().
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9955
Test Plan: Watch CI tests
Reviewed By: riversand963
Differential Revision: D36176799
fbshipit-source-id: bdb6dcd0e3a3ab96a1ac810f5d0188f684064471
Summary:
Previously all fault injection was ignored in release mode. This PR adds it back except for read fault injection (`--read_fault_one_in > 0`) since its dependency (`IGNORE_STATUS_IF_ERROR`) is unavailable in release mode.
Other notable changes include:
- Moved `EnableWriteErrorInjection()` for `--write_fault_one_in > 0` so it's after `DB::Open()` without depending on `SyncPoint`
- Made `--read_fault_one_in > 0` return an error in release mode
- Updated `db_crashtest.py` to always set `--read_fault_one_in=0` in release mode
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9957
Test Plan:
```
$ DEBUG_LEVEL=0 make -j24 db_stress
$ DEBUG_LEVEL=0 TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox
```
Reviewed By: anand1976
Differential Revision: D36193830
Pulled By: ajkr
fbshipit-source-id: 0b97946b4e3f06e3e0f6e7833c2763da08ec5321
Summary:
dont -> don't
refered -> referred
This is a re-run of PR#7785 and acc9679 since these typos keep coming back.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9653
Reviewed By: jay-zhuang
Differential Revision: D34879593
fbshipit-source-id: d7631fb779ea0129beae92abfb838038e60790f8
Summary:
`db_stress` already tracks expected state history to verify prefix-recoverability when `sync_fault_injection` is enabled. This PR enables `sync_fault_injection` in `db_crashtest.py`.
Previously enabling `sync_fault_injection` would cause whole unsynced files to be dropped. This PR adds a more interesting case of losing only the tail of unsynced data by implementing `TestFSWritableFile::RangeSync()` and enabling `{wal_,}bytes_per_sync`.
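For reference, a minimal sketch of the options that make this partial-loss case reachable; the byte thresholds here are illustrative, not the values used by db_crashtest.py:
```cpp
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

Options MakeRangeSyncOptions() {
  Options options;
  // Non-zero values make RocksDB issue background range syncs as data is
  // written, so a crash can lose only the unsynced tail of a file rather
  // than the whole unsynced file.
  options.bytes_per_sync = 1 << 20;      // SST/data files
  options.wal_bytes_per_sync = 1 << 17;  // WAL files
  return options;
}
```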
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9947
Test Plan:
- regular blackbox, blackbox --simple
- various commands to stress this new case, such as `TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox --max_key=100000 --write_buffer_size=2097152 --avoid_flush_during_recovery=1 --disable_wal=0 --interval=10 --db_write_buffer_size=0 --sync_fault_injection=1 --wal_compression=none --delpercent=0 --delrangepercent=0 --prefixpercent=0 --iterpercent=0 --writepercent=100 --readpercent=0 --wal_bytes_per_sync=131072 --duration=36000 --sync=0 --open_write_fault_one_in=16`
Reviewed By: riversand963
Differential Revision: D36152775
Pulled By: ajkr
fbshipit-source-id: 44b68a7fad0a4cf74af9fe1f39be01baab8141d8
Summary:
We still don't fully use std::numeric_limits but instead use a macro, mainly to support VS 2013. Since we now only support VS 2017 and up, that is not a problem. The code comment claims that MinGW still needs it; we don't have a CI running MinGW, so it's hard to validate. Since we now require C++17, it's hard to imagine MinGW would still build RocksDB but not support std::numeric_limits<>.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9954
Test Plan: See CI Runs.
Reviewed By: riversand963
Differential Revision: D36173954
fbshipit-source-id: a35a73af17cdcae20e258cdef57fcf29a50b49e0
Summary:
PR 9929 adds a new CompactionFilter::Decision, i.e.
kRemoveWithSingleDelete, so that a CompactionFilter can indicate to
the CompactionIterator that a PUT can only be removed with an SD. However, how
the CompactionIterator handles such a key is an implementation detail which
should not be implied in the public API. In fact,
such a PUT can just be dropped. This is an optimization which we will apply in the near future.
Discussion thread: https://github.com/facebook/rocksdb/pull/9929#discussion_r863198964
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9951
Test Plan: make check
Reviewed By: ajkr
Differential Revision: D36156590
Pulled By: riversand963
fbshipit-source-id: 7b7d01f47bba4cad7d9cca6ca52984f27f88b372
Summary:
Right now in DumpDBFileSummary, IO errors aren't printed out, but they are sometimes helpful. Print them out instead.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9940
Test Plan: Watch existing tests pass.
Reviewed By: riversand963
Differential Revision: D36113016
fbshipit-source-id: 13002080fa4dc76589e2c1c5a1079df8a3c9391c
Summary:
This is inspired by debugging a regression test that runs for ~0.05 seconds; the short
running time makes it prone to variance. While db_bench ran for ~60 seconds, 59.95 seconds
were spent opening 128 databases (and doing recovery), so it was harder to notice that the
benchmark only ran for 0.05 seconds.
Normally I add output to the end of the line to make life easier for existing tools that parse it
but in this case the output near the end of the line has two optional parts and one of the optional
parts adds an extra newline.
This is for https://github.com/facebook/rocksdb/issues/9856
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9886
Test Plan:
./db_bench --benchmarks=overwrite,readrandom --num=1000000 --threads=4
old output:
DB path: [/tmp/rocksdbtest-2260/dbbench]
overwrite : 14.108 micros/op 283338 ops/sec; 31.3 MB/s
DB path: [/tmp/rocksdbtest-2260/dbbench]
readrandom : 7.994 micros/op 496788 ops/sec; 55.0 MB/s (1000000 of 1000000 found)
new output:
DB path: [/tmp/rocksdbtest-2260/dbbench]
overwrite : 14.117 micros/op 282862 ops/sec 14.141 seconds 4000000 operations; 31.3 MB/s
DB path: [/tmp/rocksdbtest-2260/dbbench]
readrandom : 8.649 micros/op 458475 ops/sec 8.725 seconds 4000000 operations; 49.8 MB/s (981548 of 1000000 found)
Reviewed By: ajkr
Differential Revision: D36102269
Pulled By: mdcallag
fbshipit-source-id: 5cd8a9e11f5cbe2a46809571afd83335b6b0caa0
Summary:
When a memtable is flushed and the flush would lead to a 0 byte .sst
file being created, RocksDB does not write out the empty .sst file to
disk.
However it still calls Env::DeleteFile() on the file as part of some
cleanup procedure at the end of BuildTable().
Because the to-be-deleted file does not exist, this requires
implementors of the DeleteFile() API to check whether the file exists in
their own code, or otherwise risk running into PathNotFound errors when
DeleteFile is invoked on non-existent files.
This PR fixes the situation so that when no .sst file is created,
DeleteFile will not be called either.
TableFileCreationStarted() will still be called as before.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9920
Reviewed By: ajkr
Differential Revision: D36107102
Pulled By: riversand963
fbshipit-source-id: 15881ba3fa3192dd448f906280a1cfc7a68a114a
Summary:
If the DB path is specified, the user would expect ldb to load the
options from the path, but it does not:
```
$ ldb list_live_files_metadata --db=`pwd`
```
Default `try_load_options` to true in that case. The user can still
disable that by:
```
$ ldb list_live_files_metadata --db=`pwd` --try_load_options=false
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9937
Test Plan:
`ldb list_live_files_metadata --db=`pwd`` is able to work for
a db generated with different options.num_levels.
Reviewed By: ajkr
Differential Revision: D36106708
Pulled By: jay-zhuang
fbshipit-source-id: 2732fdc027a4d172436b2c9b6a9787b56b10c710
Summary:
This PR solves the problem discussed in https://github.com/facebook/rocksdb/issues/7149. By storing a pointer to the InternalKeyComparator as icmp_ in BlockIter, the object size remains the same, and each call to CompareCurrentKey no longer needs to create Comparator objects. One can use icmp_ directly or use the "user_comparator" from icmp_.
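An illustrative sketch of the design choice; the class and member names below are hypothetical stand-ins, not the actual BlockIter code, and the include assumes the internal `db/dbformat.h` header for InternalKeyComparator:
```cpp
#include "db/dbformat.h"  // internal header for InternalKeyComparator

using namespace ROCKSDB_NAMESPACE;

// Keep one pointer to the shared InternalKeyComparator instead of
// constructing comparator objects on every comparison.
class BlockIterSketch {
 public:
  explicit BlockIterSketch(const InternalKeyComparator* icmp) : icmp_(icmp) {}

  int CompareCurrentKey(const Slice& other) const {
    // Compare full internal keys via icmp_, or drop to
    // icmp_->user_comparator() when only user keys matter.
    return icmp_->Compare(current_key_, other);
  }

 private:
  const InternalKeyComparator* icmp_;  // shared; no per-call construction
  Slice current_key_;
};
```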
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9611
Test Plan:
with https://github.com/facebook/rocksdb/issues/9903,
```
$ TEST_TMPDIR=/dev/shm python3.6 ../benchmark/tools/compare.py benchmarks ./db_basic_bench ../rocksdb-pr9611/db_basic_bench --benchmark_filter=DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1 --benchmark_repetitions=50
...
Comparing ./db_basic_bench to ../rocksdb-pr9611/db_basic_bench
Benchmark Time CPU Time Old Time New CPU Old CPU New
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
...
DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_pvalue 0.0001 0.0001 U Test, Repetitions: 50 vs 50
DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_mean -0.0483 -0.0483 3924 3734 3924 3734
DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_median -0.0713 -0.0713 3971 3687 3970 3687
DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_stddev -0.0342 -0.0344 225 217 225 217
DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:0/negative_query:0/enable_filter:0/mmap:1/iterations:262144/threads:1_cv +0.0148 +0.0146 0 0 0 0
OVERALL_GEOMEAN -0.0483 -0.0483 0 0 0 0
```
Reviewed By: akankshamahajan15
Differential Revision: D35882037
Pulled By: ajkr
fbshipit-source-id: 9e5337bbad8f1239dff7aa9f6549020d599bfcdf
Summary:
It confused users: when used with options.allow_mmap_reads = true, CPU usage is high due to checksum verification. Add a comment to explain it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9936
Reviewed By: anand1976
Differential Revision: D36106529
fbshipit-source-id: 3d723bd686f96a84c694c8b2d91ad28d9ccfd979
Summary:
Use `unique_ptr<DB>` to make sure the DB object is deleted. Previously it was not, which led to accumulating file descriptors for deleted directories because a `DBImpl::db_dir_` from each test remained alive.
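A minimal sketch of the ownership pattern (the helper name is ours, not part of the benchmark code):
```cpp
#include <memory>
#include <string>

#include "rocksdb/db.h"

using namespace ROCKSDB_NAMESPACE;

// Own the DB* returned by DB::Open() so the object (and the file
// descriptor it holds for the DB directory) is released when the owner
// goes out of scope.
std::unique_ptr<DB> OpenOwned(const Options& options,
                              const std::string& path) {
  DB* raw = nullptr;
  Status s = DB::Open(options, path, &raw);
  if (!s.ok()) {
    return nullptr;
  }
  return std::unique_ptr<DB>(raw);
}
```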
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9939
Test Plan: run `lsof -p $(pidof db_basic_bench)` while benchmark runs; verify no FDs for deleted directories.
Reviewed By: jay-zhuang
Differential Revision: D36108761
Pulled By: ajkr
fbshipit-source-id: cfe02646b038a445af7d5db8989eb1f40d658359
Summary:
To support a project to prototype and evaluate algorithmic
enhancements and alternatives to LRUCache, here I have separated out
LRUCache into internal-only "FastLRUCache" and cut it down to
essentials, so that details like secondary cache handling and
priorities do not interfere with prototyping. These can be
re-integrated later as needed, along with refactoring to minimize code
duplication (which would slow down prototyping for now).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9917
Test Plan:
unit tests updated to ensure basic functionality has (likely)
been preserved
Reviewed By: anand1976
Differential Revision: D35995554
Pulled By: pdillinger
fbshipit-source-id: d67b20b7ada3b5d3bfe56d897a73885894a1d9db