11980 Commits (846db9d7b1d03cdce467a75e534e3e5f9aea2425)
Author | SHA1 | Message | Date |
---|---|---|---|
Changyu Bi | 8827cd0618 |
Support compacting files to different temperatures in FIFO compaction (#11428)
Summary: - Add a new option `CompactionOptionsFIFO::file_temperature_age_thresholds` that allows users to specify age thresholds for compacting files to different temperatures. File temperature can be used to store files on different storage media. The new option allows specifying multiple temperature-age pairs. The option uses a struct for each temperature-age pair so that the existing parsing functionality can make the option dynamically settable. - Deprecate the old option `age_for_warm` that was added for a similar purpose. - Compaction score calculation logic is updated to check if a file needs to be compacted to change its temperature. - Some refactoring is done in `FIFOCompactionPicker::PickTemperatureChangeCompaction`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11428 Test Plan: adapted the unit tests that were for `age_for_warm` to this new option. Reviewed By: ajkr Differential Revision: D45611412 Pulled By: cbi42 fbshipit-source-id: 2dc384841f61cc04abb9681e31aa2de0f0b06106 |
2 years ago |
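A minimal sketch of configuring the new option, assuming a temperature-age pair is a struct of a `Temperature` and an age in seconds as the summary describes (the brace-initialized layout below is an assumption):

```cpp
#include "rocksdb/advanced_options.h"
#include "rocksdb/options.h"

// Sketch: under FIFO compaction, move files older than ~2 days to warm
// storage and files older than ~30 days to cold storage.
rocksdb::ColumnFamilyOptions MakeFifoOptions() {
  rocksdb::ColumnFamilyOptions cf_opts;
  cf_opts.compaction_style = rocksdb::kCompactionStyleFIFO;
  cf_opts.compaction_options_fifo.file_temperature_age_thresholds = {
      {rocksdb::Temperature::kWarm, 2 * 24 * 60 * 60},    // age in seconds
      {rocksdb::Temperature::kCold, 30 * 24 * 60 * 60}};  // assumed layout
  return cf_opts;
}
```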
Jay Huh | 7531cbda91 |
Clean up rate limiter refill logic (#11425)
Summary: Context: This pull request update is in response to a comment made on https://github.com/facebook/rocksdb/pull/8596#discussion_r680264932. The current implementation of RefillBytesAndGrantRequestsLocked() drains all available_bytes, but the first request after the last wave of byte granting is not handled in the same way. This creates a scenario where if a request for a large amount of bytes is enqueued first, but there are not enough available_bytes to fulfill it, the request is put to sleep until the next refill time. Meanwhile, a later request for a smaller number of bytes comes in and is granted immediately. This behavior is not fair as the earlier request was made first. To address this issue, we changed the code to grant the first request the remaining available bytes and keep it queued for the remainder. With this change, requests are granted in the order they are received, ensuring that earlier requests are not unfairly delayed by later, smaller requests. The specific scenario described above will no longer occur with this change. Also consolidated `granted` and `request_bytes` as part of the change, since `granted` is equivalent to `request_bytes == 0`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11425 Test Plan: Added `AvailableByteSizeExhaustTest` Reviewed By: hx235 Differential Revision: D45570711 Pulled By: jaykorean fbshipit-source-id: a7117ed17bf4b8a7ae0f76124cb41902db1a2592 |
2 years ago |
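Not the actual `RateLimiter` code, but a minimal sketch of the fairness rule this change adopts: the request at the head of the queue consumes whatever bytes are available and keeps its place for the remainder, so smaller, later requests cannot jump ahead:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>

// Hypothetical simplified request queue; the real implementation in
// util/rate_limiter.cc also tracks priorities, timing, and wakeups.
struct PendingRequest {
  int64_t bytes_remaining;
};

void RefillAndGrant(int64_t& available_bytes,
                    std::deque<PendingRequest>& queue) {
  while (!queue.empty() && available_bytes > 0) {
    PendingRequest& head = queue.front();
    int64_t granted = std::min(head.bytes_remaining, available_bytes);
    head.bytes_remaining -= granted;
    available_bytes -= granted;
    if (head.bytes_remaining == 0) {
      queue.pop_front();  // fully satisfied; the requester can proceed
    }
    // Otherwise the partially granted request stays at the head until the
    // next refill, so later, smaller requests cannot overtake it.
  }
}
```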
Peter Dillinger | 459969e993 |
Simplify detection of x86 CPU features (#11419)
Summary: **Background** - runtime detection of certain x86 CPU features was added for optimizing CRC32c checksums, where performance is dramatically affected by the availability of certain CPU instructions and code using intrinsics for those instructions. And Java builds with the native library try to be broadly compatible but performant. What has changed is that CRC32c is no longer the most efficient checksum on contemporary x86_64 hardware, nor the default checksum. XXH3 is generally faster and not as dramatically impacted by the availability of certain CPU instructions. For example, on my Skylake system using db_bench (similar on an older Skylake system without AVX512): PORTABLE=1 empty USE_SSE : xxh3->8 GB/s crc32c->0.8 GB/s (no SSE4.2 nor AVX2 instructions) PORTABLE=1 USE_SSE=1 : xxh3->19 GB/s crc32c->16 GB/s (with SSE4.2 and AVX2) PORTABLE=0 USE_SSE ignored: xxh3->28 GB/s crc32c->16 GB/s (also some AVX512) Testing a ~10-year-old system, with SSE4.2 but without AVX2, crc32c is a similar speed to the new systems but xxh3 is only about half that speed, also 8GB/s like the non-AVX2 compile above. Given that xxh3 has specific optimization for AVX2, I think we can infer that crc32c is only fastest for that ~2008-2013 period when SSE4.2 was included but not AVX2. And given that xxh3 is only about 2x slower on these systems (not like >10x slower for unoptimized crc32c), I don't think we need to invest too much in optimally adapting to these old cases. x86 hardware that doesn't support fast CRC32c is now extremely rare, so requiring a custom build to support such hardware is fine IMHO. **This change** does two related things: * Remove runtime CPU detection for optimizing CRC32c on x86. Maintaining this code is non-zero work, and compiling special code that doesn't work on the configured target instruction set for code generation is always dubious. (On the one hand we have to ensure the CRC32c code uses SSE4.2 but on the other hand we have to ensure nothing else does.) * Detect CPU features in source code, not in build scripts. Although there are some hypothetical advantages to detecting in build scripts (compiler generality), RocksDB supports at least three build systems: make, cmake, and buck. It's not practical to support feature detection on all three, and we have suffered from missed optimization opportunities by relying on missing or incomplete detection in cmake and buck. We also depend on some components like xxhash that do source code detection anyway. **In more detail:** * `HAVE_SSE42`, `HAVE_AVX2`, and `HAVE_PCLMUL` replaced by standard macros `__SSE4_2__`, `__AVX2__`, and `__PCLMUL__`. * MSVC does not provide high fidelity defines for SSE, PCLMUL, or POPCNT, but we can infer those from `__AVX__` or `__AVX2__` in a compatibility header. In rare cases of false negative or false positive feature detection, a build engineer should be able to set defines to work around the issue. * `__POPCNT__` is another standard define, but we happen to only need it on MSVC, where it is set by that compatibility header, or can be set by the build engineer. * `PORTABLE` can be set to a CPU type, e.g. "haswell", to compile for that CPU type. * `USE_SSE` is deprecated, now equivalent to PORTABLE=haswell, which roughly approximates its old behavior. Notably, this change should enable more builds to use the AVX2-optimized Bloom filter implementation. 
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11419 Test Plan: existing tests, CI Manual performance tests after the change match the before above (none expected with make build). We also see AVX2 optimized Bloom filter code enabled when expected, by injecting a compiler error. (Performance difference is not big on my current CPU.) Reviewed By: ajkr Differential Revision: D45489041 Pulled By: pdillinger fbshipit-source-id: 60ceb0dd2aa3b365c99ed08a8b2a087a9abb6a70 |
2 years ago |
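For illustration, a sketch (not the actual RocksDB code) of what in-source feature detection with the standard predefined macros looks like:

```cpp
#include <cstdint>
#if defined(__SSE4_2__)
#include <nmmintrin.h>  // SSE4.2 intrinsics, including _mm_crc32_u64
#endif

// Extend a running CRC32c by one u64, using the hardware instruction when
// the compiler targets SSE4.2 (e.g., PORTABLE=haswell) and a stub otherwise.
uint32_t Crc32cExtendU64(uint32_t crc, uint64_t v) {
#if defined(__SSE4_2__)
  return static_cast<uint32_t>(_mm_crc32_u64(crc, v));
#else
  (void)v;
  return crc;  // placeholder; a real fallback would use a table-based CRC
#endif
}
```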
Peter Dillinger | f4a02f2c52 |
Add hash_seed to Caches (#11391)
Summary: See motivation and description in the new ShardedCacheOptions::hash_seed option. Updated db_bench so that its seed param is used for the cache hash seed. Made its code safer to ensure the seed is set before use. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11391 Test Plan: unit tests added / updated **Performance** - no discernible difference seen running cache_bench repeatedly before & after. With lru_cache and hyper_clock_cache. Reviewed By: hx235 Differential Revision: D45557797 Pulled By: pdillinger fbshipit-source-id: 40bf4da6d66f9d41a8a0eb8e5cf4246a4aa07934 |
2 years ago |
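A sketch of pinning the seed, assuming `hash_seed` is a plain integer field on `ShardedCacheOptions` (inherited by `LRUCacheOptions`) as the summary describes:

```cpp
#include <memory>
#include "rocksdb/cache.h"

// Fix the shard hash seed so cache behavior is reproducible across runs,
// e.g. for benchmarking. By default RocksDB derives a seed itself.
std::shared_ptr<rocksdb::Cache> MakeSeededCache() {
  rocksdb::LRUCacheOptions opts;
  opts.capacity = static_cast<size_t>(1) << 30;  // 1 GiB
  opts.hash_seed = 42;  // assumed field added by this PR
  return rocksdb::NewLRUCache(opts);
}
```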
akankshamahajan | 6ba4717f35 |
Fix build error: variable 'base_level' may be uninitialized (#11435)
Summary: Fix build error: variable 'base_level' may be uninitialized ``` db_impl_compaction_flush.cc:1195:21: error: variable 'base_level' may be uninitialized when used here [-Werror,-Wconditional-uninitialized] level = base_level; ^~~~~~~~~~ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11435 Test Plan: CircleCI jobs Reviewed By: cbi42 Differential Revision: D45708176 Pulled By: akankshamahajan15 fbshipit-source-id: 851b1205b22b63d728495e5735fa91b0ad8e012b |
2 years ago |
Hui Xiao | 8f763bdeab |
Record and use the tail size to prefetch table tail (#11406)
Summary: **Context:** We prefetch the tail part of an SST file (i.e., the blocks after data blocks till the end of the file) during each SST file open, in the hope of prefetching all the stuff at once ahead of time for later reads, e.g., footer, meta index, filter/index, etc. The existing approach to estimate the tail size to prefetch is through `TailPrefetchStats` heuristics introduced in https://github.com/facebook/rocksdb/pull/4156, which has caused small reads in unlucky cases (e.g., a small read into the tail buffer during table open in thread 1 under the same BlockBasedTableFactory object can make thread 2's tail prefetching use a small size that it shouldn't) and is hard to debug. Therefore we decided to record the exact tail size and use it directly to prefetch the tail of the SST instead of relying on heuristics. **Summary:** - Obtain and record in manifest the tail size in `BlockBasedTableBuilder::Finish()` - For backward compatibility, we fall back to TailPrefetchStats and last to simple heuristics that the tail size is a linear portion of the file size - see PR conversation for more. - Make `tail_start_offset` part of the table properties and deduce the tail size to record in manifest for external files (e.g., file ingestion, import CF) and db repair (with no access to manifest). Pull Request resolved: https://github.com/facebook/rocksdb/pull/11406 Test Plan: 1. New UT 2. db bench Note: db bench on /tmp/ where direct read is supported is too slow to finish, and the default pinning setting in db bench is not helpful for profiling the number of SST reads of Get. Therefore I hacked the following to obtain the comparison below. ``` diff --git a/table/block_based/block_based_table_reader.cc b/table/block_based/block_based_table_reader.cc index bd5669f0f..791484c1f 100644 --- a/table/block_based/block_based_table_reader.cc +++ b/table/block_based/block_based_table_reader.cc @@ -838,7 +838,7 @@ Status BlockBasedTable::PrefetchTail( &tail_prefetch_size); // Try file system prefetch - if (!file->use_direct_io() && !force_direct_prefetch) { + if (false && !file->use_direct_io() && !force_direct_prefetch) { if (!file->Prefetch(prefetch_off, prefetch_len, ro.rate_limiter_priority) .IsNotSupported()) { prefetch_buffer->reset(new FilePrefetchBuffer( diff --git a/tools/db_bench_tool.cc b/tools/db_bench_tool.cc index ea40f5fa0..39a0ac385 100644 --- a/tools/db_bench_tool.cc +++ b/tools/db_bench_tool.cc @@ -4191,6 +4191,8 @@ class Benchmark { std::shared_ptr<TableFactory>(NewCuckooTableFactory(table_options)); } else { BlockBasedTableOptions block_based_options; + block_based_options.metadata_cache_options.partition_pinning = + PinningTier::kAll; block_based_options.checksum = static_cast<ChecksumType>(FLAGS_checksum_type); if (FLAGS_use_hash_search) { ``` Create DB ``` ./db_bench --bloom_bits=3 --use_existing_db=1 --seed=1682546046158958 --partition_index_and_filters=1 --statistics=1 -db=/dev/shm/testdb/ -benchmarks=readrandom -key_size=3200 -value_size=512 -num=1000000 -write_buffer_size=6550000 -disable_auto_compactions=false -target_file_size_base=6550000 -compression_type=none ``` ReadRandom ``` ./db_bench --bloom_bits=3 --use_existing_db=1 --seed=1682546046158958 --partition_index_and_filters=1 --statistics=1 -db=/dev/shm/testdb/ -benchmarks=readrandom -key_size=3200 -value_size=512 -num=1000000 -write_buffer_size=6550000 -disable_auto_compactions=false -target_file_size_base=6550000 -compression_type=none ``` (a) Existing (Use TailPrefetchStats for tail size + use a separate prefetch buffer in
PartitionedFilter/IndexReader::CacheDependencies()) ``` rocksdb.table.open.prefetch.tail.hit COUNT : 3395 rocksdb.sst.read.micros P50 : 5.655570 P95 : 9.931396 P99 : 14.845454 P100 : 585.000000 COUNT : 999905 SUM : 6590614 ``` (b) This PR (Record tail size + use the same tail buffer in PartitionedFilter/IndexReader::CacheDependencies()) ``` rocksdb.table.open.prefetch.tail.hit COUNT : 14257 rocksdb.sst.read.micros P50 : 5.173347 P95 : 9.015017 P99 : 12.912610 P100 : 228.000000 COUNT : 998547 SUM : 5976540 ``` As we can see, we increase the prefetch tail hit count and decrease SST read count with this PR 3. Test backward compatibility by stepping through reading with post-PR code on a db generated pre-PR. Reviewed By: pdillinger Differential Revision: D45413346 Pulled By: hx235 fbshipit-source-id: 7d5e36a60a72477218f79905168d688452a4c064 |
2 years ago |
Peter Dillinger | e1d1c50317 |
Organize + modernize ReadOptions (#11430)
Summary: Roughly group ReadOptions into those that apply generally and those that only apply to range scans. Also use field assignment idiom to simplify specification of default values. Also some rearranging to reduce unused padding. sizeof(ReadOptions) was 144 on my system, now 136. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11430 Test Plan: existing tests, no functional change intended Reviewed By: hx235 Differential Revision: D45626508 Pulled By: pdillinger fbshipit-source-id: 227d4158c5123405324f273ded2eb9d8bce86364 |
2 years ago |
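The "field assignment idiom" referred to above is default member initializers; a hypothetical excerpt (not the actual header) illustrating the pattern and the general-vs-range-scan grouping:

```cpp
#include <cstddef>

// Sketch of the pattern: defaults live at the member declaration, visible
// in one place, instead of being repeated in constructor initializer lists.
struct ReadOptionsSketch {
  // Options that apply generally
  bool verify_checksums = true;
  bool fill_cache = true;
  // Options that only apply to range scans / iterators
  size_t readahead_size = 0;
  bool total_order_seek = false;
};
```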
Akanksha koul | 736b3c4909 |
Added encryption plugin based on Intel open-source ipp-crypto library (#11429)
Summary: This PR adds a plugin that supports AES-CTR encryption for RocksDB based on the highly performant Intel open-source cryptographic library IPP-Crypto. Details: - supports AES-128, AES-192, and AES-256. - uses the CTR mode of operation. - based on the Intel® crypto library -- https://github.com/intel/ipp-crypto. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11429 Reviewed By: cbi42 Differential Revision: D45622342 Pulled By: ajkr fbshipit-source-id: 2463fa2b8ae625fdd7d83768e274c74e3f2a0f46 |
2 years ago |
Peter Dillinger | a5909f8864 |
Clarify io_activity (#11427)
Summary: Document ReadOptions::io_activity as internal-use-only. And to keep kUnknown as last (and why). Pull Request resolved: https://github.com/facebook/rocksdb/pull/11427 Test Plan: comments only Reviewed By: hx235 Differential Revision: D45576986 Pulled By: pdillinger fbshipit-source-id: aae15aa22ea91370c2b7366154e45d4b91a79ad2 |
2 years ago |
Changyu Bi | a11f1e12ca |
Fix flaky test `DBTestUniversalManualCompactionOutputPathId.ManualCompactionOutputPathId` (#11412)
Summary: the test is flaky when compiled with `make -j56 COERCE_CONTEXT_SWITCH=1 ./db_universal_compaction_test`. The cause is that a manual compaction `CompactRange()` can finish and return before obsolete files are deleted. One reason for this is that a manual compaction waits until `manual.done` is set here |
2 years ago |
nccx | c81d58016b |
Add more users (#11369)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11369 Reviewed By: akankshamahajan15 Differential Revision: D44891388 Pulled By: ajkr fbshipit-source-id: cc426bc1359f0721a9002652e99d1317ff7b6383 |
2 years ago |
clundro | 50b33ebb1b |
remove redundant move (#11418)
Summary: When I use g++-13 to run the `make all` command, the build emits the following warnings. ``` db/compaction/compaction_job_test.cc: In member function ‘void rocksdb::CompactionJobTestBase::AddMockFile(const rocksdb::mock::KVVector&, int)’: db/compaction/compaction_job_test.cc:376:57: error: redundant move in initialization [-Werror=redundant-move] 376 | env_, GenerateFileName(file_number), std::move(contents))); | ~~~~~~~~~^~~~~~~~~~ db/compaction/compaction_job_test.cc:375:7: note: in expansion of macro ‘EXPECT_OK’ 375 | EXPECT_OK(mock_table_factory_->CreateMockTable( | ^~~~~~~~~ db/compaction/compaction_job_test.cc:376:57: note: remove ‘std::move’ call 376 | env_, GenerateFileName(file_number), std::move(contents))); | ~~~~~~~~~^~~~~~~~~~ db/compaction/compaction_job_test.cc:375:7: note: in expansion of macro ‘EXPECT_OK’ 375 | EXPECT_OK(mock_table_factory_->CreateMockTable( | ^~~~~~~~~ cc1plus: all warnings being treated as errors make: *** [Makefile:2507: db/compaction/compaction_job_test.o] Error 1 ``` I also added some `(void)unused_variable` statements because of the cmake arguments `-Wunused-but-set-variable -Wunused-but-set-variable` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11418 Reviewed By: akankshamahajan15 Differential Revision: D45528223 Pulled By: ajkr fbshipit-source-id: fee1a77c30039a56b481de953f0a834cc788abbc |
2 years ago |
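The warning in question is the standard `-Wredundant-move` case; a minimal illustration (not the RocksDB code) of why the `std::move` is unnecessary:

```cpp
#include <string>
#include <utility>

std::string BuildRedundant() {
  std::string s = "data";
  return std::move(s);  // -Wredundant-move: the return already moves `s`,
                        // and std::move can even inhibit copy elision
}

std::string BuildFixed() {
  std::string s = "data";
  return s;  // implicit move (or NRVO); no std::move needed
}
```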
leipeng | a475e9f746 |
DBIter::FindValueForCurrentKey: remove unused Status s (#11394)
Summary: This PR removes a piece of historical, unused code. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11394 Reviewed By: ajkr Differential Revision: D45506226 Pulled By: akankshamahajan15 fbshipit-source-id: 32c98627100c9ad131bf65c4a1fe97ab61502daf |
2 years ago |
anand76 | 03a892a9fb |
Delete empty WAL files on reopen (#11409)
Summary: When a DB is opened, RocksDB creates an empty WAL file. When the DB is reopened and the WAL is empty, the min log number to keep is not advanced until a memtable flush happens. If a process crashes soon after reopening the DB, it's likely that no memtable flush would have happened, which means the empty WAL file is not deleted. In a crash loop scenario, this leads to empty WAL files accumulating. Fix this by ensuring the min log number is advanced if the WAL is empty. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11409 Test Plan: Add a unit test Reviewed By: ajkr Differential Revision: D45281685 Pulled By: anand1976 fbshipit-source-id: 0225877c613e65ffb30972a0051db2830105423e |
2 years ago |
Peter Dillinger | 41a7fbf758 |
Avoid long parameter lists configuring Caches (#11386)
Summary: For better clarity, encourage more options to be specified explicitly using fields rather than positionally via constructor parameter lists. Simplifies code maintenance as new fields are added. Deprecate some cases of the confusing pattern of NewWhatever() functions returning shared_ptr. Net reduction of about 70 source code lines (including comments). Pull Request resolved: https://github.com/facebook/rocksdb/pull/11386 Test Plan: existing tests Reviewed By: ajkr Differential Revision: D45059075 Pulled By: pdillinger fbshipit-source-id: d53fa09b268024f9c55254bb973b6c69feebf41a |
2 years ago |
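A sketch of the field-based style the change encourages, using `LRUCacheOptions` (the fields below are pre-existing public options):

```cpp
#include <memory>
#include "rocksdb/cache.h"

// Configure a block cache by naming each option, instead of a long
// positional constructor call where arguments are easy to transpose.
std::shared_ptr<rocksdb::Cache> MakeBlockCache() {
  rocksdb::LRUCacheOptions opts;
  opts.capacity = 512 << 20;           // 512 MiB
  opts.num_shard_bits = 6;             // 64 shards
  opts.strict_capacity_limit = false;  // evict rather than fail when full
  opts.high_pri_pool_ratio = 0.5;      // reserve half for high-priority blocks
  return rocksdb::NewLRUCache(opts);
}
```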
Peter Dillinger | e0e318f370 |
Optionally support lldb for stack traces and debugger attach (#11413)
Summary: lldb is more supported for Meta infrastructure than gdb, so adding support for it in generating stack traces and attaching debugger on crash. For now you need to set ROCKSDB_LLDB_STACK=1 for stack traces or ROCKSDB_DEBUG=lldb for interactive debugging. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11413 Test Plan: some manual testing (no production code changes) Reviewed By: ajkr Differential Revision: D45360952 Pulled By: pdillinger fbshipit-source-id: 862bc8800eb03e3bdc1be8c0702960a19db45be8 |
2 years ago |
Peter Dillinger | 76a40286b0 |
Fix duplicate symbols in linking with buck2 (#11421)
Summary: Seen in Meta-internal builds that manually depend on rocksdb_whole_archive_lib and want to automatically depend on rocksdb_lib. This change puts rocksdb_lib in the ancestry of rocksdb_whole_archive_lib, and buck2 appears to recognize that even if rocksdb_lib is listed as a separate dependency downstream. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11421 Test Plan: Run failing internal build with the change. See T147085939 Reviewed By: akankshamahajan15 Differential Revision: D45446689 Pulled By: pdillinger fbshipit-source-id: e8a891fa020dfcf0564b35d30511d70347650fa8 |
2 years ago |
Andrew Kryczka | 925d8252e5 |
Shard JemallocNodumpAllocator (#11400)
Summary:
RocksDB's jemalloc no-dump allocator (`NewJemallocNodumpAllocator()`) was using a single manual arena. This arena's lock contention could be very high when thread caching is disabled for RocksDB blocks (e.g., when using `MALLOC_CONF='tcache_max:4096'` and `rocksdb_block_size=16384`).
This PR changes the jemalloc no-dump allocator to use a configurable number of manual arenas. That number is required to be a power of two so we can avoid division. The allocator shards allocation requests randomly across those manual arenas.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11400
Test Plan:
- mysqld setup
- Branch: fb-mysql-8.0.28 (
|
2 years ago |
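A sketch of the two pieces described above: the (assumed) `num_arenas` knob on `JemallocAllocatorOptions`, and the power-of-two masking that replaces division when picking an arena:

```cpp
#include <cstddef>
#include <cstdint>

#include "rocksdb/memory_allocator.h"

// With a power-of-two arena count, shard selection is a bitwise AND
// rather than a modulo, which would require a division.
size_t PickArena(uint32_t random_bits, size_t num_arenas_pow2) {
  return random_bits & (num_arenas_pow2 - 1);
}

rocksdb::JemallocAllocatorOptions MakeShardedOptions() {
  rocksdb::JemallocAllocatorOptions jopts;
  jopts.num_arenas = 8;  // assumed field from this PR; must be a power of two
  return jopts;
}
```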
Levi Tamasi | d3ed796855 |
Deflake some old BlobDB test cases (#11417)
Summary: The old `StackableDB` based BlobDB implementation relies on a DB listener to track the total size of the SST files in the database and to trigger FIFO eviction. Some test cases in `BlobDBTest` assume that the listener is notified by the time `DB::Flush` returns, which is not guaranteed (side note: `TEST_WaitForFlushMemTable` would not guarantee this either). The patch fixes these tests by using `SyncPoint`s to make sure the listener is actually called before verifying the FIFO behavior. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11417 Test Plan: ``` make -j56 COERCE_CONTEXT_SWITCH=1 blob_db_test ./blob_db_test --gtest_filter=BlobDBTest.FIFOEviction_TriggerOnSSTSizeChange ./blob_db_test --gtest_filter=BlobDBTest.FilterForFIFOEviction ./blob_db_test --gtest_filter=BlobDBTest.FIFOEviction_NoEnoughBlobFilesToEvict ``` Reviewed By: ajkr Differential Revision: D45407135 Pulled By: ltamasi fbshipit-source-id: fcd63d76937d2c975f569a6635ce8730772a3d75 |
2 years ago |
Changyu Bi | 62fc15f009 |
Block per key-value checksum (#11287)
Summary: add option `block_protection_bytes_per_key` and implementation for block per key-value checksum. The main changes are 1. checksum construction and verification in block.cc/h 2. pass the option `block_protection_bytes_per_key` around (mainly for methods defined in table_cache.h) 3. unit tests/crash test updates Tests: * Added unit tests * Crash test: `python3 tools/db_crashtest.py blackbox --simple --block_protection_bytes_per_key=1 --write_buffer_size=1048576` Follow up (maybe as a separate PR): make sure corruption statuses returned from BlockIters are correctly handled. Performance: Turning on block per KV protection has a non-trivial negative impact on read performance and costs additional memory. For memory, each block includes an additional 24 bytes for checksum-related states besides the checksum itself. For CPU, I set up a DB of size ~1.2GB with 5M keys (32-byte keys and 200-byte values) which compacts to ~5 SST files (target file size 256 MB) in L6 without compression. I tested readrandom performance with various block cache sizes (to mimic various cache hit rates): ``` SETUP make OPTIMIZE_LEVEL="-O3" USE_LTO=1 DEBUG_LEVEL=0 -j32 db_bench ./db_bench -benchmarks=fillseq,compact0,waitforcompaction,compact,waitforcompaction -write_buffer_size=33554432 -level_compaction_dynamic_level_bytes=true -max_background_jobs=8 -target_file_size_base=268435456 --num=5000000 --key_size=32 --value_size=200 --compression_type=none BENCHMARK ./db_bench --use_existing_db -benchmarks=readtocache,readrandom[-X10] --num=5000000 --key_size=32 --disable_auto_compactions --reads=1000000 --block_protection_bytes_per_key=[0|1] --cache_size=$CACHESIZE The readrandom ops/sec looks like the following: Block cache size: 2GB 1.2GB * 0.9 1.2GB * 0.8 1.2GB * 0.5 8MB Main 240805 223604 198176 161653 139040 PR prot_bytes=0 238691 226693 200127 161082 141153 PR prot_bytes=1 214983 193199 178532 137013 108211 prot_bytes=1 vs -10% -15% -10.8% -15% -23% prot_bytes=0 ``` The benchmark has a lot of variance, but there was a 5% to 25% regression in this benchmark with different cache hit rates. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11287 Reviewed By: ajkr Differential Revision: D43970708 Pulled By: cbi42 fbshipit-source-id: ef98d898b71779846fa74212b9ec9e08b7183940 |
2 years ago |
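Enabling the new option is a one-line change on the column family options; a sketch (the supported non-zero values are assumed to mirror the analogous `memtable_protection_bytes_per_key` option seen elsewhere in this log):

```cpp
#include "rocksdb/options.h"

// Protect each key-value entry in a block with a 1-byte checksum, trading
// integrity coverage against the memory and CPU costs measured above.
rocksdb::Options MakeProtectedOptions() {
  rocksdb::Options options;
  options.block_protection_bytes_per_key = 1;  // 0 disables; assumed 1/2/4/8
  return options;
}
```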
leipeng | 40d69b59ad |
DBImpl::MultiGet: delete unused var `superversions_to_delete` (#11395)
Summary: In db_impl.cc DBImpl::MultiGet: delete unused var `superversions_to_delete` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11395 Reviewed By: ajkr Differential Revision: D45240896 Pulled By: cbi42 fbshipit-source-id: 0fff99b0d794b6f6d4aadee6036bddd6cb19eb31 |
2 years ago |
Hui Xiao | 3622cfa34a |
Add back io_uring stress test hack with DbStressFSWrapper for FS not supporting read async (#11404)
Summary: **Context/Summary:** To better utilize `DbStressFSWrapper` for some assertions, https://github.com/facebook/rocksdb/pull/11288 removed an io_uring stress test hack for POSIX FS not supporting read async that was added in https://github.com/facebook/rocksdb/pull/11242. It was removed based on the assumption that a later PR https://github.com/facebook/rocksdb/pull/11296 is sufficient to serve as an alternative workaround. But recent stress tests have shown the opposite, mostly because the 11296 approach might be subject to incompleteness when more `ReadOptions` are passed down, as https://github.com/facebook/rocksdb/pull/11288 has done. As a short-term solution to both work around the POSIX FS constraint above and utilize `DbStressFSWrapper` for the 11288 assertion, I proposed this PR. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11404 Test Plan: - Stress test ensures 11288's assertion is still effective in `DbStressFSWrapper` ``` ./db_stress --acquire_snapshot_one_in=10000 --adaptive_readahead=0 --allow_data_in_errors=True --async_io=1 --avoid_flush_during_recovery=1 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=100000 --batch_protection_bytes_per_key=8 --block_size=16384 --bloom_bits=16 --bottommost_compression_type=disable --bytes_per_sync=0 --cache_index_and_filter_blocks=0 --cache_size=8388608 --cache_type=hyper_clock_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=1 --charge_table_reader=0 --checkpoint_one_in=1000000 --checksum_type=kxxHash64 --clear_column_family_one_in=0 --compact_files_one_in=1000000 --compact_range_one_in=1000000 --compaction_pri=1 --compaction_ttl=0 --compression_max_dict_buffer_bytes=32767 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=lz4 --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=0 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=1 --disable_wal=0 --enable_compaction_filter=0 --enable_pipelined_write=1 --enable_thread_tracking=0 --expected_values_dir=$exp --fail_if_options_file_error=1 --fifo_allow_compaction=0 --file_checksum_impl=crc32c --flush_one_in=1000000 --format_version=4 --get_current_wal_file_one_in=0 --get_live_files_one_in=1000000 --get_property_one_in=1000000 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=4 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=16384 --iterpercent=10 --key_len_percent_dist=1,30,69 --kill_random_test=888887 --level_compaction_dynamic_level_bytes=0 --lock_wal_one_in=1000000 --log2_keys_per_lock=10 --long_running_snapshots=0 --manual_wal_flush_one_in=1000 --mark_for_compaction_one_file_in=10 --max_auto_readahead_size=16384 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=25000000 --max_key_len=3 --max_manifest_file_size=1073741824 --max_write_batch_group_size_bytes=1048576 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=8388608 --memtable_prefix_bloom_size_ratio=0.1 --memtable_protection_bytes_per_key=4 --memtable_whole_key_filtering=0 --memtablerep=skip_list --min_write_buffer_number_to_merge=2 --mmap_read=0 --mock_direct_io=True --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=100 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 
--ops_per_thread=20000000 --optimize_filters_for_memory=0 --paranoid_file_checks=0 --partition_filters=0 --partition_pinning=0 --pause_background_one_in=1000000 --periodic_compaction_seconds=0 --prefix_size=1 --prefixpercent=5 --prepopulate_block_cache=1 --preserve_internal_time_seconds=36000 --progress_reports=0 --read_fault_one_in=32 --readahead_size=16384 --readpercent=45 --recycle_log_file_num=0 --reopen=20 --ribbon_starting_level=1 --secondary_cache_fault_one_in=32 --secondary_cache_uri=compressed_secondary_cache://capacity=8388608 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --stats_dump_period_sec=10 --subcompactions=1 --sync=0 --sync_fault_injection=1 --target_file_size_base=2097152 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=2 --unpartitioned_pinning=3 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=1 --use_full_merge_v1=0 --use_get_entity=0 --use_merge=1 --use_multi_get_entity=0 --use_multiget=1 --use_put_entity_one_in=0 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_db_one_in=100000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --wal_compression=none --write_buffer_size=4194304 --write_dbid_to_manifest=1 --writepercent=35 ``` - Monitor future stress test to show `MultiGet error: Not implemented: ReadAsync` is gone Reviewed By: ltamasi Differential Revision: D45242280 Pulled By: hx235 fbshipit-source-id: 9823e3fbd4e9672efdd31478a2f2cbd68a98bdf5 |
2 years ago |
Peter Dillinger | 46dbcfd799 |
Start version 8.3 (#11405)
Summary: Update and clean up history. Update version number. Add to compatibility test. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11405 Reviewed By: ltamasi Differential Revision: D45242779 Pulled By: pdillinger fbshipit-source-id: 860bd047584d051472ba9ccefae7ebc3f37b1d8f |
2 years ago |
Peter Dillinger | a2c1f57358 |
Fix compression tests^2 (#11403)
Summary: This time a particular version of bzip2 is under-compressing vs. expectation in BlockBasedTableTest.CompressionRatioThreshold. We'll exempt that algorithm like I did for DBStatisticsTest.CompressionStatsTest. https://app.circleci.com/pipelines/github/facebook/rocksdb/26869/workflows/a46246db-73c7-4946-af82-10a78a7df6af/jobs/596124 Pull Request resolved: https://github.com/facebook/rocksdb/pull/11403 Test Plan: CI Reviewed By: ltamasi Differential Revision: D45233441 Pulled By: pdillinger fbshipit-source-id: 506c8dfe5e0397c78193359df6288397bf0667c9 |
2 years ago |
Peter Dillinger | fb63d9b4ee |
Fix compression tests when snappy not available (#11396)
Summary: Tweak some bounds and things, and reduce risk of surprise results by running on all supported compressions (mostly). Also improves the precise compressibility of CompressibleString by using RandomBinaryString. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11396 Test Plan: updated tests Reviewed By: ltamasi Differential Revision: D45211938 Pulled By: pdillinger fbshipit-source-id: 9dc1dd8574a60a9364efe18558be66d31a35598b |
2 years ago |
Peter Dillinger | d79be3dca2 |
Changes and enhancements to compression stats, thresholds (#11388)
Summary: ## Option API updates * Add new CompressionOptions::max_compressed_bytes_per_kb, which corresponds to 1024.0 / min allowable compression ratio. This avoids the hard-coded minimum ratio of 8/7. * Remove unnecessary constructor for CompressionOptions. * Document undocumented CompressionOptions. Use idiom for default values shown clearly in one place (not precariously repeated). ## Stat API updates * Deprecate the BYTES_COMPRESSED, BYTES_DECOMPRESSED histograms. Histograms incur substantial extra space & time costs compared to tickers, and the distribution of uncompressed data block sizes tends to be uninteresting. If we're interested in that distribution, I don't see why it should be limited to blocks stored as compressed. * Deprecate the NUMBER_BLOCK_NOT_COMPRESSED ticker, because the name is very confusing. * New or existing tickers relevant to compression: * BYTES_COMPRESSED_FROM * BYTES_COMPRESSED_TO * BYTES_COMPRESSION_BYPASSED * BYTES_COMPRESSION_REJECTED * COMPACT_WRITE_BYTES + FLUSH_WRITE_BYTES (both existing) * NUMBER_BLOCK_COMPRESSED (existing) * NUMBER_BLOCK_COMPRESSION_BYPASSED * NUMBER_BLOCK_COMPRESSION_REJECTED * BYTES_DECOMPRESSED_FROM * BYTES_DECOMPRESSED_TO We can compute a number of things with these stats: * "Successful" compression ratio: BYTES_COMPRESSED_FROM / BYTES_COMPRESSED_TO * Compression ratio of data on which compression was attempted: (BYTES_COMPRESSED_FROM + BYTES_COMPRESSION_REJECTED) / (BYTES_COMPRESSED_TO + BYTES_COMPRESSION_REJECTED) * Compression ratio of data that could be eligible for compression: (BYTES_COMPRESSED_FROM + X) / (BYTES_COMPRESSED_TO + X) where X = BYTES_COMPRESSION_REJECTED + NUMBER_BLOCK_COMPRESSION_REJECTED * Overall SST compression ratio (compression disabled vs. actual): (Y - BYTES_COMPRESSED_TO + BYTES_COMPRESSED_FROM) / Y where Y = COMPACT_WRITE_BYTES + FLUSH_WRITE_BYTES Keeping _REJECTED separate from _BYPASSED helps us to understand "wasted" CPU time in compression. ## BlockBasedTableBuilder Various small refactorings, optimizations, and name clean-ups. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11388 Test Plan: unit tests added * `options_settable_test.cc`: use non-deprecated idiom for configuring CompressionOptions from string. The old idiom is tested elsewhere and does not need to be updated to support the new field. Reviewed By: ajkr Differential Revision: D45128202 Pulled By: pdillinger fbshipit-source-id: 5a652bf5c022b7ec340cf79018cccf0686962803 |
2 years ago |
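The ratios above can be computed directly from the tickers; a sketch using the ticker names listed in the summary:

```cpp
#include <cstdint>
#include "rocksdb/statistics.h"

// "Successful" compression ratio, as defined above:
// BYTES_COMPRESSED_FROM / BYTES_COMPRESSED_TO.
double SuccessfulCompressionRatio(rocksdb::Statistics& stats) {
  uint64_t from = stats.getTickerCount(rocksdb::BYTES_COMPRESSED_FROM);
  uint64_t to = stats.getTickerCount(rocksdb::BYTES_COMPRESSED_TO);
  return to == 0 ? 0.0 : static_cast<double>(from) / static_cast<double>(to);
}
```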
Changyu Bi | adc9001f20 |
Improve error message from `SanityCheckCFOptions()` for merge_operator (#11393)
Summary: When there is a merge_operator mismatch in `SanityCheckCFOptions()`, for example, going from merge operator "CustomMergeOp" to nullptr, an error message like the following is returned: "failed the verification on ColumnFamilyOptions::merge_operator--- The specified one is nullptr while the **persisted one is nullptr**." This happens when the persisted merge operator is not a RocksDB built-in one. This PR improves this error message to include the actual persisted merge operator name. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11393 Test Plan: add unit test to check the error message when going from merge op -> nullptr and going from merge op1 to merge op 2. Reviewed By: ajkr Differential Revision: D45190131 Pulled By: cbi42 fbshipit-source-id: 67712c2fec29c654c15166d1be985e710e6081e5 |
2 years ago |
Hui Xiao | 151242ce46 |
Group rocksdb.sst.read.micros stat by IOActivity flush and compaction (#11288)
Summary: **Context:** The existing stat rocksdb.sst.read.micros does not reflect each of compaction and flush cases but aggregates them, which is not so helpful for us to understand the IO read behavior of each of them. **Summary** - Update `StopWatch` and `RandomAccessFileReader` to record `rocksdb.sst.read.micros` and `rocksdb.file.{flush/compaction}.read.micros` - Fixed the default histogram in `RandomAccessFileReader` - New field `ReadOptions/IOOptions::io_activity`; Pass `ReadOptions` through paths under db open, flush and compaction to where we can prepare `IOOptions` and pass it to `RandomAccessFileReader` - Use `thread_status_util` for assertion in `DbStressFSWrapper` for continuous testing that we are passing the correct `io_activity` under db open, flush and compaction Pull Request resolved: https://github.com/facebook/rocksdb/pull/11288 Test Plan: - **Stress test** - **Db bench 1: rocksdb.sst.read.micros COUNT ≈ sum of rocksdb.file.read.flush.micros's and rocksdb.file.read.compaction.micros's.** (without blob) - May not be exactly the same because `HistogramStat::Add` only guarantees atomicity, not accuracy, across threads. ``` ./db_bench -db=/dev/shm/testdb/ -statistics=true -benchmarks="fillseq" -key_size=32 -value_size=512 -num=50000 -write_buffer_size=655 -target_file_size_base=655 -disable_auto_compactions=false -compression_type=none -bloom_bits=3 (-use_plain_table=1 -prefix_size=10) ``` ``` // BlockBasedTable rocksdb.sst.read.micros P50 : 2.009374 P95 : 4.968548 P99 : 8.110362 P100 : 43.000000 COUNT : 40456 SUM : 114805 rocksdb.file.read.flush.micros P50 : 1.871841 P95 : 3.872407 P99 : 5.540541 P100 : 43.000000 COUNT : 2250 SUM : 6116 rocksdb.file.read.compaction.micros P50 : 2.023109 P95 : 5.029149 P99 : 8.196910 P100 : 26.000000 COUNT : 38206 SUM : 108689 // PlainTable Does not apply ``` - **Db bench 2: performance** **Read** SETUP: db with 900 files ``` ./db_bench -db=/dev/shm/testdb/ -benchmarks="fillseq" -key_size=32 -value_size=512 -num=50000 -write_buffer_size=655 -disable_auto_compactions=true -target_file_size_base=655 -compression_type=none ``` run till convergence ``` ./db_bench -seed=1678564177044286 -use_existing_db=true -db=/dev/shm/testdb -benchmarks=readrandom[-X60] -statistics=true -num=1000000 -disable_auto_compactions=true -compression_type=none -bloom_bits=3 ``` Pre-change `readrandom [AVG 60 runs] : 21568 (± 248) ops/sec` Post-change (no regression, -0.3%) `readrandom [AVG 60 runs] : 21486 (± 236) ops/sec` **Compaction/Flush** run till convergence ``` ./db_bench -db=/dev/shm/testdb2/ -seed=1678564177044286 -benchmarks="fillseq[-X60]" -key_size=32 -value_size=512 -num=50000 -write_buffer_size=655 -disable_auto_compactions=false -target_file_size_base=655 -compression_type=none rocksdb.sst.read.micros COUNT : 33820 rocksdb.sst.read.flush.micros COUNT : 1800 rocksdb.sst.read.compaction.micros COUNT : 32020 ``` Pre-change `fillseq [AVG 46 runs] : 1391 (± 214) ops/sec; 0.7 (± 0.1) MB/sec` Post-change (no regression, ~-0.4%) `fillseq [AVG 46 runs] : 1385 (± 216) ops/sec; 0.7 (± 0.1) MB/sec` Reviewed By: ajkr Differential Revision: D44007011 Pulled By: hx235 fbshipit-source-id: a54c89e4846dfc9a135389edf3f3eedfea257132 |
2 years ago |
Andrew Kryczka | 0a774a102f |
Clarify `SstFileWriter::DeleteRange()` ordering requirements (#11390)
Summary: As titled Pull Request resolved: https://github.com/facebook/rocksdb/pull/11390 Reviewed By: cbi42 Differential Revision: D45148830 Pulled By: ajkr fbshipit-source-id: 9a8dfd040514bae3d8ed9e97a79cae7683f2749a |
2 years ago |
Andrew Kryczka | 6cac4c79d4 |
Fix race condition in db_stress checkpoint cleanup (#11389)
Summary: The old cleanup code had a race condition: 1. Test thread: DestroyDB() marked a file as trash 2. DeleteScheduler thread: Got the file's size and decided to delete it in chunks 3. Test thread: DestroyDir() deleted that trash file 4. DeleteScheduler thread: Began deleting in chunks starting by calling ReopenWritableFile(). Unfortunately this recreates the deleted trash file 5. Test thread: DestroyDir() fails to remove the parent directory because it contains the file created in 4. 6. Test thread: Checkpoint::Create() fails due to the directory already existing It could be repro'd with the following patch/command. Patch: ``` diff --git a/file/delete_scheduler.cc b/file/delete_scheduler.cc index 8a2d1615d..337d24a60 100644 --- a/file/delete_scheduler.cc +++ b/file/delete_scheduler.cc @@ -317,6 +317,12 @@ Status DeleteScheduler::DeleteTrashFile(const std::string& path_in_trash, &num_hard_links, nullptr); if (my_status.ok()) { if (num_hard_links == 1) { + // Give some time for DestroyDir() to delete file entries. Then, the + // below `ReopenWritableFile()` will recreate files, preventing the + // parent directory from being deleted. + if (rand() % 2 == 0) { + usleep(1000); + } std::unique_ptr<FSWritableFile> wf; my_status = fs_->ReopenWritableFile(path_in_trash, FileOptions(), &wf, nullptr); diff --git a/file/file_util.cc b/file/file_util.cc index 43608fcdc..2cee1ad8e 100644 --- a/file/file_util.cc +++ b/file/file_util.cc @@ -263,6 +263,13 @@ Status DestroyDir(Env* env, const std::string& dir) { } } + // Give some time for the DeleteScheduler thread's ReopenWritableFile() to + // recreate deleted files + if (dir.find("checkpoint") != std::string::npos) { + fprintf(stderr, "waiting to destroy %s\n", dir.c_str()); + usleep(10000); + } + if (s.ok()) { s = env->DeleteDir(dir); // DeleteDir might or might not report NotFound ``` Command: ``` TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox --simple --write_buffer_size=131072 --target_file_size_base=131072 --max_bytes_for_level_base=524288 --checkpoint_one_in=100 --clear_column_family_one_in=0 --max_key=1000 --value_size_mult=33 --sst_file_manager_bytes_per_truncate=4096 --sst_file_manager_bytes_per_sec=1048576 --interval=3 --compression_type=none --sync_fault_injection=1 ``` Obviously we don't want to use scheduled deletion here as we need the checkpoint directory deleted immediately. I suspect the DestroyDir() was an attempt to fixup incomplete DestroyDB()s. Now that we expect DestroyDB() to be complete I removed that code. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11389 Reviewed By: hx235 Differential Revision: D45137142 Pulled By: ajkr fbshipit-source-id: 2af743d342c77cc414fd25fc4c9d7c9c6079ad24 |
2 years ago |
Changyu Bi | 43e9a60bb2 |
Always allow L0->L1 trivial move during manual compaction (#11375)
Summary: during manual compaction (CompactRange()), L0->L1 trivial move is disabled when only L0 overlaps with the compacting key range (introduced in https://github.com/facebook/rocksdb/issues/7368 to enforce the kForce* contract). This can cause large memory usage due to compaction readahead when the number of L0 files is large. This PR allows L0->L1 trivial move in this case, and will do a L1 -> L1 intra-level compaction when needed (`bottommost_level_compaction` is kForce*). In brief, consider a DB with only L0 files, where the user calls CompactRange(kForce, nullptr, nullptr), - before this PR, RocksDB does a L0 -> L1 compaction (disallow trivial move), - after this PR, RocksDB does a L0 -> L1 compaction (allow trivial move), and a L1 -> L1 compaction. Users can use kForceOptimized to avoid this extra L1->L1 compaction overhead when L0s are overlapping and cannot be trivially moved. This PR also fixed a bug (see previous discussion in https://github.com/facebook/rocksdb/issues/11041) where `final_output_level` of a manual compaction can be miscalculated when `level_compaction_dynamic_level_bytes=true`. This bug could cause data to be moved to an incorrect level when CompactRangeOptions::change_level is specified. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11375 Test Plan: - Added new unit tests to test that L0 -> L1 compaction allows trivial move and L1 -> L1 compaction is done when needed. Reviewed By: ajkr Differential Revision: D44943518 Pulled By: cbi42 fbshipit-source-id: e9fb770d17b163c18a623e1d1bd6b81159192708 |
2 years ago |
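Requesting the optimized bottommost behavior mentioned above is a field on `CompactRangeOptions` (existing public API); a sketch:

```cpp
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Full manual compaction that lets overlapping L0 files avoid the extra
// L1 -> L1 rewrite where possible, per the kForceOptimized contract.
rocksdb::Status ForceFullCompaction(rocksdb::DB* db) {
  rocksdb::CompactRangeOptions cro;
  cro.bottommost_level_compaction =
      rocksdb::BottommostLevelCompaction::kForceOptimized;
  return db->CompactRange(cro, /*begin=*/nullptr, /*end=*/nullptr);
}
```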
Andrew Kryczka | bd80433c73 |
Set -source 8 in CMAKE_JAVA_COMPILE_FLAGS (#11385)
Summary: build-windows-vs2022 jobs (e.g., https://app.circleci.com/pipelines/github/facebook/rocksdb/26641/workflows/7d1c58b8-7dd6-4dd6-a222-ecdfb0892c3b/jobs/593583) began failing with: ``` (CustomBuild target) -> CUSTOMBUILD : error : Source option 7 is no longer supported. Use 8 or later. [C:\Users\circleci.PACKER-64370BA5\project\build\java\rocksdbjni_classes.vcxproj] ``` So, this PR tries setting `-source 8` instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11385 Reviewed By: ltamasi Differential Revision: D45058172 Pulled By: ajkr fbshipit-source-id: b0daa1ea6f576c8417add40bd6c92710d329c44d |
2 years ago |
Peter Dillinger | 9b698cda51 |
Update GeneralTableTest::ApproximateOffsetOfCompressed values (#11384)
Summary: Because of this failure with snappy 1.1.8, ROCKSDB_NO_FBCODE=1 ``` Value 3531 is not in range [2000, 3525] table/table_test.cc:4231: Failure ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11384 Test Plan: run updated test in failing configuration Reviewed By: ajkr Differential Revision: D45057161 Pulled By: pdillinger fbshipit-source-id: 397054f08033315e2e2bd9410f1fa32ddbf3b9c8 |
2 years ago |
Andrew Kryczka | f3818948e8 |
Deflake DBWriteTest.LockWALInEffect (#11382)
Summary: This test exhibited the following flaky failure: ``` db/db_write_test.cc:653: Failure db_->Resume() Corruption: Not active ``` I was able to repro it by applying the following patch to coerce a specific race condition: ``` diff --git a/db/db_write_test.cc b/db/db_write_test.cc index d82c57376..775ba3cde 100644 --- a/db/db_write_test.cc +++ b/db/db_write_test.cc @@ -636,6 +636,10 @@ TEST_P(DBWriteTest, LockWALInEffect) { ASSERT_TRUE(dbfull()->WALBufferIsEmpty()); ASSERT_OK(db_->UnlockWAL()); + // Test thread: sleep interval: [0, 3) + // In this interval, the file system is active + sleep(3); + // Fail the WAL flush if applicable fault_fs->SetFilesystemActive(false); Status s = Put("key2", "value"); @@ -649,6 +653,11 @@ TEST_P(DBWriteTest, LockWALInEffect) { ASSERT_OK(db_->LockWAL()); ASSERT_OK(db_->UnlockWAL()); } + + // Test thread: sleep interval: [3, 6) + // In this interval, the file system is inactive + sleep(3); + fault_fs->SetFilesystemActive(true); ASSERT_OK(db_->Resume()); // Writes should work again diff --git a/db/flush_job.cc b/db/flush_job.cc index 8193f594f..602ee2c9f 100644 --- a/db/flush_job.cc +++ b/db/flush_job.cc @@ -979,6 +979,10 @@ Status FlushJob::WriteLevel0Table() { DirFsyncOptions(DirFsyncOptions::FsyncReason::kNewFileSynced)); } TEST_SYNC_POINT_CALLBACK("FlushJob::WriteLevel0Table", &mems_); + // Flush thread: sleep interval: [0, 4) + // Upon awakening, the file system will be inactive. Then the MANIFEST + // update will fail. + sleep(4); db_mutex_->Lock(); } base_->Unref(); ``` The fix for this scenario is explained in the code change. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11382 Reviewed By: cbi42 Differential Revision: D45027632 Pulled By: ajkr fbshipit-source-id: 6bfa35a5781c0c080fb74e13f2b2c9f871f7effb |
2 years ago |
Andrew Kryczka | b8555ba470 |
Deflake DBBloomFilterTest.OptimizeFiltersForHits (#11383)
Summary: In CircleCI build-linux-arm-test-full job (https://app.circleci.com/pipelines/github/facebook/rocksdb/26462/workflows/a9d39d2c-c970-4b0f-9c10-7743beb9771b/jobs/591722), this test exhibited the following flaky failure: ``` db/db_bloom_filter_test.cc:2506: Failure Expected: (TestGetTickerCount(options, BLOOM_FILTER_USEFUL)) > (65000 * 2), actual: 120558 vs 130000 ``` I ssh'd to an instance and observed it cuts memtables at slightly different points across runs. Logging in `ConcurrentArena` pointed to `try_lock()` returning false at different points across runs. This PR changes the approach to allow a fixed number of keys per memtable flush. I verified the bloom filter useful count is deterministic now even on the CircleCI ARM instance. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11383 Reviewed By: cbi42 Differential Revision: D45036829 Pulled By: ajkr fbshipit-source-id: b602dacb63955f1af09bf0ed409cde0552805a08 |
2 years ago |
Murali Vilayannur | 226ee25d30 |
Block fetch CPU time counters in perf context (#11342)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11342 Reviewed By: ajkr Differential Revision: D45026838 Pulled By: mnv104 fbshipit-source-id: 099ed9579922b8fa6e7d3332bbb829d50ec47d91 |
2 years ago |
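The summary doesn't name the new counters, so here is only a generic sketch of where perf-context counters surface; the CPU-time perf level and accessors below are existing public API:

```cpp
#include <string>
#include "rocksdb/perf_context.h"
#include "rocksdb/perf_level.h"

// Enable CPU-time counters, run some reads, then dump the thread-local
// perf context, which includes the block fetch CPU time counters.
std::string CollectPerfReport() {
  rocksdb::SetPerfLevel(
      rocksdb::PerfLevel::kEnableTimeAndCPUTimeExceptForMutex);
  rocksdb::get_perf_context()->Reset();
  // ... perform Get()/MultiGet() calls here ...
  return rocksdb::get_perf_context()->ToString();
}
```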
mayue.fight | 4d72f48e57 |
Fix the wrong calculation of largest_key in import_column_family_job (#11381)
Summary: When calculating the largest_key in ImportColumnFamilyJob::GetIngestedFileInfo, only the first element of range_del_iter is considered. If range_del_iter has multiple elements, the largest_key will be wrong. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11381 Reviewed By: cbi42 Differential Revision: D44981450 Pulled By: ajkr fbshipit-source-id: 584bc7da86295568a96984d2951644f289e578c7 |
2 years ago |
Changyu Bi | ba16e8eee7 |
Try to pick more files in `LevelCompactionBuilder::TryExtendNonL0TrivialMove()` (#11347)
Summary: Before this PR, in `LevelCompactionBuilder::TryExtendNonL0TrivialMove(index)`, we start from a file at index and expand the compaction input towards the right to find files to trivially move. This PR adds the logic to also expand towards the left. Another major change made in this PR is to not expand L0 files through `TryExtendNonL0TrivialMove()`. This happens currently when compacting L0 files to an empty output level. The condition for expanding files in `TryExtendNonL0TrivialMove()` is to check the atomic boundary, which does not take into account that L0 files can overlap in key range and are not sorted in key order. So it may include more L0 files than needed and disallow a trivial move. This change is included in this PR so that we don't make it worse by always expanding L0 in both directions. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11347 Test Plan: * new unit test * Benchmark does not show obvious improvement or regression: ``` Write sequentially ./db_bench --benchmarks=fillseq --compression_type=lz4 --write_buffer_size=1000000 --num=100000000 --value_size=100 -level_compaction_dynamic_level_bytes --target_file_size_base=7340032 --max_bytes_for_level_base=16777216 Main: fillseq : 4.726 micros/op 211592 ops/sec 472.607 seconds 100000000 operations; 23.4 MB/s This PR: fillseq : 4.755 micros/op 210289 ops/sec 475.534 seconds 100000000 operations; 23.3 MB/s Write randomly ./db_bench --benchmarks=fillrandom --compression_type=lz4 --write_buffer_size=1000000 --num=100000000 --value_size=100 -level_compaction_dynamic_level_bytes --target_file_size_base=7340032 --max_bytes_for_level_base=16777216 Main: fillrandom : 16.351 micros/op 61159 ops/sec 1635.066 seconds 100000000 operations; 6.8 MB/s This PR: fillrandom : 15.798 micros/op 63298 ops/sec 1579.817 seconds 100000000 operations; 7.0 MB/s ``` Reviewed By: ajkr Differential Revision: D44645650 Pulled By: cbi42 fbshipit-source-id: 8631f3a6b3f01decbbf18c34f2b62833cb4f9733 |
2 years ago |
mayue.fight | 9500d90d1b |
Fix several bugs in ImportColumnFamilyTest (#11372)
Summary: **Context/Summary:** ASSERT_EQ will only verify the code of a Status, but will not check its state message. - Assert by checking the Status state in `ImportColumnFamilyTest`. - Set db_comparator_name, which was previously forgotten, when creating ExportImportFilesMetaData in `ImportColumnFamilyNegativeTest`. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11372 Reviewed By: ajkr Differential Revision: D45004343 Pulled By: cbi42 fbshipit-source-id: a13d45521df17ead3d6d4c1c1fe1e4c95397ce8b |
2 years ago |
Jeff Palm | 6b67b561bc |
util/ribbon_test.cc: avoid ambiguous reversed operator error in c++20 (#11371)
Summary: util/ribbon_test.cc: avoid ambiguous reversed operator error in c++20 (and enable checking for the error) Code would produce errors like this when compiled with -Wambiguous-reversed-operator under c++20. ``` util/ribbon_test.cc:695:20: error: ISO C++20 considers use of overloaded operator '!=' (with operand types 'KeyGen' (aka '(anonymous namespace)::StandardKeyGen') and 'KeyGen') to be ambiguous despite there being a unique best viable function with non-reversed arguments [-Werror,-Wambiguous-reversed-operator] while (cur != batch_end) { ~~~ ^ ~~~~~~~~~ util/ribbon_test.cc:111:8: note: candidate function with non-reversed arguments bool operator!=(const StandardKeyGen& other) { ^ util/ribbon_test.cc:107:8: note: ambiguous candidate function with reversed arguments bool operator==(const StandardKeyGen& other) { ^ ``` This will become a hard error in future standards. Confirmed that no errors were generated when building using clang and c++20: ``` USE_CLANG=1 USE_COROUTINES=1 make ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11371 Reviewed By: meyering Differential Revision: D44921027 Pulled By: cbi42 fbshipit-source-id: ef25b78260920a4d75a718310688d3a2487ffa87 |
2 years ago |
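A minimal reproduction of the ambiguity and the usual fix (a sketch, not the PR's exact diff): with a non-const member `operator==`, C++20's synthesized reversed candidate takes different argument types than the non-reversed one, so `!=` becomes ambiguous; marking the operators `const` restores a unique best candidate.

```cpp
// Before: bool operator==(const KeyGen&);       // non-const member: ambiguous
// After:  bool operator==(const KeyGen&) const; // unambiguous in C++20
struct KeyGen {
  int i = 0;
  bool operator==(const KeyGen& other) const { return i == other.i; }
  bool operator!=(const KeyGen& other) const { return i != other.i; }
};

bool Demo() {
  KeyGen cur, batch_end;
  return cur != batch_end;  // clean under -Wambiguous-reversed-operator
}
```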
Yu Zhang | 647cd73674 |
Initial add UDT in memtable only option (#11362)
Summary: This option is immutable through the lifetime of the DB open. For now, updating its value between different DB open sessions is also an incompatible change. When support for updating the comparator is added, the corresponding types of updates to this option will be supported as well. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11362 Test Plan: `make check` Reviewed By: ltamasi Differential Revision: D44873870 Pulled By: jowlyzhang fbshipit-source-id: aa02094754b58d99abf9af4c9a8108c1350254cb |
2 years ago |
Andrew Kryczka | 760b773f58 |
fix optimization-disabled test builds with platform010 (#11361)
Summary: Fixed the following failure: ``` third-party/gtest-1.8.1/fused-src/gtest/gtest-all.cc: In function ‘bool testing::internal::StackGrowsDown()’: third-party/gtest-1.8.1/fused-src/gtest/gtest-all.cc:8681:24: error: ‘dummy’ may be used uninitialized [-Werror=maybe-uninitialized] 8681 | StackLowerThanAddress(&dummy, &result); | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~ third-party/gtest-1.8.1/fused-src/gtest/gtest-all.cc:8671:13: note: by argument 1 of type ‘const void*’ to ‘void testing::internal::StackLowerThanAddress(const void*, bool*)’ declared here 8671 | static void StackLowerThanAddress(const void* ptr, bool* result) { | ^~~~~~~~~~~~~~~~~~~~~ third-party/gtest-1.8.1/fused-src/gtest/gtest-all.cc:8679:7: note: ‘dummy’ declared here 8679 | int dummy; | ^~~~~ ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/11361 Reviewed By: cbi42 Differential Revision: D44838033 Pulled By: ajkr fbshipit-source-id: 27d68b5a24a15723bbaaa7de45ccd70a60fe259e |
2 years ago |
Niklas Fiekas | d5a9c0c937 |
C-API: Constify cache functions where possible (#11243)
Summary: Makes it easier to use generated Rust bindings. Constness of these is already part of the C++ API. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11243 Reviewed By: hx235 Differential Revision: D44840394 Pulled By: ajkr fbshipit-source-id: bcd1aeb8c959c304148d25b00043bb8c4cd3e0a4 |
2 years ago |
Zdenek Korcak | c8552d8c63 |
fix bad implementation of ShardedCache::GetOccupancyCount (#11325)
Summary: Fix a copy-paste typo. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11325 Reviewed By: hx235 Differential Revision: D44378512 Pulled By: ajkr fbshipit-source-id: 509ed2697c06eed975914359ece0459a0ea40312 |
2 years ago |
nccx | d30bb3d14a |
Add PaxosStore to USERS (#11357)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11357 Reviewed By: hx235 Differential Revision: D44774454 Pulled By: ajkr fbshipit-source-id: f3912316b6cd4e0b41310590c93f914f1d943044 |
2 years ago |
leipeng | b2c4bc5f73 |
Makefile: fix a typo: PLATFORM_CFLAGS to PLATFORM_CCFLAGS (#11348)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11348 Reviewed By: hx235 Differential Revision: D44774863 Pulled By: ajkr fbshipit-source-id: ba4bd959650228a71fca6bf62840ae9d7373d6f0 |
2 years ago |
nccx | 140dd93b57 |
Remove deprecated integration tests from README.md (#11354)
Summary: The CI systems other than CircleCI are almost always in a failing state. Since CircleCI covers linux, macos, and windows, we can remove the others. Pull Request resolved: https://github.com/facebook/rocksdb/pull/11354 Reviewed By: hx235 Differential Revision: D44774627 Pulled By: ajkr fbshipit-source-id: c83b298ec5afe4ea410744eda6cc98fc6a3365f1 |
2 years ago |
Changyu Bi | 64cead919f |
Initialize `lowest_unnecessary_level_` in `VersionStorageInfo` constructor (#11359)
Summary: valgrind complains "Conditional jump or move depends on uninitialised value(s)". A sample error message:
```
[ RUN      ] DBCompactionTest.DrainUnnecessaryLevelsAfterDBBecomesSmall
==3353864== Conditional jump or move depends on uninitialised value(s)
==3353864==    at 0x8647B4: rocksdb::VersionStorageInfo::ComputeCompactionScore(rocksdb::ImmutableOptions const&, rocksdb::MutableCFOptions const&) (version_set.cc:3414)
==3353864==    by 0x86B340: rocksdb::VersionSet::AppendVersion(rocksdb::ColumnFamilyData*, rocksdb::Version*) (version_set.cc:4946)
==3353864==    by 0x876B88: rocksdb::VersionSet::CreateColumnFamily(rocksdb::ColumnFamilyOptions const&, rocksdb::VersionEdit const*) (version_set.cc:6876)
==3353864==    by 0xBA66FE: rocksdb::VersionEditHandler::CreateCfAndInit(rocksdb::ColumnFamilyOptions const&, rocksdb::VersionEdit const&) (version_edit_handler.cc:483)
==3353864==    by 0xBA4A81: rocksdb::VersionEditHandler::Initialize() (version_edit_handler.cc:187)
==3353864==    by 0xBA3927: rocksdb::VersionEditHandlerBase::Iterate(rocksdb::log::Reader&, rocksdb::Status*) (version_edit_handler.cc:31)
==3353864==    by 0x870173: rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool) (version_set.cc:5729)
==3353864==    by 0x7538FA: rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*, rocksdb::DBImpl::RecoveryContext*) (db_impl_open.cc:522)
==3353864==    by 0x75BA0F: rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool) (db_impl_open.cc:1928)
==3353864==    by 0x75A735: rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**) (db_impl_open.cc:1743)
==3353864==    by 0x75A510: rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::DB**) (db_impl_open.cc:1720)
==3353864==    by 0x5925FD: rocksdb::DBTestBase::TryReopen(rocksdb::Options const&) (db_test_util.cc:710)
==3353864== Uninitialised value was created by a heap allocation
==3353864==    at 0x4842F0F: operator new(unsigned long) (vg_replace_malloc.c:422)
==3353864==    by 0x876AF4: rocksdb::VersionSet::CreateColumnFamily(rocksdb::ColumnFamilyOptions const&, rocksdb::VersionEdit const*) (version_set.cc:6870)
==3353864==    by 0xBA66FE: rocksdb::VersionEditHandler::CreateCfAndInit(rocksdb::ColumnFamilyOptions const&, rocksdb::VersionEdit const&) (version_edit_handler.cc:483)
==3353864==    by 0xBA4A81: rocksdb::VersionEditHandler::Initialize() (version_edit_handler.cc:187)
==3353864==    by 0xBA3927: rocksdb::VersionEditHandlerBase::Iterate(rocksdb::log::Reader&, rocksdb::Status*) (version_edit_handler.cc:31)
==3353864==    by 0x870173: rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool) (version_set.cc:5729)
==3353864==    by 0x7538FA: rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*, rocksdb::DBImpl::RecoveryContext*) (db_impl_open.cc:522)
==3353864==    by 0x75BA0F: rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool) (db_impl_open.cc:1928)
==3353864==    by 0x75A735: rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**) (db_impl_open.cc:1743)
==3353864==    by 0x75A510: rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::DB**) (db_impl_open.cc:1720)
==3353864==    by 0x5925FD: rocksdb::DBTestBase::TryReopen(rocksdb::Options const&) (db_test_util.cc:710)
==3353864==    by 0x591F73: rocksdb::DBTestBase::Reopen(rocksdb::Options const&) (db_test_util.cc:662)
```
This is likely about `lowest_unnecessary_level_`, even though it would be initialized in `CalculateBaseBytes()` before being used in `ComputeCompactionScore()`. Also initialize it in the `VersionStorageInfo` constructor to prevent valgrind from complaining.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11359

Test Plan:
- ran a test with valgrind that gave the error message above before this PR: `valgrind --track-origins=yes ./db_compaction_test --gtest_filter="*DrainUnnecessaryLevelsAfterDBBecomesSmall*"`

Reviewed By: hx235

Differential Revision: D44799112

Pulled By: cbi42

fbshipit-source-id: 557208a66f04a2163b418b2a651bdb7e777c4511 |
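For context, a minimal sketch of the shape of this fix (not the actual RocksDB source: the class is reduced to the one relevant member, and -1 as the "no unnecessary level" sentinel is an assumption made for illustration):
```
// Sketch only: the real VersionStorageInfo constructor takes many more
// arguments. The point is that the member gets a well-defined value at
// construction time, so valgrind never sees an uninitialised read even
// though CalculateBaseBytes() would normally assign it before
// ComputeCompactionScore() uses it.
class VersionStorageInfoSketch {
 public:
  VersionStorageInfoSketch() : lowest_unnecessary_level_(-1) {}

  int lowest_unnecessary_level() const { return lowest_unnecessary_level_; }

 private:
  // Lowest level that could be drained after the DB becomes small; set by
  // CalculateBaseBytes() and read by ComputeCompactionScore().
  int lowest_unnecessary_level_;
};
```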
2 years ago |
Peter Dillinger | f9db0c6e9c |
Refactor block cache tracing w/improved MultiGet (#11339)
Summary: After https://github.com/facebook/rocksdb/issues/11301, I wasn't sure whether I had regressed block cache tracing with MultiGet. Demo PR https://github.com/facebook/rocksdb/issues/11330 shows the flawed state of tracing MultiGet before my change, and based on the unit test, there was essentially no change in tracing behavior with https://github.com/facebook/rocksdb/issues/11301. This change is to leave that code and behavior better than I found it. This change is not intended to change any production behaviors except when block cache tracing is active, though it might improve general read path efficiency by disabling some related tracking when such tracing is disabled.

More detail on production code:
* Refactoring to consolidate the construction of BlockCacheTraceRecord, and other related functionality, in the block-based table reader, though it's somewhat awkward to preserve an optimization to avoid copying Slices into temporary strings in BlockCacheLookupContext.
* Accurately track cache hits and misses (etc.) for each data block accessed by a MultiGet(). (Previously reported hits as misses.)
* Reduced repeated checking of `block_cache_tracer_` state (by creating lookup_context only when active; see the sketch below) for efficiency and to reduce the risk of corner case bugs where tracing is enabled or disabled for different parts of a read op. (See a TODO below)
* Improved estimate calculation for num_keys_in_block (see code comment)

Possible follow-up:
* `XXX:` use_cache=true means double cache query? (possible double-query of block cache when allow_mmap_reads=true)
* `TODO:` need more than one lookup_context here to track individual filter and index partition hits and misses
* `TODO:` optimize more state checks of `block_cache_tracer_` down to `lookup_context != nullptr`
* Pre-existing `XXX:` There appear to be 'break' statements above that bypass this writing of the block cache trace record
* Expand test coverage (see below)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11339

Test Plan:
* Added a basic unit test for block cache tracing MultiGet, for now just covering one data block with two keys.
* Added HitMissCountingCache to independently verify that the actual block cache trace and expected block cache trace also agree with the actual number of cache hits / misses (nothing missing or mislabeled). For now only used with the MultiGet test.
* Better testing of num_keys_in_block, for now just with MultiGet.
* Misc improvements to table_test to improve clarity, such as making it clear that certain keys are auto-inserted at the start of every test.

Performance test: Testing multireadrandom as in https://github.com/facebook/rocksdb/issues/11301, except averaging over distinct runs rather than [-X30], which doesn't seem to sufficiently reset after each run to work as an independent test run.

Base with revert of 11301: 3148926 ops/sec
Base: 3019146 ops/sec
New: 2999529 ops/sec

Possibly a tiny MultiGet CPU regression with this change. We are now always allocating an additional vector for the LookupContexts. I'm still contemplating options to try to correct the regression in https://github.com/facebook/rocksdb/issues/11301.

Testing readrandom:

Base with revert of 11301: 2311988
Base: 2281726
New: 2299722

Possibly a tiny Get CPU improvement with this change. We are now avoiding some unnecessary LookupContext population.

Reviewed By: akankshamahajan15

Differential Revision: D44557845

Pulled By: pdillinger

fbshipit-source-id: b841691799d2a48fb59cc8880dc7cbb1e107ae3d |
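A rough sketch of the "create lookup_context only when tracing is active" pattern mentioned above (the types and function names here are simplified stand-ins, not RocksDB's real table reader API):
```
#include <cstdint>
#include <optional>

// Simplified stand-ins for BlockCacheTracer / BlockCacheLookupContext.
struct BlockCacheLookupContext {
  bool is_cache_hit = false;
  uint64_t num_keys_in_block = 0;
};

class BlockCacheTracer {
 public:
  bool is_tracing_enabled() const { return enabled_; }
  void Record(const BlockCacheLookupContext& /*ctx*/) { /* emit record */ }
  bool enabled_ = false;
};

void ReadDataBlock(BlockCacheTracer* tracer, bool cache_hit) {
  // Pay for trace bookkeeping only when tracing is on. Downstream code can
  // test `lookup_context.has_value()` instead of re-querying the tracer, so
  // tracing cannot appear enabled for one part of a read op and disabled
  // for another.
  std::optional<BlockCacheLookupContext> lookup_context;
  if (tracer != nullptr && tracer->is_tracing_enabled()) {
    lookup_context.emplace();
  }

  if (lookup_context) {
    lookup_context->is_cache_hit = cache_hit;  // record hits as hits
    tracer->Record(*lookup_context);
  }
}
```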
2 years ago |
Changyu Bi | f631138e1c |
Better support for merge operation with data block hash index (#11356)
Summary: When the data block hash index finds a key of op_type `kTypeMerge`, do not redo the data block seek.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11356

Test Plan:
- added new unit test
- crash test: `python3 tools/db_crashtest.py whitebox --simple --use_merge=1 --data_block_index_type=1`
- benchmark shows a slight improvement in read throughput:
```
TEST_TMPDIR=/dev/shm/hashindex ./db_bench -benchmarks=mergerandom -use_existing_db=false -num=10000000 -compression_type=none -level_compaction_dynamic_level_bytes=1 -merge_operator=PutOperator -write_buffer_size=1000000 --use_data_block_hash_index=1
TEST_TMPDIR=/dev/shm/hashindex ./db_bench -benchmarks=readrandom[-X10] -use_existing_db=true -num=10000000 -merge_operator=PutOperator -readonly=1 -disable_auto_compactions=1 -reads=100000

Main: readrandom [AVG 10 runs] : 29526 (± 1118) ops/sec; 2.1 (± 0.1) MB/sec
Post-PR: readrandom [AVG 10 runs] : 31095 (± 662) ops/sec; 2.2 (± 0.0) MB/sec
```

Reviewed By: pdillinger

Differential Revision: D44759895

Pulled By: cbi42

fbshipit-source-id: 387f0c35938c7e0e96b810ca3babf1967fc68191 |
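A rough sketch of the control flow the summary describes (simplified stand-ins, not RocksDB's real DataBlockIter API; the true/false fallback convention is an assumption made for illustration):
```
#include <cstdint>
#include <string>

enum ValueType : uint8_t { kTypeValue, kTypeMerge, kTypeDeletion };

struct HashIndexResult {
  bool found = false;
  uint32_t restart_index = 0;  // where the hash index says the entry lives
  ValueType type = kTypeValue;
};

// Hypothetical stubs standing in for the block's hash-index probe and for
// positioning the iterator at a restart point.
HashIndexResult HashIndexLookup(const std::string& /*user_key*/) { return {}; }
void PositionAtRestartPoint(uint32_t /*restart_index*/) {}

// Returns true when the hash index fully resolved the lookup; false means
// the caller falls back to a regular binary-search seek of the data block.
bool SeekForGetWithHashIndex(const std::string& user_key) {
  HashIndexResult r = HashIndexLookup(user_key);
  if (!r.found) {
    return true;  // definitively absent from this block; nothing to re-seek
  }
  switch (r.type) {
    case kTypeValue:
    case kTypeDeletion:
    case kTypeMerge:  // the change: a merge entry no longer forces a redo
      PositionAtRestartPoint(r.restart_index);
      return true;
    default:
      return false;  // unrecognized entry type: redo the data block seek
  }
}
```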
2 years ago |