Summary:
Regexes are considered potentially problematic for use in
registering RocksDB extensions, so we are removing
ObjectLibrary::Register() and the Regex public API it depended on (now
unused).
In reference to https://github.com/facebook/rocksdb/issues/9389
Why?
* The power of Regexes can make it hard to reason about which extension
will match what. (The replacement API isn't perfect, but we are at least
"holding the line" on patterns we have seen in practice.)
* It is easy to make regexes that don't quite mean what you think they
mean, such as forgetting that the `.` in `foo.bar` can match any character,
or misjudging how much a pattern like `.*:[0-9]+` can match. (See the
standalone sketch after this list.)
* Some regexes and implementations can have disastrously bad
performance. This might not be of much practical concern for ObjectLibrary
here, but we don't want to encourage potentially dangerous further use
in production code. (Testing code is fine. See TestRegex.)
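As a standalone illustration of the `.` pitfall (plain `std::regex`, not RocksDB code):
```cpp
#include <cassert>
#include <regex>
#include <string>

int main() {
  // The intent is to match the literal name "foo.bar", but `.` is a
  // metacharacter, so "fooXbar" also matches. Escaping the dot restores
  // the intended meaning.
  std::regex unescaped("foo.bar");
  std::regex escaped("foo\\.bar");
  assert(std::regex_match(std::string("fooXbar"), unescaped));
  assert(!std::regex_match(std::string("fooXbar"), escaped));
  return 0;
}
```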
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9439
Test Plan: CI
Reviewed By: mrambacher
Differential Revision: D33792342
Pulled By: pdillinger
fbshipit-source-id: 4f64dcb04764e639162c8977a5fa196f67754cec
Summary:
Add an option to set the WAL compression algorithm - wal_compression.
TODO: WAL compression is not implemented yet and will initially support only zstd. The implementation will be added in subsequent diffs.
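A minimal sketch of how the option is expected to be set (assuming it lands as a `CompressionType` field on `DBOptions`/`Options`):
```cpp
#include "rocksdb/options.h"

rocksdb::Options MakeWalCompressedOptions() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Compress WAL records with zstd, the only algorithm planned initially.
  options.wal_compression = rocksdb::kZSTD;
  return options;
}
```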
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9432
Reviewed By: pdillinger
Differential Revision: D33797275
Pulled By: sidroyc
fbshipit-source-id: 8db81d9c9cea5e2e4f1445d3aecad8106137b8e7
Summary:
**Context:**
Inside `BlockBasedTableBuilder::WriteRawBlock`, there are multiple places that change the local variables `io_s` and `s` while also
depending on them. This PR attempts to clarify the relevant logic so that it's easier to read, and easier to add new places that change these local variables later (like https://github.com/facebook/rocksdb/pull/9342), without changing the current behavior.
**Summary:**
- Shorten the lifetime of the local vars `io_s` and `s` as much as possible and avoid if-else branches by returning early (see the generic sketch below)
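A generic sketch of the style (hypothetical helpers, not the actual `WriteRawBlock` code):
```cpp
#include "rocksdb/status.h"

// Placeholder steps declared only for illustration; not RocksDB functions.
rocksdb::Status WriteBlockContents();
rocksdb::Status UpdateIndexAndFilter();

// Early-return style: each status is checked and returned as soon as it is
// produced, so its lifetime stays short and no nested if-else is needed.
rocksdb::Status WriteBlockAndUpdateIndex() {
  rocksdb::Status io_s = WriteBlockContents();
  if (!io_s.ok()) {
    return io_s;  // io_s is not needed beyond this point
  }
  rocksdb::Status s = UpdateIndexAndFilter();
  return s;
}
```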
**Test**
- Reasoned against original behavior to verify new changes do not break existing behaviors.
- Rely on CI tests since we are not changing current behavior.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9393
Reviewed By: pdillinger
Differential Revision: D33626095
Pulled By: hx235
fbshipit-source-id: 6184d1e1d85d2650d16617c449971988d062ed3f
Summary:
There is a race in SstFileManagerImpl between the ClearError() function
and CancelErrorRecovery(). The race can cause ClearError() to deref the
file system pointer after it has been freed. This is likely to occur
during process shutdown, when the order of destruction of the
DB/Env/FileSystem and SstFileManagerImpl is not deterministic.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9435
Test Plan:
Reproduce the crash in a TSAN build by introducing sleeps in the code, and verify with
the fix.
Reviewed By: siying
Differential Revision: D33774696
Pulled By: anand1976
fbshipit-source-id: 643d3da31b8d2ee6d9b6db5d33327e0053ce3b83
Summary:
RocksDB has marked DB::AddFile() as "DEPRECATED_FUNC" for a long time, and
it will be removed in the upcoming 7.0 release.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9433
Test Plan: make check -j64; CircleCI
Reviewed By: riversand963
Differential Revision: D33763987
Pulled By: akankshamahajan15
fbshipit-source-id: a3407324479bb43689e1213e4e29d53095e7579a
Summary:
db_stress listener service always uses the default filesystem to operate,
causing it to not recognize custom filesystems (like the ZenFS plugin FS).
Pass the Env to the db_stress listener with the correct filesystem
information, so it can open the filesystem the user intended.
Signed-off-by: Aravind Ramesh <Aravind.Ramesh@wdc.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9352
Reviewed By: riversand963
Differential Revision: D33776762
Pulled By: pdillinger
fbshipit-source-id: e79bb9a544384f80ae9dd0108241ab9c83223954
Summary:
Right now, when an error happens in the block-based table builder, we still call index_builder->Finish(), which triggers an assertion failure in one stress test:
db_stress: table/block_based/index_builder.cc:202: virtual rocksdb::Status rocksdb::PartitionedIndexBuilder::Finish(rocksdb::IndexBuilder::IndexBlocks*, const rocksdb::BlockHandle&): Assertion `sub_index_builder_ == nullptr' failed.
This is unlikely to cause any corruption, as we would ultimately abandon the file, but the code is confusing and it is hard to understand what would happen. Changing the behavior.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9426
Test Plan: Run existing tests
Reviewed By: pdillinger
Differential Revision: D33751929
fbshipit-source-id: 3c916b9444a4171010fc53df40496570bef5ae7a
Summary:
This PR moves RADOS support from the RocksDB repo to a separate repo. The new (temporary?) repo
in this PR serves as an example before we finalize the decision on where RADOS support will be hosted and by whom. At this point,
people can start from the example repo and fork it.
The goal is to include this commit in RocksDB 7.0 release.
Reference:
https://github.com/ajkr/dedupfs by ajkr
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9206
Test Plan:
Follow instructions in https://github.com/riversand963/rocksdb-rados-env/blob/main/README.md and build
test binary `env_librados_test` and run it.
Also, make check
Reviewed By: ajkr
Differential Revision: D33751690
Pulled By: riversand963
fbshipit-source-id: 30466c62afa9e4619847a48567ed158e62835e35
Summary:
This PR is one proposal to resolve https://github.com/facebook/rocksdb/issues/9382.
Looking at the code, I can't think of a reason why rdb is an internal component of RocksDB: it does not require
any header files NOT in `include/rocksdb`. It's a better idea to host it somewhere else.
Plus, rdb requires Python 2, which is no longer supported. No fixes or improvements will be made, even for potential
security bugs (https://www.python.org/doc/sunset-python-2/).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9399
Test Plan: make check
Reviewed By: ajkr
Differential Revision: D33641965
Pulled By: riversand963
fbshipit-source-id: 2a6a74693e5de36834f355e41d6865db206af48b
Summary:
This PR moves HDFS support from the RocksDB repo to a separate repo. The new (temporary?) repo
in this PR serves as an example before we finalize the decision on where HDFS support will be hosted and by whom. At this point,
people can start from the example repo and fork it.
Java/JNI is not included yet, and needs to be done later if necessary.
The goal is to include this commit in RocksDB 7.0 release.
Reference:
https://github.com/ajkr/dedupfs by ajkr
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9170
Test Plan:
Follow the instructions in https://github.com/riversand963/rocksdb-hdfs-env/blob/master/README.md. Build and run db_bench and db_stress.
make check
Reviewed By: ajkr
Differential Revision: D33751662
Pulled By: riversand963
fbshipit-source-id: 22b4db7f31762ed417a20239f5a08dcd1696244f
Summary:
We see:
```
[ RUN ] ChrootEnvWithDirectIO/EnvPosixTestWithParam.RunMany/0
env/env_test.cc:464: Failure
Expected equality of these values:
4
cur
Which is: 0
```
The suspicion is that the wait time is not long enough. Increase the wait time to 10s and allow an earlier check.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9413
Test Plan: Run the test
Reviewed By: riversand963
Differential Revision: D33697715
fbshipit-source-id: 3d71715562a8cceb694b773276dd9e4e451a18bc
Summary:
It appears that VS2017 is covered in CircleCI so we don't need it in Appveyor. Also, currently Appveyor has some problem with installing VS2017.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9417
Test Plan: Watch Appveyor run.
Reviewed By: riversand963
Differential Revision: D33719364
fbshipit-source-id: 7f31bf056eeaf487b372881f85d134dc0fe5832a
Summary:
Loose ends related to mmap on 32-bit systems. (Testing is more
complicated when the feature was completely disabled on 32-bit.)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9386
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D33590715
Pulled By: pdillinger
fbshipit-source-id: f2637036a538a552200adee65b6765fce8cae27b
Summary:
Fixes a major performance regression in 6.26, where
extra CPU is spent in SliceTransform::AsString when reads involve
a prefix_extractor (Get, MultiGet, Seek). Common case performance
is now better than 6.25.
This change creates a "fast path" for verifying that the current prefix
extractor is unchanged and compatible with what was used to
generate a table file. This fast path detects the common case by
pointer comparison on the current prefix_extractor and a "known
good" prefix extractor (if applicable) that is saved at the time the
table reader is opened. The "known good" prefix extractor is saved
as another shared_ptr copy (in an existing field, however) to ensure
the pointer is not recycled.
When the prefix_extractor has changed to a different instance but
same compatible configuration (rare, odd), performance is still a
regression compared to 6.25, but this is likely acceptable because
of the oddity of such a case. The performance of incompatible
prefix_extractor is essentially unchanged.
Also fixed a minor case (ForwardIterator) where a prefix_extractor
could be used via a raw pointer after being freed as a shared_ptr,
if replaced via SetOptions.
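A simplified, hypothetical sketch of the fast-path idea (not the actual table-reader code); it assumes `SliceTransform::AsString()` as the slow-path comparison mentioned above:
```cpp
#include <memory>
#include "rocksdb/slice_transform.h"

// `known_good` is a shared_ptr copy saved when the table reader was opened,
// which also keeps that pointer from being recycled.
bool PrefixExtractorUsable(
    const std::shared_ptr<const rocksdb::SliceTransform>& current,
    const std::shared_ptr<const rocksdb::SliceTransform>& known_good) {
  if (current.get() == known_good.get()) {
    return true;  // Fast path: same instance, no string comparison needed.
  }
  if (current == nullptr || known_good == nullptr) {
    return false;
  }
  // Slow path: different instance, possibly still a compatible configuration.
  return current->AsString() == known_good->AsString();
}
```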
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9407
Test Plan:
## Performance
Populate DB with `TEST_TMPDIR=/dev/shm/rocksdb ./db_bench -benchmarks=fillrandom -num=10000000 -disable_wal=1 -write_buffer_size=10000000 -bloom_bits=16 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -prefix_size=12`
Running head-to-head comparisons simultaneously with `TEST_TMPDIR=/dev/shm/rocksdb ./db_bench -use_existing_db -readonly -benchmarks=seekrandom -num=10000000 -duration=20 -disable_wal=1 -bloom_bits=16 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -prefix_size=12`
Below each is compared by ops/sec vs. baseline which is version 6.25 (multiple baseline runs because of variable machine load)
v6.26: 4833 vs. 6698 (<- major regression!)
v6.27: 4737 vs. 6397 (still)
New: 6704 vs. 6461 (better than baseline in common case)
Disabled fastpath: 4843 vs. 6389 (e.g. if prefix extractor instance changes but is still compatible)
Changed prefix size (no usable filter) in new: 787 vs. 5927
Changed prefix size (no usable filter) in new & baseline: 773 vs. 784
Reviewed By: mrambacher
Differential Revision: D33677812
Pulled By: pdillinger
fbshipit-source-id: 571d9711c461fb97f957378a061b7e7dbc4d6a76
Summary:
* remove the pyenv installation step, which is not needed (it takes 3 minutes to install for every job and fails from time to time)
* downloading the compression libs fails from time to time; uploaded the libs to S3 and download them from there for CI, which should be more stable.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9406
Test Plan: CI
Reviewed By: riversand963
Differential Revision: D33700158
Pulled By: jay-zhuang
fbshipit-source-id: be7b172d7cd059c9d7b3139fd7a34f8070460e31
Summary:
Wasn't able to easily reproduce the error, but it is easy to see a race
condition between TestFlushListener::OnFlushCompleted and
DBTestBase::Close(), which frees CF handles before closing the DB.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9400
Test Plan: CI etc.
Reviewed By: riversand963
Differential Revision: D33645134
Pulled By: pdillinger
fbshipit-source-id: d0ec914cc43c9e14f53da633876b95b61995138d
Summary:
xcode 11.3.1 is deprecated (https://circleci.com/docs/2.0/testing-ios/), so jobs are failing:
```
failed to create host: Image xcode:11.3.0 is not supported
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9405
Test Plan: CI
Reviewed By: ajkr, hx235
Differential Revision: D33674462
Pulled By: jay-zhuang
fbshipit-source-id: 85dd27aad84d26eaaa5c5375015344182b2c50b9
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9404
It is better practice to mark destructors as override. Without this
change there can be issues building with
-Wsuggest-destructor-override.
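For example (illustrative only):
```cpp
struct Base {
  virtual ~Base() = default;
};

struct Derived : public Base {
  // Marking the destructor `override` documents the intent and avoids
  // -Wsuggest-destructor-override warnings.
  ~Derived() override = default;
};
```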
Reviewed By: riversand963
Differential Revision: D33671992
fbshipit-source-id: 75b0c15010cbab5fbc071c150fef1dc85d5d9d96
Summary:
The old block-based filter has been deprecated for years, but
this makes that more clear by marking the functions specific to it and
logging a warning when the feature is used.
It is deprecated because of performance. In that old design, you have to
binary search through the full SST index before a bloom filter query, which
is much more expensive than a bloom query itself.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9403
Test Plan:
Used db_bench with and without -use_block_based_filter,
running at the same time
TEST_TMPDIR=/dev/shm/rocksdb ./db_bench -benchmarks=fillrandom,readrandom -num=10000000 -duration=20 -disable_wal=1 -write_buffer_size=10000000 -bloom_bits=16 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0
No significant difference in construction time but 3x slower readrandom
with -use_block_based_filter:
readrandom : 100.517 micros/op 9948 ops/sec; 1.1 MB/s
vs.
readrandom : 33.368 micros/op 29968 ops/sec; 3.3 MB/s
Also saw deprecation message (just once) in LOG only with
-use_block_based_filter
Reviewed By: ajkr
Differential Revision: D33673202
Pulled By: pdillinger
fbshipit-source-id: 99f6f0eff619408d9e5f7ef546954ed0be6c7a5b
Summary:
**Context/Summary:**
There are two `RateLimiter::Request()` overloads in the public header. One of them is missing some comments that the other one has.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9392
Test Plan: rely on CI test
Reviewed By: pdillinger
Differential Revision: D33623609
Pulled By: hx235
fbshipit-source-id: 42dc06308ff0bcf5ee7ef67e0b1c0172fc239b20
Summary:
Closing https://github.com/facebook/rocksdb/issues/5954
fsync/fdatasync on Linux:
```
(fsync/fdatasync) includes writing through or flushing a disk cache if present.
```
However, on OS X and iOS:
```
(fsync) will flush all data from the host to the drive (i.e. the "permanent storage device"),
the drive itself may not physically write the data to the platters for quite some time and it
may be written in an out-of-order sequence.
```
Solution is to use `fcntl(F_FULLFSYNC)` on OS X so that we get the same
persistence guarantee.
According to OSX man page,
```
The F_FULLFSYNC fcntl asks the drive to flush **all** buffered data to permanent storage.
```
This suggests that it will be no faster than `fsync` on Linux, since Linux, according to its man page,
```
writing through or flushing a disk cache if present
```
That is, Linux may not flush **all** data from the disk cache.
This is similar to bug reports/fixes in:
- golang: https://github.com/golang/go/issues/26650
- leveldb: 296de8d5b8.
Not sure if we should fall back to fsync, since that would break the persistence contract.
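A minimal sketch of the platform-specific call (illustrative, not the exact Env code); whether to fall back to `fsync()` on `F_FULLFSYNC` failure is exactly the open question above:
```cpp
#include <fcntl.h>
#include <unistd.h>

// Returns 0 on success, -1 on failure (errno set).
int SyncForDurability(int fd) {
#if defined(__APPLE__)
  if (::fcntl(fd, F_FULLFSYNC) == 0) {
    return 0;
  }
  // F_FULLFSYNC is not supported on every filesystem; falling back to
  // fsync() weakens the persistence guarantee.
#endif
  return ::fsync(fd);
}
```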
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9356
Reviewed By: jay-zhuang
Differential Revision: D33417416
Pulled By: riversand963
fbshipit-source-id: 475548ff9c5eaccde325e0f6842694271cbc8cb7
Summary:
In response to https://github.com/facebook/rocksdb/issues/9354, this PR adds a way for users to "opt out"
of extra checks that can impact peak write performance, which
currently only includes force_consistency_checks. I considered including
some other options but did not see a db_bench performance difference.
Also clarify in comment for force_consistency_checks that it can "slow
down saturated writing."
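A minimal sketch of the opt-out, assuming it boils down to disabling `force_consistency_checks` (the only check mentioned above):
```cpp
#include "rocksdb/options.h"

rocksdb::Options TuneForPeakWrites(rocksdb::Options options) {
  // Trade the extra LSM consistency checks for peak write throughput; see the
  // force_consistency_checks comment about "slowing down saturated writing".
  options.force_consistency_checks = false;
  return options;
}
```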
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9363
Test Plan:
basic coverage in unit tests
Using my perf test in https://github.com/facebook/rocksdb/issues/9354 comment, I see
force_consistency_checks=true -> 725360 ops/s
force_consistency_checks=false -> 783072 ops/s
Reviewed By: mrambacher
Differential Revision: D33636559
Pulled By: pdillinger
fbshipit-source-id: 25bfd006f4844675e7669b342817dd4c6a641e84
Summary:
We are phasing out the slack channel, but keeping the Google
Group email list.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9387
Test Plan: no code
Reviewed By: riversand963
Differential Revision: D33591265
Pulled By: pdillinger
fbshipit-source-id: 48e45a74753d05611db2c8f4efc4de16a1f50e70
Summary:
Replace `-o /dev/null` with `-o test.o` when testing for C++ features such as
-faligned-new; otherwise tests will fail with some buggy binutils
(https://sourceware.org/bugzilla/show_bug.cgi?id=19526):
```
output/host/bin/xtensa-buildroot-linux-uclibc-g++ -faligned-new -x c++ - -o /dev/null <<EOF
struct alignas(1024) t {int a;};
int main() {}
EOF
/home/fabrice/buildroot/output/host/lib/gcc/xtensa-buildroot-linux-uclibc/8.3.0/../../../../xtensa-buildroot-linux-uclibc/bin/ld: final link failed: file truncated
```
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6479
Reviewed By: ajkr
Differential Revision: D33574136
Pulled By: riversand963
fbshipit-source-id: 12b48658b17e36013042c98219b89ddf71161d3c
Summary:
Range Locking supports Lock Escalation. Lock Escalation is invoked when
lock memory is nearly exhausted; it reduces the amount of memory used
by joining adjacent locks.
Bridging the gap between certain locks has adverse effects. For example,
in MyRocks it is not a good idea to bridge the gap between locks in
different indexes, as that causes the lock to cover large portions of
indexes, or even entire indexes.
Resolve this by introducing an Escalation Barrier. The escalation process
will call the user-provided barrier callback function:
bool(const Endpoint& a, const Endpoint& b)
If the function returns true, there's a barrier between a and b and Lock
Escalation will not try to bridge the gap between a and b.
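A hypothetical barrier callback, assuming MyRocks-style keys where the first bytes encode the index id; `Endpoint` is shown here as a stand-in struct carrying the key bytes in `slice`, which is an assumption made for illustration:
```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Stand-in for the range-locking Endpoint type (hypothetical simplification).
struct Endpoint {
  std::string slice;        // serialized key bytes
  bool inf_suffix = false;  // whether the endpoint extends past the key
};

// Hypothetical: extract a 4-byte index id from the start of the key.
static uint32_t EndpointIndexId(const Endpoint& ep) {
  uint32_t id = 0;
  if (ep.slice.size() >= sizeof(id)) {
    std::memcpy(&id, ep.slice.data(), sizeof(id));
  }
  return id;
}

// Returns true when there is a barrier between a and b, i.e. escalation must
// not bridge the gap because the endpoints belong to different indexes.
bool IndexBoundaryBarrier(const Endpoint& a, const Endpoint& b) {
  return EndpointIndexId(a) != EndpointIndexId(b);
}
```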
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9290
Reviewed By: akankshamahajan15
Differential Revision: D33486753
Pulled By: riversand963
fbshipit-source-id: f97910b67aba0579ea1d35f523ca6863d3dd018e
Summary:
Fixes facebook/rocksdb#7720
Updated Makefile with flags to define the target architecture when compiling/linking,
and added goal `rocksdbjavastaticosxub` to build an OS X Universal Binary native library.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9254
Reviewed By: mrambacher
Differential Revision: D33551160
Pulled By: pdillinger
fbshipit-source-id: 9ce9962e03aacf55014545a6cdf638b5b14b8fa9
Summary:
As title.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9383
Test Plan:
manually add `using namespace` to a file, and run `make check-sources`.
Then, remove `using namespace`, and run `make check-sources`
Reviewed By: ajkr
Differential Revision: D33551706
Pulled By: riversand963
fbshipit-source-id: 1bb8304f38434da7de0656882e62e77673155725
Summary:
Fix https://github.com/facebook/rocksdb/issues/8046 ("FlushMemTable return ok but memtable does not synchronize flush"). The way to fix it is to expose RecoveryError.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8173
Reviewed By: ajkr
Differential Revision: D31674552
Pulled By: jay-zhuang
fbshipit-source-id: 9d16b69ba12a196bb429332ec8224754de97773d
Summary:
As title.
This is part of an fb-internal task.
First, remove all `using namespace` statements if applicable.
Next, utilize multiple build platforms and see if anything is broken.
Should anything become broken, fix the compilation errors with as little extra change as possible.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9369
Test Plan:
internal build and make check
make clean && make static_lib && cd examples && make all
Reviewed By: pdillinger
Differential Revision: D33517260
Pulled By: riversand963
fbshipit-source-id: 3fc4ce6402a073421dfd9a9b2d1c79441dca7a40
Summary:
With memkind installed, either on a non-fb machine or using `ROCKSDB_NO_FBCODE=1`.
```
ROCKSDB_NO_FBCODE=1 make static_lib
```
Compilation failed due to an unused-variable warning treated as an error. To bypass this, we would need to
disable warning-as-error, which is not ideal.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9377
Test Plan: Repeat the above command, and rely on CI.
Reviewed By: ajkr
Differential Revision: D33543343
Pulled By: riversand963
fbshipit-source-id: 9a2790b38c00b8696c7910287f4ae5a9b394341d
Summary:
Compatible change, more natural (especially in generated Rust bindings), no risk that the API will ever need mutable access because it has to make a copy anyway.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9376
Reviewed By: ajkr
Differential Revision: D33541435
Pulled By: pdillinger
fbshipit-source-id: 15c512a0d70b6e8694fa99d598b7d022751c1e59
Summary:
In order to support old-style regex function registration, restored the original "Register<T>(string, Factory)" method using regular expressions. The PatternEntry methods were left in place but renamed to AddFactory. The goal is to allow for the deprecation of the original regex Registry method in an upcoming release.
Added modes kMatchZeroOrMore and kMatchAtLeastOne to PatternEntry to match * or +, respectively (kMatchAtLeastOne was the original behavior).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9362
Reviewed By: pdillinger
Differential Revision: D33432562
Pulled By: mrambacher
fbshipit-source-id: ed88ab3f9a2ad0d525c7bd1692873f9bb3209d02
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9370
GCC and newer clang (e.g. clang-12) treat `std::unique_ptr` slightly differently.
For the following code
```
#include <iostream>
#include <memory>
#include <type_traits>
struct A {
std::unique_ptr<int> m1;
};
int main()
{
std::cout << std::boolalpha;
std::cout << std::is_standard_layout<A>::value << '\n';
return 0;
}
```
GCC 11 (C++20) (tested on https://en.cppreference.com/w/cpp/types/is_standard_layout) will print "true", while newer clang, e.g. clang-12, will print "false". This breaks the usage of `offsetof()` on structs with non-static members of type `std::unique_ptr`.
Fixing this by replacing the builtin `offsetof` with a trick documented at https://gist.github.com/graphitemaster/494f21190bb2c63c5516.
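A sketch of the fake-pointer trick along the lines of that gist (the macro name and exact form are illustrative, not necessarily what the codebase uses):
```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <memory>

// offsetof() replacement for non-standard-layout types: take the address of
// the member through a fake, non-null object pointer and subtract the base.
// Formally outside the standard, but this is the essence of the documented
// trick and works on the compilers of interest.
#define FIELD_OFFSET(type, member)                              \
  (reinterpret_cast<uintptr_t>(                                 \
       &(reinterpret_cast<type*>(0x1000)->member)) -            \
   static_cast<uintptr_t>(0x1000))

struct A {
  std::unique_ptr<int> m1;
};

int main() {
  std::printf("offset of A::m1 = %zu\n",
              static_cast<size_t>(FIELD_OFFSET(A, m1)));
  return 0;
}
```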
Reviewed By: jay-zhuang
Differential Revision: D33420840
fbshipit-source-id: 02bde281dfa28809bec787ad0f7019e85dd9c607
Summary:
This change adds the filename of the offending file to several places that produce Status objects with code `kCorruption`.
This is not an attempt to have every Corruption message in the codebase extended with the filename, but it is a start.
The motivation for the change was to quickly diagnose which file is corrupted when a large database is opened and there is no option to copy it offsite for analysis, run strace, or install the ldb tool.
In the particular case in question, the error message improved from a mere
```
Corruption: checksum mismatch
```
to
```
Corruption: checksum mismatch in file /path/to/db/engine-rocksdb/MANIFEST-000171
```
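The pattern is simply to thread the file name into the status message at the point the corruption is detected, e.g. (illustrative; the real call sites and wording vary):
```cpp
#include <string>
#include "rocksdb/status.h"

rocksdb::Status CheckBlockChecksum(bool checksum_ok,
                                   const std::string& file_name) {
  if (!checksum_ok) {
    // Include the offending file so the user knows what to inspect.
    return rocksdb::Status::Corruption("checksum mismatch in file " +
                                       file_name);
  }
  return rocksdb::Status::OK();
}
```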
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9239
Reviewed By: jay-zhuang
Differential Revision: D33237742
Pulled By: riversand963
fbshipit-source-id: bd42559cfbf786a0a674d091671d1a2bf07bdd31
Summary:
Function `Version::UpdateFilesByCompactionPri()` is never called and not implemented.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8724
Reviewed By: ajkr
Differential Revision: D30643943
Pulled By: riversand963
fbshipit-source-id: 174b2d9a2a42e286222909a035cc74a7b5602335
Summary:
Note: rebase on and merge after https://github.com/facebook/rocksdb/pull/9349, as part of https://github.com/facebook/rocksdb/pull/9342
**Context:**
https://github.com/facebook/rocksdb/pull/9073 charged the hash entries' memory in block cache with `CacheReservationHandle`. However, in the edge case where Ribbon Filter falls back to Bloom Filter and swaps its hash entries to the embedded bloom filter object, the handles associated with those entries are not swapped and thus not released as soon as those entries are cleared during Bloom Filter's finish process.
Although this is a minor issue, since RocksDB internally calls `FilterBitsBuilder->Reset()` right after `FilterBitsBuilder->Finish()` on the main path, which releases all the cache reservation related to both the Ribbon Filter and its embedded Bloom Filter, it is still worth fixing to avoid confusion.
**Summary:**
- Swapped the `CacheReservationHandle` associated with the hash entries on Ribbon Filter's fallback
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9345
Test Plan: - Added a unit test to verify the number of cache reservation after clearing hash entries, which failed before the change and now succeeds
Reviewed By: pdillinger
Differential Revision: D33377225
Pulled By: hx235
fbshipit-source-id: 7487f4c40dfb6ee7928232021f93ef2c5329cffa
Summary:
Closing https://github.com/facebook/rocksdb/issues/5006
Calling `DB::DestroyColumnFamilyHandle(column_family)` with `column_family` being the return value of
`DB::DefaultColumnFamily()` will return `Status::InvalidArgument()`.
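A minimal sketch of the new behavior (assuming an already-opened `db`):
```cpp
#include <cassert>
#include "rocksdb/db.h"

void DemoDefaultCfHandle(rocksdb::DB* db) {
  // Destroying the default column family handle now returns InvalidArgument.
  rocksdb::Status s =
      db->DestroyColumnFamilyHandle(db->DefaultColumnFamily());
  assert(s.IsInvalidArgument());
}
```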
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9347
Test Plan: make check
Reviewed By: akankshamahajan15
Differential Revision: D33369675
Pulled By: riversand963
fbshipit-source-id: a8266a4daddf2b7a773c2dc7f3eb9a4adfb6b6dd
Summary:
Recently we added the ability to verify some prefix of operations are recovered (AKA no "hole" in the recovered data) (https://github.com/facebook/rocksdb/issues/8966). Besides testing unsynced data loss scenarios, it is also useful to test WAL disabled use cases, where unflushed writes are expected to be lost. Note RocksDB only offers the prefix-recovery guarantee to WAL-disabled use cases that use atomic flush, so crash test always enables atomic flush when WAL is disabled.
To verify WAL-disabled crash-recovery correctness globally, i.e., also in whitebox and blackbox transaction tests, it is possible but requires further changes. I added TODOs in db_crashtest.py.
Depends on https://github.com/facebook/rocksdb/issues/9305.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9338
Test Plan: Running all crash tests and many instances of blackbox. Sandcastle links are in Phabricator diff test plan.
Reviewed By: riversand963
Differential Revision: D33345333
Pulled By: ajkr
fbshipit-source-id: f56dd7d2e5a78d59301bf4fc3fedb980eb31e0ce
Summary:
The LastSequence field in the MANIFEST file is the baseline seqno for a recovered DB. Recovering WAL entries might cause the recovered DB's seqno to advance above this baseline, but the recovered DB will never use a smaller seqno.
Before this PR, we were writing the DB's seqno at the time of LogAndApply() as the LastSequence value. This works in the sense that it is a large enough baseline for the recovered DB that it'll never overwrite any records in existing SST files. At the same time, it's arbitrarily larger than what's needed. This behavior comes from LevelDB, where there was no tracking of largest seqno in an SST file.
Now we know the largest seqno of newly written SST files, so we can write an exact value in LastSequence that actually reflects the largest seqno in any file referred to by the MANIFEST. This is primarily useful for correctness testing with unsynced data loss, where the recovered DB's seqno needs to indicate what records were recovered.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9305
Test Plan:
- https://github.com/facebook/rocksdb/issues/9338 adds crash-recovery correctness testing coverage for WAL disabled use cases
- https://github.com/facebook/rocksdb/issues/9357 will extend that testing to cover file ingestion
- Added assertion at end of LogAndApply() for `VersionSet::descriptor_last_sequence_` consistency with files
- Manually tested upgrade/downgrade compatibility with a custom crash test that randomly picks between a `db_stress` built with and without this PR (for old code it must run with `-disable_wal=0`)
Reviewed By: riversand963
Differential Revision: D33182770
Pulled By: ajkr
fbshipit-source-id: 0bfafaf685f347cc8cb0e1d62e0186340a738f7d
Summary:
Allows the Env to have options (Configurable) and to load like other Customizable classes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9293
Reviewed By: pdillinger, zhichao-cao
Differential Revision: D33181591
Pulled By: mrambacher
fbshipit-source-id: 55e823886c654d214eda9eedd45ccdc54dac14d7
Summary:
Fixes https://github.com/facebook/rocksdb/issues/9339
When writing an SST file, the name, computed as `prefix_extractor->GetId()`, will be written to the properties block.
When the SST is opened again in the future, `CreateFromString()` will take the name as argument and try
to create a prefix extractor object. Without this fix, the C API will pass a `Wrapper` pointer to the underlying
DB's `prefix_extractor`. `Wrapper::GetId()`, in this case, will be missing the prefix length component, causing a
prefix extractor of length 0 to be silently created and used.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9343
Test Plan:
```
make c_test
./c_test
```
Reviewed By: mrambacher
Differential Revision: D33355549
Pulled By: riversand963
fbshipit-source-id: c92c3acd8be262c3bff8794b4229e42b9ee31203