Summary:
Add support in the benchmark regression script for options that apply only to the async_io benchmark: `MAX_READAHEAD_SIZE`, `INITIAL_READAHEAD_SIZE`, and `NUM_READS_FOR_READAHEAD_SIZE`.
If the user wants to set these parameters for all benchmarks, they should be set in the OPTION file instead.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11262
Test Plan: Ran manually
Reviewed By: anand1976
Differential Revision: D43725567
Pulled By: akankshamahajan15
fbshipit-source-id: 28c3462dd785ffd646d44560fa9c92bc6a8066e5
Summary:
`BlobSourceCacheReservationTest.IncreaseCacheReservationOnFullCache` is both flaky and also doesn't do what its name says. The patch changes this test so it actually tests increasing the cache reservation, hopefully also deflaking it in the process.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11273
Test Plan: `make check`
Reviewed By: akankshamahajan15
Differential Revision: D43800935
Pulled By: ltamasi
fbshipit-source-id: 5eb54130dfbe227285b0e14f2084aa4b89f0b107
Summary:
On Linux systems using full ASLR, including CircleCI, the old backtrace()+addr2line stack traces are pretty useless, as seen in some failures under ASSERT_STATUS_CHECKED=1 LIB_MODE=static. Use gdb by default for stack traces under Linux. More detail in code comments.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11272
Test Plan: manual testing locally and on CircleCI with ssh
Reviewed By: anand1976
Differential Revision: D43786211
Pulled By: pdillinger
fbshipit-source-id: f8c7c77f774b504fbdf7c786ff2430cbc8f5b939
Summary:
Hi. :) Noticed we are copying ColumnFamilyDescriptor here because my process crashed during the copy constructor (cause unrelated).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/10978
Reviewed By: cbi42
Differential Revision: D41473924
Pulled By: ajkr
fbshipit-source-id: 58a3473f2d7b24918f79d4b2726c20081c5e95b4
Summary:
The patch makes the following changes to the API comments:
* Some general comments about snapshots, thread safety, and user-defined timestamps are moved to a more prominent place at the top of the file.
* Detailed descriptions are added for each `ValueType` and `Decision`, fixing and extending some existing comments (e.g. that of `kRemove`, which suggested that key-values are simply removed from the output, while in reality base values are converted to tombstones) and adding detailed comments that were missing (e.g. `kPurge` and `kChangeWideColumnEntity`).
* Updated/extended the comments of `FilterV2/V3` and `FilterBlobByKey`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11261
Reviewed By: akankshamahajan15
Differential Revision: D43714314
Pulled By: ltamasi
fbshipit-source-id: 835f4b1bdac1ce0e291155186095211303260729
Summary:
During backward iteration, blob verification would fail because the user key (ts included) in `saved_key_` doesn't match the blob. This happens because during `FindValueForCurrentKey`, `saved_key_` is not updated when the user key (ts not included) is the same for all cases except when `timestamp_lb_` is specified. This breaks the blob verification logic when user-defined timestamps are enabled and `timestamp_lb_` is not specified. Fix this by always updating `saved_key_` when a smaller user key (ts included) is seen.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11258
Test Plan:
`make check`
`./db_blob_basic_test --gtest_filter=DBBlobWithTimestampTest.IterateBlobs`
Run db_bench (built with DEBUG_LEVEL=0) to demonstrate that no overhead is introduced with:
`./db_bench -user_timestamp_size=8 -db=/dev/shm/rocksdb -disable_wal=1 -benchmarks=fillseq,seekrandom[-W1-X6] -reverse_iterator=1 -seek_nexts=5`
Baseline:
- seekrandom [AVG 6 runs] : 72188 (± 1481) ops/sec; 37.2 (± 0.8) MB/sec
With this PR:
- seekrandom [AVG 6 runs] : 74171 (± 1427) ops/sec; 38.2 (± 0.7) MB/sec
Reviewed By: ltamasi
Differential Revision: D43675642
Pulled By: jowlyzhang
fbshipit-source-id: 8022ae8522d1f66548821855e6eed63640c14e04
Summary:
Add more stats for better visibility into the usefulness of the secondary cache.
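For context, a sketch of how such stats are surfaced through the existing `Statistics` interface, using a ticker that predates this change (the new counters added by this PR are not named in this summary):
```cpp
#include <cstdint>
#include <memory>
#include "rocksdb/options.h"
#include "rocksdb/statistics.h"

// Reads one secondary-cache ticker from the DB's Statistics object, if any.
void LogSecondaryCacheHits(const rocksdb::Options& options) {
  const std::shared_ptr<rocksdb::Statistics>& stats = options.statistics;
  if (stats) {
    uint64_t hits = stats->getTickerCount(rocksdb::SECONDARY_CACHE_HITS);
    // ... report `hits` to your monitoring system ...
    (void)hits;
  }
}
```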
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11246
Test Plan: Add a new unit test
Reviewed By: akankshamahajan15
Differential Revision: D43521364
Pulled By: anand1976
fbshipit-source-id: a92f04884e738a9bf40ad4047acaaaea343838a7
Summary:
This makes it possible to eliminate some copies in `GetEntity` / `MultiGetEntity`,
in particular when `Merge`s or blobs are involved.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11248
Test Plan: `make check`
Reviewed By: akankshamahajan15
Differential Revision: D43544215
Pulled By: ltamasi
fbshipit-source-id: bc4c8955a24bbd8bc4ab098e72133ead757f9707
Summary:
Stressing a small DB with a small number of keys and user-defined timestamps enabled usually fails pretty quickly in TestGet.
Example command to reproduce the failure:
` tools/db_crashtest.py blackbox --enable_ts --simple --delrangepercent=0 --delpercent=5 --max_key=100 --interval=3 --write_buffer_size=262144 --target_file_size_base=262144 --max_bytes_for_level_base=262144 --subcompactions=1`
Example failure: `error : inconsistent values for key 0000000000000009000000000000000A7878: expected state has the key, Get() returns NotFound.`
Fixes this test failure by refreshing the read-up-to timestamp to the most up-to-date timestamp, a.k.a. now, after a key is locked. Without this, things could happen in the following order and cause a test failure:
<table>
<tr>
<th>TestGet thread</th>
<th> A writing thread</th>
</tr>
<tr>
<td>read_opts.timestamp = GetNow()</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Lock key, do write</td>
</tr>
<tr>
<td>Lock key, read(read_opts) returns NotFound</td>
<td></td>
</tr>
</table>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11249
Reviewed By: ltamasi
Differential Revision: D43551302
Pulled By: jowlyzhang
fbshipit-source-id: 26877ab379bdb97acd2682a2632bc29718427f38
Summary:
A second attempt after https://github.com/facebook/rocksdb/issues/10802, with bug fixes and refactoring. This PR updates compaction logic to take range tombstones into account when determining whether to cut the current compaction output file (https://github.com/facebook/rocksdb/issues/4811). Before this change, only point keys were considered, and range tombstones could cause large compactions. For example, if the current compaction output contains a range tombstone [a, b) and 2 point keys y, z, they would be added to the same file, which may overlap with too many files in the next level and cause a large compaction in the future. This PR also includes ajkr's effort to simplify the logic to add range tombstones to compaction output files in `AddRangeDels()` ([https://github.com/facebook/rocksdb/issues/11078](https://github.com/facebook/rocksdb/pull/11078#issuecomment-1386078861)).
The main change is for `CompactionIterator` to emit range tombstone start keys to be processed by `CompactionOutputs`. A new class `CompactionMergingIterator` is introduced to replace `MergingIterator` under `CompactionIterator` to enable emitting of range tombstone start keys. Further improvements after this PR include cutting compaction output at some grandparent boundary key (instead of the next output key) when cutting within a range tombstone to reduce overlap with grandparents.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11113
Test Plan:
* added unit test in db_range_del_test
* crash test with a small key range: `python3 tools/db_crashtest.py blackbox --simple --max_key=100 --interval=600 --write_buffer_size=262144 --target_file_size_base=256 --max_bytes_for_level_base=262144 --block_size=128 --value_size_mult=33 --subcompactions=10 --use_multiget=1 --delpercent=3 --delrangepercent=2 --verify_iterator_with_expected_state_one_in=2 --num_iterations=10`
Reviewed By: ajkr
Differential Revision: D42655709
Pulled By: cbi42
fbshipit-source-id: 8367e36ef5640e8f21c14a3855d4a8d6e360a34c
Summary:
Fix the following compiler complaint
```
db/db_impl/db_impl_compaction_flush.cc:417:19: error: loop variable 'bg_flush_arg' of type 'const rocksdb::DBImpl::BGFlushArg' creates a copy from type
'const rocksdb::DBImpl::BGFlushArg' [-Werror,-Wrange-loop-analysis]
for (const auto bg_flush_arg : bg_flush_args) {
^
db/db_impl/db_impl_compaction_flush.cc:417:8: note: use reference type 'const rocksdb::DBImpl::BGFlushArg &' to prevent copying
for (const auto bg_flush_arg : bg_flush_args) {
^~~~~~~~~~~~~~~~~~~~~~~~~
&
db/db_impl/db_impl_compaction_flush.cc:2911:21: error: loop variable 'bg_flush_arg' of type 'const rocksdb::DBImpl::BGFlushArg' creates a copy from type
'const rocksdb::DBImpl::BGFlushArg' [-Werror,-Wrange-loop-analysis]
for (const auto bg_flush_arg : bg_flush_args) {
^
db/db_impl/db_impl_compaction_flush.cc:2911:10: note: use reference type 'const rocksdb::DBImpl::BGFlushArg &' to prevent copying
for (const auto bg_flush_arg : bg_flush_args) {
^~~~~~~~~~~~~~~~~~~~~~~~~
&
```
from
```sh
xxx@MacBook-Pro / % g++ -v
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.0 (clang-1200.0.32.29)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
```
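The fix is to take each element by const reference instead of by value; a minimal sketch of the pattern (not the exact RocksDB diff), with `BGFlushArg` stubbed out:
```cpp
#include <vector>

struct BGFlushArg {
  // Stand-in for rocksdb::DBImpl::BGFlushArg; copying one per iteration
  // is exactly what -Wrange-loop-analysis complains about.
  int placeholder = 0;
};

void IterateArgs(const std::vector<BGFlushArg>& bg_flush_args) {
  for (const auto& bg_flush_arg : bg_flush_args) {  // was: const auto (copy)
    (void)bg_flush_arg;
  }
}
```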
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11240
Reviewed By: cbi42
Differential Revision: D43458729
Pulled By: ajkr
fbshipit-source-id: 26e110f83451509463a1bc308f737ccb693c9f45
Summary:
The 8.0.fb branch is cut, so changes going forward will be part of 8.1. Updated version.h and HISTORY.md accordingly.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11238
Reviewed By: cbi42
Differential Revision: D43428345
Pulled By: ajkr
fbshipit-source-id: d344b6e504c81a85563ae9d3705b11c533b1cd43
Summary:
IO uring usage is causing crash test failures due to bad cqe data being returned in the uring. Revert the change to enable IO uring in db_stress, and also re-enable async_io in CircleCI so that code path can be tested. Added the -use_io_uring flag to db_stress that, when false, will wrap the default env in db_stress to emulate async IO.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11242
Reviewed By: akankshamahajan15
Differential Revision: D43470569
Pulled By: anand1976
fbshipit-source-id: 7c69ac3f53a79ade31d37313f815f1a4b6108b75
Summary:
In DBIter::SeekToLast(), key() can be called when the iterator is invalid, failing the following assertion:
```
./db/db_iter.h:153: virtual rocksdb::Slice rocksdb::DBIter::key() const: Assertion `valid_' failed.
```
This happens when `iterate_upper_bound` and `timestamp_lb_` are set. SeekForPrev(*iterate_upper_bound_) positions the iterator on the same user key as *iterate_upper_bound_. A subsequent PrevInternal() call makes the iterator invalid just before the call to key().
This PR fixes the issue by updating the seek key to have the max sequence number AND max timestamp when the seek key has the same user key as *iterate_upper_bound_.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11223
Test Plan: - Added a unit test that would fail the above assertion before this fix.
Reviewed By: jowlyzhang
Differential Revision: D43283600
Pulled By: cbi42
fbshipit-source-id: 0dd3999845b722584679bbc95be2664b266005ba
Summary:
The comment for option `periodic_compaction_seconds` only mentions support for Leveled and FIFO compaction, while the implementation supports all compaction styles after https://github.com/facebook/rocksdb/issues/5970. This PR updates the comment to reflect this.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11227
Reviewed By: ajkr
Differential Revision: D43325046
Pulled By: cbi42
fbshipit-source-id: 2364dcb5a01cd098ad52c818fe10d621445e2188
Summary:
This PR adds support to the c-api bindings for calling `Flush()` with multiple column families, which is useful for performing atomic flushes (assuming also that the db has been opened with `atomic_flush = true`).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11112
Reviewed By: cbi42
Differential Revision: D42666382
Pulled By: ajkr
fbshipit-source-id: 82f05bf32d28452d85c79ea42411c8fea961fd87
Summary:
I couldn't figure out why this causes failures in our 8.0 release to fbcode while this issue appears to not be new in 8.0. Anyways, we can add the missing `override` keywords to these functions as the compiler insists.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11232
Reviewed By: pdillinger
Differential Revision: D43420656
Pulled By: ajkr
fbshipit-source-id: da748eeef6ba38dd113dbe4b5143d7558daf38dd
Summary:
The problem
-------------
ComparatorOptions is AutoCloseable.
AbstractComparator does not hold a reference to its ComparatorOptions, but the native C++ ComparatorJniCallback holds a reference to the ComparatorOptions’ native C++ options structure. This gets deleted when the ComparatorOptions is closed, either explicitly, or as part of try-with-resources.
Later, the deleted C++ options structure gets used by the callback and the comparator options are effectively random.
The original bug report https://github.com/facebook/rocksdb/issues/8715 was caused by a GC-initiated finalization closing the still-in-use ComparatorOptions. As of 7.0, finalization of RocksDB objects no longer closes them, which worked round the reported bug, but still left ComparatorOptions with a potentially broken lifetime.
In any case, we encourage API clients to use the try-with-resources model, and so we need it to work. And if they don't use it, they leak resources.
The solution
-------------
The solution implemented here is to make a copy of the native C++ options object into the ComparatorJniCallback, rather than holding a reference. The native object held by ComparatorOptions is then *correctly* deleted when its scope is closed in try/finally.
Testing
-------
We added a regression unit test based on the original test for the reported ticket.
This checkin closes https://github.com/facebook/rocksdb/issues/8715
We expect that there are more instances of "lifecycle" bugs in the Java API. They are a major source of support time/cost, and we note that they could be addressed as a whole using the model proposed/prototyped in https://github.com/facebook/rocksdb/pull/10736
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11176
Reviewed By: cbi42
Differential Revision: D43160885
Pulled By: pdillinger
fbshipit-source-id: 60b54215a02ad9abb17363319650328c00a9ad62
Summary:
The primary purpose of the FactoryFunc was to support LITE mode where the ObjectRegistry was not available. With the removal of LITE mode, the function was no longer required.
Note that the MergeOperator had some private classes defined in source files. To gain access to their constructors (and name methods), these class definitions were moved into header files.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11203
Reviewed By: cbi42
Differential Revision: D43160255
Pulled By: pdillinger
fbshipit-source-id: f3a465fd5d1a7049b73ecf31e4b8c3762f6dae6c
Summary:
From HISTORY.md: Added a subcode of `Status::Corruption`, `Status::SubCode::kMergeOperatorFailed`, for users to identify corruption failures originating in the merge operator, as opposed to RocksDB's internally identified data corruptions.
This is a followup to https://github.com/facebook/rocksdb/issues/11092, where we gave users the ability to keep running a DB despite merge operator failing. Now that the DB keeps running despite such failures, they want to be able to distinguish such failures from real corruptions.
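A minimal usage sketch of distinguishing the two cases; the helper function is hypothetical, while the `Status` calls are the public API:
```cpp
#include <iostream>
#include "rocksdb/status.h"

// Hypothetical helper showing how a caller can tell merge operator
// failures apart from RocksDB-detected corruption.
void ReportCorruption(const rocksdb::Status& s) {
  if (!s.IsCorruption()) {
    return;
  }
  if (s.subcode() == rocksdb::Status::SubCode::kMergeOperatorFailed) {
    std::cerr << "Merge operator failed: " << s.ToString() << std::endl;
  } else {
    std::cerr << "Data corruption detected by RocksDB: " << s.ToString()
              << std::endl;
  }
}
```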
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11231
Test Plan: updated unit test
Reviewed By: akankshamahajan15
Differential Revision: D43396607
Pulled By: ajkr
fbshipit-source-id: 17fbcc779ad724dafada8abd73efd38e1c5208b9
Summary:
- Return NotSupported in scans when IOUring is not supported and async_io is enabled
- Enable IOUring in db_stress for async_io testing
- Disable async_io in circleci crash testing as circleci doesn't support IOUring
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11197
Test Plan: CircleCI jobs
Reviewed By: anand1976
Differential Revision: D43096313
Pulled By: akankshamahajan15
fbshipit-source-id: c2c53a87636950c0243038b9f5bd0d91608e4fda
Summary:
Added `do_not_compress_roles` to `CompressedSecondaryCacheOptions` to disable compression on certain kinds of block. Filter blocks are now not compressed by CompressedSecondaryCache by default.
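A hedged configuration sketch, assuming the new field brace-initializes from `CacheEntryRole` values and that `NewCompressedSecondaryCache()` accepts the options struct:
```cpp
#include <memory>
#include "rocksdb/cache.h"
#include "rocksdb/secondary_cache.h"

std::shared_ptr<rocksdb::SecondaryCache> MakeCompressedSecondaryCache() {
  rocksdb::CompressedSecondaryCacheOptions opts;
  opts.capacity = 256 << 20;  // 256 MiB
  // Leave filter blocks (the new default) and, say, index blocks uncompressed.
  opts.do_not_compress_roles = {rocksdb::CacheEntryRole::kFilterBlock,
                                rocksdb::CacheEntryRole::kIndexBlock};
  return rocksdb::NewCompressedSecondaryCache(opts);
}
```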
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11204
Test Plan: unit test added
Reviewed By: anand1976
Differential Revision: D43147698
Pulled By: pdillinger
fbshipit-source-id: db496975ae975fa18f157f93fe131a16315ac875
Summary:
Enough users of NewJemallocNodumpAllocator() with cache.h to justify keeping it. (Reverting one little part of https://github.com/facebook/rocksdb/issues/11192)
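For reference, a hedged sketch of the typical usage pattern that relies on this staying reachable via `cache.h` (parameter shapes assumed from the public headers; the allocator call fails gracefully when jemalloc is unavailable):
```cpp
#include <memory>
#include "rocksdb/cache.h"
#include "rocksdb/memory_allocator.h"

std::shared_ptr<rocksdb::Cache> MakeCacheWithNodumpAllocator() {
  rocksdb::JemallocAllocatorOptions jopts;
  std::shared_ptr<rocksdb::MemoryAllocator> allocator;
  rocksdb::Status s = rocksdb::NewJemallocNodumpAllocator(jopts, &allocator);

  rocksdb::LRUCacheOptions copts;
  copts.capacity = 1 << 30;  // 1 GiB
  if (s.ok()) {
    // Cache memory allocated through this allocator is excluded from core
    // dumps (jemalloc "nodump" arenas).
    copts.memory_allocator = allocator;
  }
  return rocksdb::NewLRUCache(copts);
}
```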
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11229
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D43337140
Pulled By: pdillinger
fbshipit-source-id: 886b27b96b395619a4209f51b9b7787f4fe89e57
Summary:
The new `MultiGetEntity` API can be used to get a consistent view of
a batch of keys, with the results presented as wide-column entities.
Similarly to `GetEntity` and the iterator's `columns` API, if the entry
corresponding to the key is a wide-column entity to start with, it is
returned as-is, and if it is a plain key-value, it is wrapped into an entity
with a single default column.
Implementation-wise, the new API shares the logic of the batched `MultiGet`
API (via the `MultiGetCommon` methods). Both single-CF and multi-CF
`MultiGetEntity` APIs are provided, and blobs are also supported.
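A hedged usage sketch for the single-CF overload; the parameter order is assumed to mirror the batched `MultiGet` API:
```cpp
#include <array>
#include "rocksdb/db.h"
#include "rocksdb/wide_columns.h"

void ReadTwoEntities(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf) {
  constexpr size_t kNumKeys = 2;
  std::array<rocksdb::Slice, kNumKeys> keys{{"key1", "key2"}};
  std::array<rocksdb::PinnableWideColumns, kNumKeys> results;
  std::array<rocksdb::Status, kNumKeys> statuses;

  db->MultiGetEntity(rocksdb::ReadOptions(), cf, kNumKeys, keys.data(),
                     results.data(), statuses.data());

  for (size_t i = 0; i < kNumKeys; ++i) {
    if (!statuses[i].ok()) continue;
    for (const rocksdb::WideColumn& column : results[i].columns()) {
      // A plain key-value shows up as a single default (anonymous) column.
      (void)column.name();
      (void)column.value();
    }
  }
}
```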
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11222
Test Plan: `make check`
Reviewed By: akankshamahajan15
Differential Revision: D43256950
Pulled By: ltamasi
fbshipit-source-id: 47fb2cb7e2d0470e3580f43fdb2fe9e51f0e7005
Summary:
Fix the regression script for the async_io benchmark, which was using incorrect ops and thread counts and the wrong benchmark name when reporting results.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11224
Test Plan: Ran manually
Reviewed By: anand1976
Differential Revision: D43287658
Pulled By: akankshamahajan15
fbshipit-source-id: 433e2caa0e51268e72a875549ab8f7f92a7a4216
Summary:
One system reports that a dependency in docs/Gemfile.lock is out-of-date and poses a risk. I don't see the point of having Gemfile.lock checked in and dealing with dependencies all the time. It can be regenerated using `bundle install`. Update the Gemfile to a later version too.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11173
Test Plan:
Run
bundle install
bundle exec jekyll serve --host=0.0.0.0
and see website working locally.
Reviewed By: ajkr
Differential Revision: D42897698
fbshipit-source-id: aeaf065c28b8f6582f1af1b5ffbbd5fa194afe24
Summary:
I missed a stress test code sanity check when enabling this combination of tests. This PR addresses that: the "iter_start_ts" option for the user-defined timestamp feature is not supported when BlobDB is enabled, so it's disabled for now.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11199
Test Plan:
Locally always enable BlobDB and run
tools/db_crashtest.py --stress_cmd=./db_stress --cleanup_cmd='' --enable_ts whitebox --random_kill_odd 888887
Reviewed By: ltamasi
Differential Revision: D43245657
Pulled By: jowlyzhang
fbshipit-source-id: 4cae19817bb1afd50a76f9e0e49f006fb5c0b211
Summary:
The files in `port/`, such as `port_posix.h`, are layered over the system libraries, so they shouldn't include DB-specific files like `options.h`. This PR removes this dependency.
# How
The reason that `port_posix.h` (or `port_win.h`) includes `options.h` is to use `CpuPriority`, as there is a method `SetCpuPriority()` in `port_posix.h` that uses `CpuPriority`.
- I think `SetCpuPriority()` makes sense to stay in `port_posix.h`, as it has a platform-dependent implementation
- The `CpuPriority` enum is defined in `env.h` but used in both `rocksdb/include` and `port/`.
Hence, let us define the `CpuPriority` enum in a common file, say `port_defs.h`, such that both `rocksdb/include` and `port/` can include it.
When we remove this dependency, some other files no longer compile because they can't find definitions, so header includes are added to resolve that.
# Test
make all check -j
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11214
Reviewed By: pdillinger
Differential Revision: D43196910
Pulled By: guowentian
fbshipit-source-id: 70deccb72844cfb08fcc994f76c6ef6df5d55ab9
Summary:
Same as title
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11215
Test Plan: Ran manually
Reviewed By: pdillinger
Differential Revision: D43194634
Pulled By: akankshamahajan15
fbshipit-source-id: 336a08a9076b222d7000e4eb2a87fc36b863b05b
Summary:
The definition of the Cache class should not be needed by the vast majority of RocksDB users, so I think it is just distracting to include it in cache.h, which is primarily needed for configuring and creating caches. This change moves the class to a new header advanced_cache.h. It is just cut-and-paste except for modifying the class API comment.
In general, operations on shared_ptr<Cache> should continue to work when only a forward declaration of Cache is available, as long as all the Cache instances provided are already shared_ptr. See https://stackoverflow.com/a/17650101/454544
Also, the most common way to customize a Cache is by wrapping an existing implementation, so it makes sense to provide CacheWrapper in the public API. This was a cut-and-paste job except removing the implementation of Name() so that derived classes must provide it.
Intended follow-up: consolidate Release() into one function to reduce customization bugs / confusion
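A minimal sketch of the wrapping pattern now possible with the public `CacheWrapper`, assuming its constructor takes the wrapped `shared_ptr<Cache>` and that `Name()` is the only member a derived class must provide:
```cpp
#include <memory>
#include "rocksdb/advanced_cache.h"

// Pass-through cache that could be extended with instrumentation; only
// Name() is supplied here, everything else forwards to the wrapped cache.
class PassThroughCache : public rocksdb::CacheWrapper {
 public:
  explicit PassThroughCache(std::shared_ptr<rocksdb::Cache> target)
      : CacheWrapper(std::move(target)) {}

  const char* Name() const override { return "PassThroughCache"; }
};

std::shared_ptr<rocksdb::Cache> WrapCache(std::shared_ptr<rocksdb::Cache> c) {
  return std::make_shared<PassThroughCache>(std::move(c));
}
```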
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11192
Test Plan: `make check`
Reviewed By: anand1976
Differential Revision: D43055487
Pulled By: pdillinger
fbshipit-source-id: 7b05492df35e0f30b581b4c24c579bc275b6d110
Summary:
Example failure:
```
[ RUN ] DBWriteTestInstance/DBWriteTest.LockWALInEffect/1
db/db_write_test.cc:646: Failure
Put("key3", "value")
Corruption: Not active
```
Presumably from a background compaction prior to Put.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11209
Test Plan: watch CI
Reviewed By: akankshamahajan15
Differential Revision: D43147727
Pulled By: pdillinger
fbshipit-source-id: a1c34ac5ab124bfe2f23205a30777990056e9082
Summary:
In anticipation of using this to represent sets of CacheEntryRole for including or excluding kinds of blocks in block cache tiers, add significant new features to SmallEnumSet, including at least:
* List initialization
* Applicative constexpr operations
* copy/move/equality ops
* begin/end/const_iterator for iteration
* Better comments
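A rough sketch of the new surface using a hypothetical enum; the header path and template parameters are assumptions, and the unit tests remain the authoritative reference:
```cpp
#include <cassert>
#include "rocksdb/data_structure.h"  // SmallEnumSet (assumed location)

enum class Color { kRed, kGreen, kBlue };  // hypothetical enum for the demo

using ColorSet = rocksdb::SmallEnumSet<Color, Color::kBlue>;

void Demo() {
  // List initialization
  const ColorSet warm{Color::kRed};
  const ColorSet all{Color::kRed, Color::kGreen, Color::kBlue};

  // Copy / equality ops
  ColorSet copy = warm;
  assert(copy == warm);
  assert(!(copy == all));

  // Iteration via begin()/end()
  int count = 0;
  for (Color c : all) {
    (void)c;
    ++count;
  }
  assert(count == 3);
}
```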
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11178
Test Plan: unit tests added/expanded
Reviewed By: ltamasi
Differential Revision: D42973723
Pulled By: pdillinger
fbshipit-source-id: 40783486feda931c3f7c6fcc9a300acd6a4b0a0a
Summary:
We've seen many instances of
build-linux-static_lib-alt_namespace-status_checked failing like this:
```
g++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make: *** [Makefile:2507: utilities/transactions/transaction_test.o]
Error 1
```
It's understandable that so many static linking jobs could exhaust memory.
The executor only has 16 vcores, so going from 32 down to 24 shouldn't hurt build time.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11206
Test Plan: will watch CI
Reviewed By: ajkr
Differential Revision: D43137246
Pulled By: pdillinger
fbshipit-source-id: 050b0f700c285dd913bcae8b4a76a44d04bb0356
Summary:
Fix a bug in the calculation of the input buffer address/offset in log_reader.cc. The bug: when consecutive fragments of a compressed record are located at the same offset in the log reader buffer, the second fragment's input buffer is treated as a leftover from the previous input buffer. As a result, the offset in the `ZSTD_inBuffer` is not reset.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11198
Test Plan: Add a unit test in log_test.cc that fails without the fix and passes with it.
Reviewed By: ajkr, cbi42
Differential Revision: D43102692
Pulled By: anand1976
fbshipit-source-id: aa2648f4802c33991b76a3233c5a58d4cc9e77fd
Summary:
The patch adds compaction filter support for wide-column entities by introducing
a new `CompactionFilter` API called `FilterV3`. This API is called for regular
key-values, merge operands, and wide-column entities as well. It is passed the
existing value/operand or wide-column structure and it can update the value or
columns or keep/delete/etc. the key-value as usual. For compatibility, the default
implementation of `FilterV3` keeps all wide-column entities and falls back to calling
`FilterV2` for plain old key-values and merge operands.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11196
Test Plan: `make check`
Reviewed By: akankshamahajan15
Differential Revision: D43094147
Pulled By: ltamasi
fbshipit-source-id: 75acabe9a35254f7f404ba6173ee9c2774382ebd
Summary:
**Context/Summary:**
As instructed by convenience.h comments, a few deprecated APIs are removed.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11120
Test Plan:
- make check & CI
- eyeball check on test semantics.
Reviewed By: pdillinger
Differential Revision: D42937507
Pulled By: hx235
fbshipit-source-id: a9e4709387da01b1d0e9148c2e210f02e9746ee1
Summary:
Continuous performance testing indicates there's a small performance hit with shared library (-fPIC) builds, so while retaining the motivation for https://github.com/facebook/rocksdb/issues/11168, we set the default for DEBUG_LEVEL=0 Makefile builds back to LIB_MODE=static.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11195
Test Plan: CI, with some updated checks and removal of some now obsolete LIB_MODE overrides
Reviewed By: cbi42
Differential Revision: D43090576
Pulled By: pdillinger
fbshipit-source-id: 755fe5d07005f85caf24e16f90228ffd46a6e250
Summary:
Currently the option `kForceOptimized` is not included in `CompactRangeOptions.BottommostLevelCompaction`.
This PR adds this option.
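For context, a minimal sketch of the corresponding usage in the core C++ API, where this enum value already exists (not the code added by this PR; error handling omitted):
```cpp
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Force compaction of the bottommost level while avoiding re-compacting
// files just produced by the same manual compaction (unlike plain kForce).
rocksdb::Status CompactAllWithForceOptimized(rocksdb::DB* db) {
  rocksdb::CompactRangeOptions cro;
  cro.bottommost_level_compaction =
      rocksdb::BottommostLevelCompaction::kForceOptimized;
  // nullptr begin/end compacts the entire key range.
  return db->CompactRange(cro, /*begin=*/nullptr, /*end=*/nullptr);
}
```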
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11181
Reviewed By: ajkr
Differential Revision: D43056453
Pulled By: cbi42
fbshipit-source-id: 22fd53f980ab1a86c61dd42e948902542065128f
Summary:
Need to scp the .so files. Switched to tar+ssh to support symlinks, faster handling of multiple files, and compression.
Also fixing some holes in 'make clean' as I've noticed files like 'librocksdb.so.7.7.0', 'librocksdb_test_debug.so', 'librocksdb_tools_debug.so' hanging around after `make clean`
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11194
Test Plan:
Manually triggered regression test runs with change, manual `make clean`
https://fburl.com/sandcastle/gnxy5lvc
https://fburl.com/sandcastle/4pxodwh7
Reviewed By: cbi42
Differential Revision: D43069065
Pulled By: pdillinger
fbshipit-source-id: 48552b5980956784a1fdb40638d9e8ad6db51900