Summary:
ForwardIterator::SVCleanup() sometimes didn't pin superversion when it was supposed to. See the added test for the scenario. Here's the ASAN output of the added test without the fix (using `COMPILE_WITH_ASAN=1 make`): https://pastebin.com/9rD0Ywws
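For context, "pinning" here means taking a reference on the SuperVersion so it cannot be freed while cleanup still touches it; without the pin, ASAN reports a use-after-free. A toy illustration of the pattern (hypothetical names, not the actual ForwardIterator code):
```cpp
#include <atomic>

// Toy stand-in for a reference-counted SuperVersion.
struct SuperVersionLike {
  std::atomic<int> refs{1};
  void Ref() { refs.fetch_add(1, std::memory_order_relaxed); }
  // Returns true when the last reference was dropped.
  bool Unref() { return refs.fetch_sub(1, std::memory_order_acq_rel) == 1; }
};

void CleanupUsing(SuperVersionLike* sv) {
  sv->Ref();  // pin: keep sv alive while cleanup still reads from it
  // ... release iterator state that may still point into sv ...
  if (sv->Unref()) {  // unpin; free only if we held the last reference
    delete sv;
  }
}
```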
Closes https://github.com/facebook/rocksdb/pull/3415
Differential Revision: D6817414
Pulled By: al13n321
fbshipit-source-id: bc80c44ea78a3a1fa885dfa448a26111f91afb24
Summary:
Add Java-side copy constructors for:
- Options
- DBOptions
- ColumnFamilyOptions
- WriteOptions
along with unit tests to assert the copy worked.
NOTE: Unit tests are failing in Travis, but it looks like a global timeout issue; these tests pass.
Closes https://github.com/facebook/rocksdb/pull/3450
Differential Revision: D6874425
Pulled By: sagar0
fbshipit-source-id: 5bde68ea5b5225e071faea2628bf8bbf10bd65ab
Summary:
options/cf_options.cc:
77 memtable_insert_with_hint_prefix_extractor(
CID 1396208 (#1 of 1): Uninitialized scalar field (UNINIT_CTOR)
2. uninit_member: Non-static class member info_log_level is not initialized in this constructor nor in any functions that it calls.
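For context, the usual fix for a Coverity UNINIT_CTOR report is to give the member a value in every constructor, e.g. via the initializer list. A minimal sketch of the pattern with a hypothetical class (not the actual cf_options.cc code):
```cpp
// Hypothetical simplified options struct illustrating the fix pattern.
enum InfoLogLevelLike { kInfoLevel = 1 };

struct OptionsLike {
  InfoLogLevelLike info_log_level;
  // Initialize the member so no constructor path leaves it uninitialized.
  OptionsLike() : info_log_level(kInfoLevel) {}
};
```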
Closes https://github.com/facebook/rocksdb/pull/3106
Differential Revision: D6874689
Pulled By: sagar0
fbshipit-source-id: b5cd2d13915fd86d87260050f9c5d117615bbe30
Summary:
Per some discussions, this will switch our Appveyor testing to use Visual Studio 2017.
Closes https://github.com/facebook/rocksdb/pull/3445
Differential Revision: D6874918
Pulled By: gfosco
fbshipit-source-id: c5a0032ca9f37f0d3baeae35c59d850d528c3176
Summary:
`ReadaheadRandomAccessFile` is used by iterators for file reads in several cases, like in compaction when `compaction_readahead_size > 0` or `use_direct_io_for_flush_and_compaction == true`, or in user iterator when `ReadOptions::readahead_size > 0`. `ReadaheadRandomAccessFile` maintains an internal buffer for readahead data. It assumes that, if the buffer's length is less than `ReadaheadRandomAccessFile::readahead_size_`, which is fixed in the constructor, then EOF has been reached so it doesn't try reading further.
Recently, d938226af4 started calling `RandomAccessFile::Prefetch` with various lengths: 8KB, 16KB, etc. When the `RandomAccessFile` is a `ReadaheadRandomAccessFile`, it triggers the above condition and incorrectly determines EOF. If a block is partially in the readahead buffer and EOF is incorrectly decided, the result is a truncated data block.
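A simplified sketch of the faulty inference (illustrative names, not the actual ReadaheadRandomAccessFile code):
```cpp
#include <cstddef>

struct ReadaheadStateLike {
  size_t readahead_size;  // fixed buffer capacity chosen in the constructor
  size_t buffer_len;      // bytes currently held in the readahead buffer
};

bool AssumedEof(const ReadaheadStateLike& s) {
  // Buggy assumption: a buffer shorter than readahead_size_ can only mean the
  // last read hit EOF. A Prefetch() call that asked for fewer bytes (8KB,
  // 16KB, ...) also leaves a short buffer, so this check misfires and a block
  // straddling the buffer end comes back truncated.
  return s.buffer_len < s.readahead_size;
}
```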
The problem is reproducible:
```
TEST_TMPDIR=/data/compaction_bench ./db_bench -benchmarks=fillrandom -write_buffer_size=1048576 -target_file_size_base=1048576 -block_size=18384 -use_direct_io_for_flush_and_compaction=true
...
put error: Corruption: truncated block read from /data/compaction_bench/dbbench/000014.sst offset 20245, expected 10143 bytes, got 8427
```
Closes https://github.com/facebook/rocksdb/pull/3454
Differential Revision: D6869405
Pulled By: ajkr
fbshipit-source-id: 87001c299e7600a37c0dcccbd0368e0954c929cf
Summary:
Currently the DB does not accept duplicate keys (keys with the same user key and the same sequence number). If the memtable returns false when it receives such a key, we can use this signal to properly increase the sequence number in the rare case where a write batch written to the DB under WritePrepared transactions contains a duplicate key.
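A rough sketch of how the signal can be used (hypothetical interface; the real change lives in the WritePrepared write path):
```cpp
#include <cstdint>
#include <string>

// Hypothetical memtable stand-in: Add() returns false when it sees a
// duplicate (same user key, same sequence number).
struct MemTableLike {
  bool Add(uint64_t /*seq*/, const std::string& /*key*/,
           const std::string& /*value*/) {
    return true;  // stub; the real memtable rejects duplicates
  }
};

void InsertBatchEntry(MemTableLike* mem, uint64_t* seq,
                      const std::string& key, const std::string& value) {
  if (!mem->Add(*seq, key, value)) {
    // Duplicate detected: consume an extra sequence number so the two
    // versions of the key stay distinguishable, then retry.
    ++(*seq);
    mem->Add(*seq, key, value);
  }
}
```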
Closes https://github.com/facebook/rocksdb/pull/3418
Differential Revision: D6822412
Pulled By: maysamyabandeh
fbshipit-source-id: adea3ce5073131cd38ed52b16bea0673b1a19e77
Summary:
Previously, `ReadBlockFromFile` for data blocks was only measured when reading a block to populate the block cache. This PR adds the corresponding measurements for users who have disabled the block cache.
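A hedged sketch of how a user could observe this from the application side; the choice of `READ_BLOCK_GET_MICROS` as the relevant histogram is our assumption, not stated in the PR:
```cpp
#include <cstdio>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/statistics.h"

// `db` is assumed to be open with options.statistics set via
// rocksdb::CreateDBStatistics() and with the block cache disabled.
void PrintBlockReadLatency(rocksdb::DB* db, const rocksdb::Options& options) {
  std::string value;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), "some_key", &value);
  (void)s;

  rocksdb::HistogramData hist;
  options.statistics->histogramData(rocksdb::READ_BLOCK_GET_MICROS, &hist);
  std::printf("data block read micros: median=%.1f p99=%.1f\n", hist.median,
              hist.percentile99);
}
```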
Closes https://github.com/facebook/rocksdb/pull/3442
Differential Revision: D6848671
Pulled By: ajkr
fbshipit-source-id: bb4bbe1797fa2cc1d9a5bad44891af2b55384b41
Summary:
By default, if ubsan detects a problem, it outputs a “runtime error:” message and in most cases continues executing the program.
To make tests abort on errors, the option `-fno-sanitize-recover` is needed. [link](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html)
Closes https://github.com/facebook/rocksdb/pull/3447
Differential Revision: D6854654
Pulled By: miasantreble
fbshipit-source-id: c48e892b0b38307029df38a67adda0e24257e481
Summary:
Updated the test case to handle tmpfs mounted at directories different from "/dev/shm/".
Closes https://github.com/facebook/rocksdb/pull/3440
Differential Revision: D6848213
Pulled By: ajkr
fbshipit-source-id: 465e9dbf0921d0930161f732db6b3766bb030589
Summary:
Without this patch, ubsan_check is currently failing with this error:
```
utilities/transactions/write_prepared_transaction_test.cc:369:63: runtime error: member call on address 0x0000051649f8 which does not point to an object of type 'WithParamInterface'
0x0000051649f8: note: object has invalid vptr
```
Tested by `COMPILE_WITH_UBSAN=1 make -j32 transaction_test` and running `./write_prepared_transaction_test --gtest_filter=TwoWriteQueues/SnapshotConcurrentAccessTest.SnapshotConcurrentAccessTest1/0`
Closes https://github.com/facebook/rocksdb/pull/3444
Differential Revision: D6850087
Pulled By: maysamyabandeh
fbshipit-source-id: 5b254da8504b8757f7aec8a820ad464154da1a1d
Summary:
Using `DeleteFilesInRange` to delete files in a lot of ranges can be slow, because
`VersionSet::LogAndApply` is expensive.
This PR adds a new `DeleteFilesInRange` function to delete files in multiple
ranges at once.
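A hedged usage sketch; the name `DeleteFilesInRanges` and its `RangePtr`-array signature below are assumptions about what the new multi-range API looks like, based on the description above:
```cpp
#include "rocksdb/convenience.h"
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/slice.h"

// `db` is assumed to be an open rocksdb::DB*. Both ranges are handled with a
// single VersionSet::LogAndApply instead of one per range.
rocksdb::Status DeleteTwoRanges(rocksdb::DB* db) {
  rocksdb::Slice a("a"), b("b"), x("x"), y("y");
  rocksdb::RangePtr ranges[] = {rocksdb::RangePtr(&a, &b),
                                rocksdb::RangePtr(&x, &y)};
  return rocksdb::DeleteFilesInRanges(db, db->DefaultColumnFamily(), ranges,
                                      2 /* number of ranges */);
}
```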
Closes https://github.com/facebook/rocksdb/issues/2951
Closes https://github.com/facebook/rocksdb/pull/3431
Differential Revision: D6849228
Pulled By: ajkr
fbshipit-source-id: daeedcabd8def4b1d9ee95a58266dee77b5d68cb
Summary:
Previously, if `checkpoint_dir` contained a trailing slash, we'd attempt to create the `.tmp` directory under `checkpoint_dir`, due to simply concatenating `checkpoint_dir + ".tmp"`. This failed because `checkpoint_dir` hadn't been created yet and our directory creation is non-recursive. This PR fixes the issue by always creating the `.tmp` directory in the same parent as `checkpoint_dir`, by stripping trailing slashes before concatenating.
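A simplified sketch of the path handling (hypothetical helper, not the actual checkpoint code):
```cpp
#include <string>

// Strip trailing slashes so the temporary directory is created next to
// checkpoint_dir rather than underneath it.
std::string TmpCheckpointDir(const std::string& checkpoint_dir) {
  std::string dir = checkpoint_dir;
  while (dir.size() > 1 && dir.back() == '/') {
    dir.pop_back();
  }
  // "/path/to/ckpt/" and "/path/to/ckpt" both become "/path/to/ckpt.tmp",
  // which lives in the same (already existing) parent directory.
  return dir + ".tmp";
}
```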
Closes https://github.com/facebook/rocksdb/pull/3275
Differential Revision: D6574952
Pulled By: ajkr
fbshipit-source-id: a6daa6777a901eac2460cd0140c9515f7241aefc
Summary:
In the test, there can be a deadlock between the background flush thread and the foreground main thread, as follows:
* background flush thread:
  - holding the db mutex, while
  - waiting on the "DBImpl::FlushMemTableToOutputFile:BeforeInstallSV" sync point.
* foreground thread:
  - waiting for the db mutex to write "key2".
Fixed by letting the background flush thread wait without holding the db mutex.
Closes https://github.com/facebook/rocksdb/pull/3436
Differential Revision: D6841334
Pulled By: yiwu-arbug
fbshipit-source-id: b020768ac94e166e40953c5d09e505515a5f244d
Summary:
SnapshotConcurrentAccessTest sometimes times out when running on the test infra. This patch splits the test into smaller sub-tests to avoid the timeout. It also benefits from lower run-time of each sub-test and increases the coverage of the test. The overall run-time of each final sub-test is at most half of the original test so we should no longer see a timeout.
Closes https://github.com/facebook/rocksdb/pull/3435
Differential Revision: D6839427
Pulled By: maysamyabandeh
fbshipit-source-id: d53fdb157109e2438ca7fe447d0cf4b71f304bd8
Summary:
Added a test for three dynamic universal compaction options, in the realm of read amplification:
- size_ratio
- min_merge_width
- max_merge_width
Also updated DynamicUniversalCompactionSizeAmplification by adding a check on compaction reason.
Found a bug in compaction reason setting while working on this PR, and fixed it in #3412.
TODO for later: Still to add tests for these options: compression_size_percent, stop_style and trivial_move.
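A hedged example of exercising these options dynamically; the assumption here is that they are changed through `DB::SetOptions` with a `compaction_options_universal` option string:
```cpp
#include "rocksdb/db.h"

// `db` is assumed to be an open rocksdb::DB* using universal compaction.
// The values are illustrative only.
rocksdb::Status TuneUniversalReadAmp(rocksdb::DB* db) {
  return db->SetOptions(
      {{"compaction_options_universal",
        "{size_ratio=10;min_merge_width=2;max_merge_width=8;}"}});
}
```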
Closes https://github.com/facebook/rocksdb/pull/3419
Differential Revision: D6822217
Pulled By: sagar0
fbshipit-source-id: 074573fca6389053cbac229891a0163f38bb56c4
Summary:
ReadOptions.fill_cache is set (to false) for compaction inputs and can be set by users in their queries too. When false, it tells RocksDB not to put the data blocks it reads into the block cache.
The memory used by those data blocks is, however, not trackable by users.
To make the system more manageable, we can charge the block against the block cache while it is in use, and release the charge afterwards.
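For reference, the user-side knob in question (a minimal sketch):
```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// `db` is assumed to be an open rocksdb::DB*. With fill_cache = false the data
// blocks read by this scan are not inserted into the block cache; with this
// change they are charged against the block cache while in use and the charge
// is released afterwards.
void ScanWithoutPollutingCache(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.fill_cache = false;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // consume it->key() / it->value()
  }
}
```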
Closes https://github.com/facebook/rocksdb/pull/3333
Differential Revision: D6670230
Pulled By: miasantreble
fbshipit-source-id: ab848d3ed286bd081a13ee1903de357b56cbc308
Summary:
This is to avoid a run-time error: fail db_bench immediately if the cuckoo table is used but mmap_read is not specified.
Closes https://github.com/facebook/rocksdb/pull/3420
Differential Revision: D6838284
Pulled By: siying
fbshipit-source-id: 20893fa28d40fadc31e4ff154bed02f5a1bad341
Summary: Grandfather in super old lint issues to make a clean slate for moving forward that allows us to have stronger enforcement on new issues.
Reviewed By: yiwu-arbug
Differential Revision: D6821806
fbshipit-source-id: 22797d31ec58e9eb0255d3b66fedfcfcb0dc127c
Summary:
It failed every time. I guess people usually ran with assertions disabled.
Closes https://github.com/facebook/rocksdb/pull/3422
Differential Revision: D6822984
Pulled By: ajkr
fbshipit-source-id: 2e90db75618b26ac1c46ddfa9e03c095c7bf16e3
Summary:
This replaces a vague warning about the mostly-obsolete ext3 filesystem with
a more detailed note about a historical bug in the still-relevant ext4.
Fixes #3410
Closes https://github.com/facebook/rocksdb/pull/3421
Differential Revision: D6834881
Pulled By: siying
fbshipit-source-id: 7771ef5c89a54c0ac17821680779c48178d0b400
Summary:
Allow StackableDB to optionally take a shared_ptr on construction, and thus hold shared ownership of the underlying DB.
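A hedged usage sketch, assuming the new constructor is `StackableDB(std::shared_ptr<DB>)` next to the existing raw-pointer one:
```cpp
#include <memory>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/utilities/stackable_db.h"

// Open a DB and hand shared ownership to a StackableDB wrapper; the wrapper
// keeps the underlying DB alive for as long as the wrapper (or any other
// shared_ptr holder) exists.
std::unique_ptr<rocksdb::StackableDB> OpenWrapped(const std::string& path) {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* raw = nullptr;
  if (!rocksdb::DB::Open(options, path, &raw).ok()) {
    return nullptr;
  }
  std::shared_ptr<rocksdb::DB> shared(raw);
  return std::unique_ptr<rocksdb::StackableDB>(
      new rocksdb::StackableDB(shared));  // assumed shared_ptr constructor
}
```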
Closes https://github.com/facebook/rocksdb/pull/3423
Differential Revision: D6824163
Pulled By: yiwu-arbug
fbshipit-source-id: dbdc30c42e007533a987ef413785e192340f03eb
Summary:
FilePrefetchBuffer::Prefetch currently rounds the offset up, which does not fit its new use case of prefetching index/filter blocks, as it would skip over some of the offsets that were requested to be prefetched. This patch rounds down instead.
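A simplified illustration of the difference (illustrative helpers, not the actual FilePrefetchBuffer code):
```cpp
#include <cstddef>

// Rounding the start offset *up* to the next alignment boundary can skip the
// beginning of the requested range; rounding *down* never does.
size_t RoundUp(size_t offset, size_t alignment) {
  return ((offset + alignment - 1) / alignment) * alignment;
}
size_t RoundDown(size_t offset, size_t alignment) {
  return (offset / alignment) * alignment;
}
// Example: a prefetch requested at offset 5000 with 4096-byte alignment.
// RoundUp   -> 8192: bytes 5000..8191 are never prefetched (the bug).
// RoundDown -> 4096: the requested offset is covered.
```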
Fixes #3180
Closes https://github.com/facebook/rocksdb/pull/3413
Differential Revision: D6816392
Pulled By: maysamyabandeh
fbshipit-source-id: 3aaeaf59c55d72b61dacfae6d4a8e65eccb3c553
Summary:
While writing tests for dynamic Universal Compaction options, I found that the compaction reasons we set for size-ratio based and sorted-run based universal compactions are swapped with each other. Fixed it.
Closes https://github.com/facebook/rocksdb/pull/3412
Differential Revision: D6820540
Pulled By: sagar0
fbshipit-source-id: 270a188968ba25b2c96a8339904416c4c87ff5b3
Summary:
This change improves the performance of iterators doing long range scans (e.g. big/full table scans in MyRocks) by using readahead and prefetching additional data on each disk IO. This prefetching is automatically enabled on noticing more than 2 IOs for the same table file during iteration. The readahead size starts with 8KB and is exponentially increased on each additional sequential IO, up to a max of 256 KB. This helps in cutting down the number of IOs needed to complete the range scan.
Constraints:
- The prefetched data is stored by the OS in the page cache, so this currently works only for non-direct-read use cases, i.e. applications which use the page cache. (Direct I/O support will be enabled in a later PR).
- This gets currently enabled only when ReadOptions.readahead_size = 0 (which is the default value).
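A minimal sketch of the growth policy described above (hypothetical state struct, not the actual iterator code):
```cpp
#include <algorithm>
#include <cstddef>

// Enable readahead after two IOs on the same file, start at 8KB, and double
// on each further sequential IO, capped at 256KB.
constexpr size_t kMinReadsBeforeReadahead = 2;
constexpr size_t kInitReadaheadSize = 8 * 1024;
constexpr size_t kMaxReadaheadSize = 256 * 1024;

struct IterReadaheadState {
  size_t num_file_reads = 0;
  size_t readahead_size = kInitReadaheadSize;

  // Returns how many bytes to read ahead for the next sequential IO.
  size_t OnSequentialRead() {
    ++num_file_reads;
    if (num_file_reads <= kMinReadsBeforeReadahead) {
      return 0;  // automatic readahead not triggered yet
    }
    size_t current = readahead_size;
    readahead_size = std::min(readahead_size * 2, kMaxReadaheadSize);
    return current;
  }
};
```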
Thanks to siying for the original idea and implementation.
**Benchmarks:**
Data fill:
```
TEST_TMPDIR=/data/users/$USER/benchmarks/iter ./db_bench -benchmarks=fillrandom -num=1000000000 -compression_type="none" -level_compaction_dynamic_level_bytes
```
Do a long range scan: Seekrandom with large number of nexts
```
TEST_TMPDIR=/data/users/$USER/benchmarks/iter ./db_bench -benchmarks=seekrandom -duration=60 -num=1000000000 -use_existing_db -seek_nexts=10000 -statistics -histogram
```
Page cache was cleared before each experiment with the command:
```
sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
```
```
Before:
seekrandom : 34020.945 micros/op 29 ops/sec; 32.5 MB/s (1636 of 1999 found)
With this change:
seekrandom : 8726.912 micros/op 114 ops/sec; 126.8 MB/s (5702 of 6999 found)
```
~3.9X performance improvement.
Also verified with strace and gdb that the readahead size is increasing as expected.
```
strace -e readahead -f -T -t -p <db_bench process pid>
```
Closes https://github.com/facebook/rocksdb/pull/3282
Differential Revision: D6586477
Pulled By: sagar0
fbshipit-source-id: 8a118a0ed4594fbb7f5b1cafb242d7a4033cb58c
Summary:
The default changed in 6a14f7a976 but this comment was not updated.
Closes https://github.com/facebook/rocksdb/pull/3409
Differential Revision: D6808264
Pulled By: maysamyabandeh
fbshipit-source-id: 0d7e2a054eb181e9a144fcb783cf0b2c77219bc0
Summary:
This is a pre-cleanup before a major block-based table iterator refactoring. BlockBasedTable::NewDataBlockIterator() will always return a BlockIter. This simplifies the logic and code and enables further refactoring and optimization.
Closes https://github.com/facebook/rocksdb/pull/3398
Differential Revision: D6780165
Pulled By: siying
fbshipit-source-id: 273f7dc896724f682c0118fb69a359d9cc4418b4
Summary:
In DBIter, Prev() calls FindValueForCurrentKey() to search the current value backward. If it finds that there are too many stale values being skipped, it falls back to FindValueForCurrentKeyUsingSeek(), seeking directly to the key with the snapshot sequence. After introducing read_callback, however, the key it seeks to might not be visible according to read_callback. It thus needs to keep searching forward until the first visible value.
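A simplified sketch of the forward search for the first visible entry (hypothetical interfaces; the real logic lives in DBIter and consults a ReadCallback):
```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Internal entries for one user key, ordered as in the DB (newest sequence
// number first). After seeking, keep moving forward (towards older entries)
// until the callback reports a visible one.
struct EntryLike {
  uint64_t seq;
  int value;
};

const EntryLike* FirstVisible(const std::vector<EntryLike>& entries,
                              const std::function<bool(uint64_t)>& is_visible) {
  for (const EntryLike& e : entries) {
    if (is_visible(e.seq)) {
      return &e;  // first entry visible under the read callback / snapshot
    }
  }
  return nullptr;  // no visible version of this key
}
```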
Closes https://github.com/facebook/rocksdb/pull/3382
Differential Revision: D6756148
Pulled By: yiwu-arbug
fbshipit-source-id: 064e39b1eec5e083af1c10142600f26d1d2697be
Summary:
To prevent users who implement the Iterator interface from failing to implement SeekForPrev by mistake.
Closes https://github.com/facebook/rocksdb/pull/3402
Differential Revision: D6790681
Pulled By: yiwu-arbug
fbshipit-source-id: bd75b8ced30208982e0a1414d34384d93496827a
Summary:
In the ColumnFamilySet destructor, assert that it holds the last reference to each cfd before destroying them.
Closes #3112
Closes https://github.com/facebook/rocksdb/pull/3397
Differential Revision: D6777967
Pulled By: yiwu-arbug
fbshipit-source-id: 60b19070e0c194b3b6146699140c1d68777866cb
Summary:
FIFO and Universal compaction options were recently made dynamic, but I forgot to update these comments. These would mislead anyone who is reading the code.
Closes https://github.com/facebook/rocksdb/pull/3399
Differential Revision: D6786358
Pulled By: sagar0
fbshipit-source-id: 57cfc412f63deaee29bbd82b863304821d60057d
Summary:
Now in leveled compaction, we allocate solely based on the output target file size. If the total input size is smaller than that, we should use the total input size instead. Also, cap the allocation size at 1GB.
Closes https://github.com/facebook/rocksdb/pull/3385
Differential Revision: D6762363
Pulled By: siying
fbshipit-source-id: e30906f6e9bff3ec847d2166e44cb49c92f98a13
Summary:
FreeBSD uses jemalloc as the base malloc implementation.
The patch has been functional on FreeBSD as of the MariaDB 10.2 port.
Closes https://github.com/facebook/rocksdb/pull/3386
Differential Revision: D6765742
Pulled By: yiwu-arbug
fbshipit-source-id: d55dbc082eecf640ef3df9a21f26064ebe6587e8
Summary:
When blob_files is empty, std::min_element will return blob_files.end(), which cannot be dereferenced. Fixing it.
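The general shape of the fix, as a minimal sketch (not the actual BlobDB code):
```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Guard std::min_element against an empty container: dereferencing end() is
// undefined behavior.
uint64_t OldestFileNumber(const std::vector<uint64_t>& blob_files) {
  if (blob_files.empty()) {
    return 0;  // sentinel meaning "no blob files yet"
  }
  return *std::min_element(blob_files.begin(), blob_files.end());
}
```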
Closes https://github.com/facebook/rocksdb/pull/3387
Differential Revision: D6764927
Pulled By: yiwu-arbug
fbshipit-source-id: 86f78700132be95760d35ac63480dfd3a8bbe17a
Summary:
A Flush() call could wait indefinitely if min_write_buffer_number_to_merge is used. Consider the sequence:
1. User calls Flush() with flush_options.wait = true.
2. The manual flush starts in the background.
3. A new memtable becomes immutable because of writes. The new memtable will not trigger a flush if min_write_buffer_number_to_merge is not reached.
4. The manual flush finishes.
Because the new memtable created at step 3 is not flushed, the previous logic of WaitForFlushMemTable() kept waiting, even though the memtables it intended to flush had already been flushed.
Now, instead of only checking whether there are any more memtables to flush, WaitForFlushMemTable() also checks the ID of the earliest memtable. If that ID is larger than the ID of the latest memtable at the time the flush was initiated, then all memtables present at flush start have been flushed.
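A simplified sketch of the new wait condition (hypothetical names; the real check lives in WaitForFlushMemTable):
```cpp
#include <cstdint>
#include <list>

// Hypothetical immutable-memtable record: each memtable gets a monotonically
// increasing ID when it is created.
struct ImmMemTableLike {
  uint64_t id;
};

// Wait predicate: it is no longer required that the immutable list be empty;
// it is enough that every memtable that already existed when the flush was
// initiated (id <= flush_target_id) has been flushed and removed.
bool FlushCompleted(const std::list<ImmMemTableLike>& imm,
                    uint64_t flush_target_id) {
  for (const ImmMemTableLike& m : imm) {
    if (m.id <= flush_target_id) {
      return false;  // a memtable from before the flush request is pending
    }
  }
  return true;  // only memtables created after the flush request remain
}
```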
Closes https://github.com/facebook/rocksdb/pull/3378
Differential Revision: D6746789
Pulled By: yiwu-arbug
fbshipit-source-id: 35e698f71c7f90b06337a93e6825f4ea3b619bfa