Do not put blobs read during compaction into cache (#10457)

Summary:
During compaction, blobs are currently read using the default
`ReadOptions`, which has the `fill_cache` flag set to true. Earlier,
this didn't make any difference since we didn't have a blob cache;
however, now we have to explicitly set this flag to false to avoid
polluting the cache during compaction.
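
For context, `ReadOptions::fill_cache` is a general-purpose knob: when it is true (the default), data read from storage on a cache miss is inserted into the block cache, and, with a blob cache configured, into the blob cache as well. Below is a minimal sketch of the pattern against the public RocksDB API; the database path and key are illustrative, not taken from this patch.

#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  // Illustrative path; any writable location works.
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/fill_cache_demo", &db);
  assert(s.ok());

  // fill_cache defaults to true, so blocks (and blobs, once a blob cache
  // is configured) read from storage are inserted into the cache. Clearing
  // it keeps a one-off read or scan from evicting hot entries.
  rocksdb::ReadOptions read_options;
  read_options.fill_cache = false;

  std::string value;
  s = db->Get(read_options, "key", &value);  // may be NotFound; cache untouched

  delete db;
  return 0;
}

The patch applies the same two lines (declare a `ReadOptions`, clear `fill_cache`) at the point where `CompactionIterator` creates its `BlobFetcher`; see the last hunk below.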

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10457

Test Plan: `make check`

Reviewed By: riversand963

Differential Revision: D38333528

Pulled By: ltamasi

fbshipit-source-id: 5b4d49a1e39543bee73c7df2aa9194fb101875e2
Branch: main
Author: Levi Tamasi (committed by Facebook GitHub Bot)
Parent: fbfcf5cbcd
Commit: cc8ded6152
Changed files:
  1. HISTORY.md (1 line changed)
  2. db/blob/db_blob_compaction_test.cc (36 lines changed)
  3. db/compaction/compaction_iterator.cc (5 lines changed)

HISTORY.md:

@@ -18,6 +18,7 @@
 * Fix a bug in `FIFOCompactionPicker::PickTTLCompaction` where total_size calculating might cause underflow
 * Fix data race bug in hash linked list memtable. With this bug, read request might temporarily miss an old record in the memtable in a race condition to the hash bucket.
 * Fix a bug that `best_efforts_recovery` may fail to open the db with mmap read.
+* Fixed a bug where blobs read during compaction would pollute the cache.
 
 ### Behavior Change
 * Added checksum handshake during the copying of decompressed WAL fragment. This together with #9875, #10037, #10212, #10114 and #10319 provides end-to-end integrity protection for write batch during recovery.

db/blob/db_blob_compaction_test.cc:

@@ -798,6 +798,42 @@ TEST_F(DBBlobCompactionTest, CompactionReadaheadMerge) {
   Close();
 }
 
+TEST_F(DBBlobCompactionTest, CompactionDoNotFillCache) {
+  Options options = GetDefaultOptions();
+
+  options.enable_blob_files = true;
+  options.min_blob_size = 0;
+  options.enable_blob_garbage_collection = true;
+  options.blob_garbage_collection_age_cutoff = 1.0;
+  options.disable_auto_compactions = true;
+  options.statistics = CreateDBStatistics();
+
+  LRUCacheOptions cache_options;
+  cache_options.capacity = 1 << 20;
+  cache_options.metadata_charge_policy = kDontChargeCacheMetadata;
+
+  options.blob_cache = NewLRUCache(cache_options);
+
+  Reopen(options);
+
+  ASSERT_OK(Put("key", "lime"));
+  ASSERT_OK(Put("foo", "bar"));
+  ASSERT_OK(Flush());
+
+  ASSERT_OK(Put("key", "pie"));
+  ASSERT_OK(Put("foo", "baz"));
+  ASSERT_OK(Flush());
+
+  constexpr Slice* begin = nullptr;
+  constexpr Slice* end = nullptr;
+
+  ASSERT_OK(db_->CompactRange(CompactRangeOptions(), begin, end));
+
+  ASSERT_EQ(options.statistics->getTickerCount(BLOB_DB_CACHE_ADD), 0);
+
+  Close();
+}
+
 }  // namespace ROCKSDB_NAMESPACE
 
 int main(int argc, char** argv) {

db/compaction/compaction_iterator.cc:

@@ -1284,7 +1284,10 @@ std::unique_ptr<BlobFetcher> CompactionIterator::CreateBlobFetcherIfNeeded(
     return nullptr;
   }
 
-  return std::unique_ptr<BlobFetcher>(new BlobFetcher(version, ReadOptions()));
+  ReadOptions read_options;
+  read_options.fill_cache = false;
+
+  return std::unique_ptr<BlobFetcher>(new BlobFetcher(version, read_options));
 }
 
 std::unique_ptr<PrefetchBufferCollection>
