HyperClockCache support for SecondaryCache, with refactoring (#11301)
Summary:
Internally refactors SecondaryCache integration out of LRUCache specifically and into a wrapper/adapter class that works with various Cache implementations. Notably, this relies on separating the notion of async lookup handles from other cache handles, so that HyperClockCache doesn't have to deal with allocating handles from the hash table for lookups that might fail anyway, and that might be for the same key with no support for coalescing them. (LRUCache's hash table can incorporate previously allocated handles thanks to its pointer indirection.) Specifically, I'm worried about the case in which hundreds of threads try to access the same block and probing in the hash table degrades to a linear search over the pile of entries with the same key.
This change is a big step in the direction of supporting stacked SecondaryCaches, but there are obstacles to completing that. In particular, there is no SecondaryCache hook for evictions to pass from one to the next. It has been proposed that evictions be transmitted simply as the persisted data (as in SaveToCallback), but given the current structure provided by the CacheItemHelpers, that would require an extra copy of the block data, because there's intentionally no way to ask for a contiguous Slice of the data (to allow flexibility in storage). `AsyncLookupHandle` and the re-worked `WaitAll()` should be essentially prepared for stacked SecondaryCaches, but several "TODO with stacked secondaries" issues remain in various places.
It could be argued that the stacking should instead be done with a SecondaryCache adapter that wraps two (or more) SecondaryCaches, but at least with the current API that would require an extra heap allocation on each SecondaryCache Lookup for a wrapper SecondaryCacheResultHandle that can transfer a Lookup between secondaries. We could also consider trying to unify the Cache and SecondaryCache APIs, though that might be difficult if `AsyncLookupHandle` is kept a fixed struct.
## cache.h (public API)
Moves the `secondary_cache` option from LRUCacheOptions to ShardedCacheOptions so that it also applies to HyperClockCache.
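For illustration, a minimal configuration sketch. The `CompressedSecondaryCacheOptions` / `NewCompressedSecondaryCache()` factory, `HyperClockCacheOptions`, and `MakeSharedCache()` are the existing public APIs as I understand them, and the capacities are arbitrary; the only point being illustrated is that `secondary_cache` is now a ShardedCacheOptions field, so it is available on HyperClockCacheOptions too.
```
#include <memory>

#include "rocksdb/cache.h"

// Sketch: attach a (compressed) secondary cache to a HyperClockCache.
// Capacities and the entry-charge estimate are illustrative only.
std::shared_ptr<rocksdb::Cache> MakeHyperClockCacheWithSecondary() {
  rocksdb::CompressedSecondaryCacheOptions sec_opts;
  sec_opts.capacity = 256 << 20;  // 256 MB compressed tier

  rocksdb::HyperClockCacheOptions opts(/*capacity=*/1 << 30,
                                       /*estimated_entry_charge=*/8 * 1024);
  // Previously this field existed only on LRUCacheOptions; it now lives on
  // ShardedCacheOptions, which HyperClockCacheOptions inherits from.
  opts.secondary_cache = rocksdb::NewCompressedSecondaryCache(sec_opts);
  return opts.MakeSharedCache();
}
```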
## advanced_cache.h (advanced public API)
* Add `Cache::CreateStandalone()` so that the SecondaryCache support wrapper can use it.
* Add `SetEvictionCallback()` / `eviction_callback_` so that the SecondaryCache support wrapper can use it. Only a single callback is supported for efficiency. If there is ever a need for more than one, hopefully that can be handled with a broadcast callback wrapper.
These are essentially the two "extra" pieces of `Cache` needed to pull SecondaryCache-specific support out of the `Cache` implementations. I think it's a good trade-off, as these are reasonable, limited, and reusable "cut points" into the `Cache` implementations (sketched just below).
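To make these cut points concrete, here is roughly how a support wrapper uses them. The callback signature and the `CreateStandalone()` parameter list mirror their use in secondary_cache_adapter.cc below; the surrounding function and its arguments are placeholders for illustration.
```
#include "rocksdb/advanced_cache.h"

// Sketch of a wrapper using the two new hooks. `key`, `obj`, `helper`, and
// `charge` stand in for a real entry being handled.
rocksdb::Cache::Handle* UseNewHooks(rocksdb::Cache* cache,
                                    const rocksdb::Slice& key,
                                    rocksdb::Cache::ObjectPtr obj,
                                    const rocksdb::Cache::CacheItemHelper* helper,
                                    size_t charge) {
  // Single eviction callback; returning false means the callback never takes
  // ownership of the evicted object.
  cache->SetEvictionCallback(
      [](const rocksdb::Slice& /*evicted_key*/, rocksdb::Cache::Handle* /*h*/) {
        // e.g. spill secondary-cache-compatible entries here
        return false;
      });

  // Materialize an entry as a detached ("standalone") handle, allowed even
  // when the cache is full, so a hit from elsewhere is never wasted.
  return cache->CreateStandalone(key, obj, helper, charge,
                                 /*allow_uncharged=*/true);
}
```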
* Remove async capability from standard `Lookup()` (getting rid of awkward restrictions on pending Handles) and add `AsyncLookupHandle` and `StartAsyncLookup()`. As noted in the comments, the full struct of `AsyncLookupHandle` is exposed so that it can be stack allocated, for efficiency, though more data is being copied around than before, which could impact performance. (Lookup info -> AsyncLookupHandle -> Handle vs. Lookup info -> Handle)
I could foresee a future in which a Cache internally saves a pointer to the AsyncLookupHandle, which means it's dangerous to allow it to be copyable or even movable. It also means it's not compatible with std::vector (which I don't like requiring as an API parameter anyway), so `WaitAll()` accepts any contiguous array of AsyncLookupHandles. I believe this is best for common-case efficiency, while behaving well in other cases too. For example, `WaitAll()` has no effect on default-constructed AsyncLookupHandles, which look like a completed cache miss.
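For reference, a rough usage sketch of the new async interface; the field and method names match how `AsyncLookupHandle` is used elsewhere in this change, while the fixed batch size, helper, and create context are placeholders.
```
#include <array>

#include "rocksdb/advanced_cache.h"

// Sketch: batched lookups with stack-allocated AsyncLookupHandles, similar in
// spirit to the MultiGet path. Handles that were never started (or already
// resolved) look like completed misses, so WaitAll() over the whole array is
// safe.
void LookupBatch(rocksdb::Cache& cache,
                 const std::array<rocksdb::Slice, 8>& keys,
                 const rocksdb::Cache::CacheItemHelper* helper,
                 rocksdb::Cache::CreateContext* create_ctx) {
  std::array<rocksdb::Cache::AsyncLookupHandle, 8> handles;
  for (size_t i = 0; i < keys.size(); ++i) {
    handles[i].key = keys[i];
    handles[i].helper = helper;
    handles[i].create_context = create_ctx;
    handles[i].priority = rocksdb::Cache::Priority::LOW;
    cache.StartAsyncLookup(handles[i]);
  }
  // Waits for any lookups still pending (e.g. on a secondary cache).
  cache.WaitAll(handles.data(), handles.size());
  for (auto& h : handles) {
    if (rocksdb::Cache::Handle* ch = h.Result()) {
      // ... use cache.Value(ch) ...
      cache.Release(ch);
    }
  }
}
```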
## cacheable_entry.h
A couple of functions are obsolete because Cache::Handle can no longer be pending.
## cache.cc
Provides default implementations for new or revamped Cache functions, especially appropriate for non-blocking caches.
## secondary_cache_adapter.{h,cc}
The full details of the Cache wrapper adding SecondaryCache support. Essentially replicates the SecondaryCache handling that was in LRUCache, but obviously refactored. There is a bit of logic duplication, where Lookup() is essentially a manually optimized version of StartAsyncLookup() and Wait(), but it's roughly a dozen lines of code.
## sharded_cache.h, typed_cache.h, charged_cache.{h,cc}, sim_cache.cc
Simply updated for Cache API changes.
## lru_cache.{h,cc}
Carefully remove SecondaryCache logic, implement `CreateStandalone` and eviction handler functionality.
## clock_cache.{h,cc}
Expose existing `CreateStandalone` functionality, add eviction handler functionality. Light refactoring.
## block_based_table_reader*
Mostly re-worked the only usage of async Lookup, which is in BlockBasedTable::MultiGet. Used arrays in place of autovector in some places for efficiency. Simplified some logic by not trying to process some cache results before they're all ready.
Created new function `BlockBasedTable::GetCachePriority()` to reduce some pre-existing code duplication (and avoid making it worse).
Fixed at least one small bug from the prior confusing mixture of async and sync Lookups: in MaybeReadBlockAndLoadToCache() (called by RetrieveBlock(), called by MultiGet() with wait=false), is_cache_hit for the block_cache_tracer entry would not be set to true if the handle was still pending after Lookup and before Wait.
## Intended follow-up work
* Figure out if there are any missing stats or block_cache_tracer work in refactored BlockBasedTable::MultiGet
* Stacked secondary caches (see above discussion)
* See if we can make up for the small MultiGet performance regression.
* Study more performance with SecondaryCache
* Items evicted from an over-full LRUCache in Release were not being demoted to SecondaryCache, and still aren't, to minimize unit test churn. Ideally they would be demoted, but it's an exceptional case, so not a big deal.
* Use CreateStandalone for cache reservations (save unnecessary hash table operations). Not a big deal, but worthy cleanup.
* Somehow I got the contract for SecondaryCache::Insert wrong in #10945. (It doesn't take ownership!) That API comment needs to be fixed, but I didn't want to mingle that change in here.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11301
Test Plan:
## Unit tests
Generally updated to include HCC in SecondaryCache tests, though HyperClockCache has some different, less strict behaviors that lead to some tests not really being set up to work with it. Some of the tests remain disabled with it, but I think we have good coverage without them.
## Crash/stress test
Updated to use the new combination.
## Performance
First, let's check for regression on caches without secondary cache configured. Adding support for the eviction callback is likely to have a tiny effect, but it shouldn't be worrisome. LRUCache could benefit slightly from less logic around SecondaryCache handling. We can test with cache_bench default settings, built with DEBUG_LEVEL=0 and PORTABLE=0.
```
(while :; do base/cache_bench --cache_type=hyper_clock_cache | grep Rough; done) | awk '{ sum += $9; count++; print $0; print "Average: " int(sum / count) }'
```
**Before** this and #11299 (which could also have a small effect), running for about an hour, before & after running concurrently for each cache type:
HyperClockCache: 3168662 (average parallel ops/sec)
LRUCache: 2940127
**After** this and #11299, running for about an hour:
HyperClockCache: 3164862 (average parallel ops/sec) (0.12% slower)
LRUCache: 2940928 (0.03% faster)
This is an acceptable difference IMHO.
Next, let's consider essentially the worst case of new CPU overhead affecting overall performance: MultiGet uses the async lookup interface regardless of whether SecondaryCache or folly is used. We can configure a benchmark where all block cache queries are for data blocks and all are hits.
Create DB and test (before and after tests running simultaneously):
```
TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=30000000 -disable_wal=1 -bloom_bits=16
TEST_TMPDIR=/dev/shm base/db_bench -benchmarks=multireadrandom[-X30] -readonly -multiread_batched -batch_size=32 -num=30000000 -bloom_bits=16 -cache_size=6789000000 -duration 20 -threads=16
```
**Before**:
multireadrandom [AVG 30 runs] : 3444202 (± 57049) ops/sec; 240.9 (± 4.0) MB/sec
multireadrandom [MEDIAN 30 runs] : 3514443 ops/sec; 245.8 MB/sec
**After**:
multireadrandom [AVG 30 runs] : 3291022 (± 58851) ops/sec; 230.2 (± 4.1) MB/sec
multireadrandom [MEDIAN 30 runs] : 3366179 ops/sec; 235.4 MB/sec
So that's roughly a 3% regression, on kind of a *worst case* test of MultiGet CPU. Similar story with HyperClockCache:
**Before**:
multireadrandom [AVG 30 runs] : 3933777 (± 41840) ops/sec; 275.1 (± 2.9) MB/sec
multireadrandom [MEDIAN 30 runs] : 3970667 ops/sec; 277.7 MB/sec
**After**:
multireadrandom [AVG 30 runs] : 3755338 (± 30391) ops/sec; 262.6 (± 2.1) MB/sec
multireadrandom [MEDIAN 30 runs] : 3785696 ops/sec; 264.8 MB/sec
Roughly a 4-5% regression. Not ideal, but not the whole story, fortunately.
Let's also look at Get() in db_bench:
```
TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=readrandom[-X30] -readonly -num=30000000 -bloom_bits=16 -cache_size=6789000000 -duration 20 -threads=16
```
**Before**:
readrandom [AVG 30 runs] : 2198685 (± 13412) ops/sec; 153.8 (± 0.9) MB/sec
readrandom [MEDIAN 30 runs] : 2209498 ops/sec; 154.5 MB/sec
**After**:
readrandom [AVG 30 runs] : 2292814 (± 43508) ops/sec; 160.3 (± 3.0) MB/sec
readrandom [MEDIAN 30 runs] : 2365181 ops/sec; 165.4 MB/sec
That's showing roughly a 4% improvement, perhaps because of the secondary cache code that is no longer part of LRUCache. But weirdly, HyperClockCache is also showing 2-3% improvement:
**Before**:
readrandom [AVG 30 runs] : 2272333 (± 9992) ops/sec; 158.9 (± 0.7) MB/sec
readrandom [MEDIAN 30 runs] : 2273239 ops/sec; 159.0 MB/sec
**After**:
readrandom [AVG 30 runs] : 2332407 (± 11252) ops/sec; 163.1 (± 0.8) MB/sec
readrandom [MEDIAN 30 runs] : 2335329 ops/sec; 163.3 MB/sec
Reviewed By: ltamasi
Differential Revision: D44177044
Pulled By: pdillinger
fbshipit-source-id: e808e48ff3fe2f792a79841ba617be98e48689f5

//  Copyright (c) Meta Platforms, Inc. and affiliates.
//  This source code is licensed under both the GPLv2 (found in the
//  COPYING file in the root directory) and Apache 2.0 License
//  (found in the LICENSE.Apache file in the root directory).

#include "cache/secondary_cache_adapter.h"

#include "monitoring/perf_context_imp.h"

namespace ROCKSDB_NAMESPACE {

namespace {
// A distinct pointer value for marking "dummy" cache entries
struct Dummy {
  char val[7] = "kDummy";
};
const Dummy kDummy{};
Cache::ObjectPtr const kDummyObj = const_cast<Dummy*>(&kDummy);
}  // namespace

CacheWithSecondaryAdapter::CacheWithSecondaryAdapter(
    std::shared_ptr<Cache> target,
    std::shared_ptr<SecondaryCache> secondary_cache)
    : CacheWrapper(std::move(target)),
      secondary_cache_(std::move(secondary_cache)) {
  target_->SetEvictionCallback([this](const Slice& key, Handle* handle) {
    return EvictionHandler(key, handle);
  });
}

CacheWithSecondaryAdapter::~CacheWithSecondaryAdapter() {
  // `*this` will be destroyed before `*target_`, so we have to prevent
  // use after free
  target_->SetEvictionCallback({});
}

bool CacheWithSecondaryAdapter::EvictionHandler(const Slice& key,
                                                Handle* handle) {
  auto helper = GetCacheItemHelper(handle);
  if (helper->IsSecondaryCacheCompatible()) {
    auto obj = target_->Value(handle);
    // Ignore dummy entry
    if (obj != kDummyObj) {
      // Spill into secondary cache.
      secondary_cache_->Insert(key, obj, helper).PermitUncheckedError();
    }
  }
  // Never takes ownership of obj
  return false;
}

bool CacheWithSecondaryAdapter::ProcessDummyResult(Cache::Handle** handle,
                                                   bool erase) {
  if (*handle && target_->Value(*handle) == kDummyObj) {
    target_->Release(*handle, erase);
    *handle = nullptr;
    return true;
  } else {
    return false;
  }
}

void CacheWithSecondaryAdapter::CleanupCacheObject(
    ObjectPtr obj, const CacheItemHelper* helper) {
  if (helper->del_cb) {
    helper->del_cb(obj, memory_allocator());
  }
}

Cache::Handle* CacheWithSecondaryAdapter::Promote(
    std::unique_ptr<SecondaryCacheResultHandle>&& secondary_handle,
    const Slice& key, const CacheItemHelper* helper, Priority priority,
    Statistics* stats, bool found_dummy_entry, bool kept_in_sec_cache) {
  assert(secondary_handle->IsReady());

  ObjectPtr obj = secondary_handle->Value();
  if (!obj) {
    // Nothing found.
    return nullptr;
  }
  // Found something.
  switch (helper->role) {
    case CacheEntryRole::kFilterBlock:
      RecordTick(stats, SECONDARY_CACHE_FILTER_HITS);
      break;
    case CacheEntryRole::kIndexBlock:
      RecordTick(stats, SECONDARY_CACHE_INDEX_HITS);
      break;
    case CacheEntryRole::kDataBlock:
      RecordTick(stats, SECONDARY_CACHE_DATA_HITS);
      break;
    default:
      break;
  }
  PERF_COUNTER_ADD(secondary_cache_hit_count, 1);
  RecordTick(stats, SECONDARY_CACHE_HITS);

  // Note: SecondaryCache::Size() is really charge (from the CreateCallback)
  size_t charge = secondary_handle->Size();
  Handle* result = nullptr;
  // Insert into primary cache, possibly as a standalone+dummy entries.
  if (secondary_cache_->SupportForceErase() && !found_dummy_entry) {
    // Create standalone and insert dummy
    // Allow standalone to be created even if cache is full, to avoid
    // reading the entry from storage.
    result =
        CreateStandalone(key, obj, helper, charge, /*allow_uncharged*/ true);
    assert(result);
    PERF_COUNTER_ADD(block_cache_standalone_handle_count, 1);

    // Insert dummy to record recent use
    // TODO: try to avoid case where inserting this dummy could overwrite a
    // regular entry
    Status s = Insert(key, kDummyObj, &kNoopCacheItemHelper, /*charge=*/0,
                      /*handle=*/nullptr, priority);
    s.PermitUncheckedError();
    // Nothing to do or clean up on dummy insertion failure
  } else {
    // Insert regular entry into primary cache.
    // Don't allow it to spill into secondary cache again if it was kept there.
    Status s = Insert(
        key, obj, kept_in_sec_cache ? helper->without_secondary_compat : helper,
        charge, &result, priority);
    if (s.ok()) {
      assert(result);
      PERF_COUNTER_ADD(block_cache_real_handle_count, 1);
    } else {
      // Create standalone result instead, even if cache is full, to avoid
      // reading the entry from storage.
      result =
          CreateStandalone(key, obj, helper, charge, /*allow_uncharged*/ true);
      assert(result);
      PERF_COUNTER_ADD(block_cache_standalone_handle_count, 1);
    }
  }
  return result;
}

Cache::Handle* CacheWithSecondaryAdapter::Lookup(const Slice& key,
                                                 const CacheItemHelper* helper,
                                                 CreateContext* create_context,
                                                 Priority priority,
                                                 Statistics* stats) {
  // NOTE: we could just StartAsyncLookup() and Wait(), but this should be a bit
  // more efficient
  Handle* result =
      target_->Lookup(key, helper, create_context, priority, stats);
  bool secondary_compatible = helper && helper->IsSecondaryCacheCompatible();
  bool found_dummy_entry =
      ProcessDummyResult(&result, /*erase=*/secondary_compatible);
  if (!result && secondary_compatible) {
    // Try our secondary cache
    bool kept_in_sec_cache = false;
    std::unique_ptr<SecondaryCacheResultHandle> secondary_handle =
        secondary_cache_->Lookup(key, helper, create_context, /*wait*/ true,
                                 found_dummy_entry, /*out*/ kept_in_sec_cache);
    if (secondary_handle) {
      result = Promote(std::move(secondary_handle), key, helper, priority,
                       stats, found_dummy_entry, kept_in_sec_cache);
    }
  }
  return result;
}

Cache::ObjectPtr CacheWithSecondaryAdapter::Value(Handle* handle) {
  ObjectPtr v = target_->Value(handle);
  // TODO with stacked secondaries: might fail in EvictionHandler
  assert(v != kDummyObj);
  return v;
}

void CacheWithSecondaryAdapter::StartAsyncLookupOnMySecondary(
    AsyncLookupHandle& async_handle) {
  assert(!async_handle.IsPending());
  assert(async_handle.result_handle == nullptr);

  std::unique_ptr<SecondaryCacheResultHandle> secondary_handle =
      secondary_cache_->Lookup(async_handle.key, async_handle.helper,
                               async_handle.create_context, /*wait*/ false,
                               async_handle.found_dummy_entry,
                               /*out*/ async_handle.kept_in_sec_cache);
  if (secondary_handle) {
    // TODO with stacked secondaries: Check & process if already ready?
    async_handle.pending_handle = secondary_handle.release();
    async_handle.pending_cache = secondary_cache_.get();
  }
}

void CacheWithSecondaryAdapter::StartAsyncLookup(
    AsyncLookupHandle& async_handle) {
  target_->StartAsyncLookup(async_handle);
  if (!async_handle.IsPending()) {
    bool secondary_compatible =
        async_handle.helper &&
        async_handle.helper->IsSecondaryCacheCompatible();
    async_handle.found_dummy_entry |= ProcessDummyResult(
        &async_handle.result_handle, /*erase=*/secondary_compatible);

    if (async_handle.Result() == nullptr && secondary_compatible) {
      // Not found and not pending on another secondary cache
      StartAsyncLookupOnMySecondary(async_handle);
    }
  }
}

void CacheWithSecondaryAdapter::WaitAll(AsyncLookupHandle* async_handles,
                                        size_t count) {
  if (count == 0) {
    // Nothing to do
    return;
  }
  // Requests that are pending on *my* secondary cache, at the start of this
  // function
  std::vector<AsyncLookupHandle*> my_pending;
  // Requests that are pending on an "inner" secondary cache (managed somewhere
  // under target_), as of the start of this function
  std::vector<AsyncLookupHandle*> inner_pending;

  // Initial accounting of pending handles, excluding those already handled
  // by "outer" secondary caches. (See cur->pending_cache = nullptr.)
  for (size_t i = 0; i < count; ++i) {
    AsyncLookupHandle* cur = async_handles + i;
    if (cur->pending_cache) {
      assert(cur->IsPending());
      assert(cur->helper);
      assert(cur->helper->IsSecondaryCacheCompatible());
      if (cur->pending_cache == secondary_cache_.get()) {
        my_pending.push_back(cur);
        // Mark as "to be handled by this caller"
        cur->pending_cache = nullptr;
      } else {
        // Remember as potentially needing a lookup in my secondary
        inner_pending.push_back(cur);
      }
    }
  }

  // Wait on inner-most cache lookups first
  // TODO with stacked secondaries: because we are not using proper
  // async/await constructs here yet, there is a false synchronization point
  // here where all the results at one level are needed before initiating
  // any lookups at the next level. Probably not a big deal, but worth noting.
  if (!inner_pending.empty()) {
    target_->WaitAll(async_handles, count);
  }

  // For those that failed to find something, convert to lookup in my
  // secondary cache.
  for (AsyncLookupHandle* cur : inner_pending) {
    if (cur->Result() == nullptr) {
      // Not found, try my secondary
      StartAsyncLookupOnMySecondary(*cur);
      if (cur->IsPending()) {
        assert(cur->pending_cache == secondary_cache_.get());
        my_pending.push_back(cur);
        // Mark as "to be handled by this caller"
        cur->pending_cache = nullptr;
      }
    }
  }

  // Wait on all lookups on my secondary cache
  {
    std::vector<SecondaryCacheResultHandle*> my_secondary_handles;
    for (AsyncLookupHandle* cur : my_pending) {
      my_secondary_handles.push_back(cur->pending_handle);
    }
    secondary_cache_->WaitAll(my_secondary_handles);
  }

  // Process results
  for (AsyncLookupHandle* cur : my_pending) {
    std::unique_ptr<SecondaryCacheResultHandle> secondary_handle(
        cur->pending_handle);
    cur->pending_handle = nullptr;
    cur->result_handle = Promote(
        std::move(secondary_handle), cur->key, cur->helper, cur->priority,
        cur->stats, cur->found_dummy_entry, cur->kept_in_sec_cache);
    assert(cur->pending_cache == nullptr);
  }
}

std::string CacheWithSecondaryAdapter::GetPrintableOptions() const {
  std::string str = target_->GetPrintableOptions();
  str.append(" secondary_cache:\n");
  str.append(secondary_cache_->GetPrintableOptions());
  return str;
}

const char* CacheWithSecondaryAdapter::Name() const {
  // To the user, at least for now, configure the underlying cache with
  // a secondary cache. So we pretend to be that cache
  return target_->Name();
}
}  // namespace ROCKSDB_NAMESPACE