Eliminate unnecessary (slow) block cache Ref()ing in MultiGet (#9899)
Summary: When MultiGet() determines that multiple query keys can be served by examining the same data block in block cache (one Lookup()), each PinnableSlice referring to data in that data block needs to hold on to the block in cache so that it can be released at an arbitrary time by the API user. Historically this was accomplished with extra calls to Ref() on the Handle from Lookup(), with each PinnableSlice cleanup calling Release() on the Handle, but this creates extra contention on the block cache for the extra Ref() and Release() calls, especially because they hit the same cache shard repeatedly. In the case of merge operands (possibly more cases?), the problem was compounded by doing an extra Ref() + eventual Release() for each merge operand for a key reusing a block (which could be the same key!), rather than one Ref() per key. (Note: the non-shared case with `biter` was already one per key.)

This change optimizes MultiGet not to rely on these extra, contentious Ref() + Release() calls. Instead, in the shared block case, the cache Release() cleanup is wrapped in a refcounted object referenced by the PinnableSlices, so that the cache entry is Release()d only after the last wrapped reference is released. Relaxed atomic refcounts should be much faster than mutex-guarded Ref() and Release(), and much less prone to a performance cliff when MultiGet() does a lot of block sharing. Note that I did not use std::shared_ptr, because that would require an extra indirection object (shared_ptr itself new/delete) in order to associate a ref increment/decrement with a Cleanable cleanup entry. (If I assumed it was the size of two pointers, I could do some hackery to make it work without the extra indirection, but that's too fragile.)

Some details:
* Fixed (removed) extra block cache tracing entries in cases of cache entry reuse in MultiGet, but it's likely that in some other cases traces are missing (XXX comment inserted)
* Moved existing implementations for cleanable.h from iterator.cc to new cleanable.cc
* Improved API comments on Cleanable
* Added a public SharedCleanablePtr class to cleanable.h in case others could benefit from the same pattern (potentially many Cleanables and/or smart pointers referencing a shared Cleanable)
* Added a typedef for MultiGetContext::Mask
* Some variable renaming for clarity

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9899

Test Plan: Added unit tests for SharedCleanablePtr.

Greatly enhanced the ability of existing tests to detect cache use-after-free:
* Release PinnableSlices from MultiGet as they are read rather than in bulk (in the db_test_util wrapper).
* In ASAN builds, default to using a trivially small LRUCache for block_cache so that entries are immediately erased when unreferenced. (Updated two tests that depend on caching.) New ASAN test suite running time seems OK to me.

If I introduce a bug into my implementation where we skip the shared cleanups on block reuse, ASAN detects the bug in `db_basic_test *MultiGet*`. If I remove either of the above testing enhancements, the bug is not detected.

Consider for follow-up work: manipulate or randomize the ordering of PinnableSlice use and release from the MultiGet db_test_util wrapper. But in typical cases, natural ordering gives pretty good functional coverage.

Performance test: In the extreme (but possible) case of MultiGetting the same or adjacent keys in a batch, throughput can improve by an order of magnitude.
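A minimal sketch of the block-sharing cleanup pattern described in the summary above (function and variable names such as `ShareBlockAcrossKeys` and `ReleaseCacheEntry` are illustrative, not the actual MultiGet code; only SharedCleanablePtr, Cleanable, Cache, and PinnableSlice are real RocksDB types):

```cpp
#include <vector>

#include "rocksdb/cache.h"
#include "rocksdb/cleanable.h"
#include "rocksdb/slice.h"

namespace {

// Cleanup function releasing a block cache handle (hypothetical helper).
void ReleaseCacheEntry(void* cache, void* handle) {
  static_cast<rocksdb::Cache*>(cache)->Release(
      static_cast<rocksdb::Cache::Handle*>(handle));
}

// One Release() of the cached block, shared by every PinnableSlice whose
// value lives in that block.
void ShareBlockAcrossKeys(rocksdb::Cache* cache,
                          rocksdb::Cache::Handle* handle,
                          const std::vector<rocksdb::PinnableSlice*>& values) {
  rocksdb::SharedCleanablePtr shared;
  shared.Allocate();
  // The shared Cleanable owns the single Release() of the cache handle.
  shared->RegisterCleanup(&ReleaseCacheEntry, cache, handle);
  // Each PinnableSlice (itself a Cleanable) takes a relaxed-atomic reference
  // on the shared object instead of Ref()ing the handle under the shard mutex.
  for (rocksdb::PinnableSlice* ps : values) {
    shared.RegisterCopyWith(ps);
  }
  // When `shared` goes out of scope, its own reference is dropped; the cache
  // handle is released only after the last PinnableSlice is reset/destroyed.
}

}  // namespace
```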
`./db_bench -benchmarks=multireadrandom -db=/dev/shm/testdb -readonly -num=5 -duration=10 -threads=20 -multiread_batched -batch_size=200`

* Before, ops/sec, num=5: 1,384,394
* Before, ops/sec, num=500: 6,423,720
* After, ops/sec, num=500: 10,658,794
* After, ops/sec, num=5: 16,027,257

Also note that previously, with high parallelism, having query keys concentrated in a single block was worse than spreading them out a bit. Now concentrated in a single block is faster than spread out, which is hopefully consistent with natural expectation.

Random query performance: with num=1000000, over 999 x 10s runs running before & after simultaneously (each -threads=12):

* Before: multireadrandom [AVG 999 runs] : 1088699 (± 7344) ops/sec; 120.4 (± 0.8) MB/sec
* After: multireadrandom [AVG 999 runs] : 1090402 (± 7230) ops/sec; 120.6 (± 0.8) MB/sec

Possibly better, possibly in the noise.

Reviewed By: anand1976
Differential Revision: D35907003
Pulled By: pdillinger
fbshipit-source-id: bbd244d703649a8ca12d476f2d03853ed9d1a17e
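The ASAN-focused testing change described in the test plan above (a trivially small block cache so blocks are freed as soon as they become unreferenced) could be configured roughly like this; a sketch with an assumed capacity value, not the actual db_test_util change:

```cpp
#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: a block cache so small that every block is evicted as soon as its
// last reference is released, so any PinnableSlice that outlives its cleanup
// points at freed memory and ASAN can flag the use-after-free.
rocksdb::Options MakeTinyBlockCacheOptions() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = rocksdb::NewLRUCache(/*capacity=*/1 /* byte */);
  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```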
parent ce2d8a4239
commit 9d0cae7104
@@ -0,0 +1,180 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "rocksdb/cleanable.h"

#include <atomic>
#include <cassert>
#include <utility>

namespace ROCKSDB_NAMESPACE {

Cleanable::Cleanable() {
  cleanup_.function = nullptr;
  cleanup_.next = nullptr;
}

Cleanable::~Cleanable() { DoCleanup(); }

Cleanable::Cleanable(Cleanable&& other) noexcept { *this = std::move(other); }

Cleanable& Cleanable::operator=(Cleanable&& other) noexcept {
  assert(this != &other);  // https://stackoverflow.com/a/9322542/454544
  cleanup_ = other.cleanup_;
  other.cleanup_.function = nullptr;
  other.cleanup_.next = nullptr;
  return *this;
}

// If the entire linked list were on the heap, we could simply attach one
// linked list to another. However, the head is an embedded object, to avoid
// the cost of creating objects for the most common use case where the
// Cleanable has only one Cleanup to do. We could put everything on the heap
// if benchmarks show no negative impact on performance.
// Also, we need to iterate over the linked list since there is no pointer to
// the tail. We could add a tail pointer, but maintaining it might negatively
// impact performance for the common case of one cleanup, where a tail pointer
// is not needed. Again, benchmarks could clarify that.
// Even without a tail pointer we could iterate over the list, find the tail,
// and have only that node updated without the need to insert the Cleanups one
// by one. This however would be redundant when the source Cleanable has one
// or a few Cleanups, which is the case most of the time.
// TODO(myabandeh): if the list is too long we should maintain a tail pointer
// and have the entire list (minus the head that has to be inserted separately)
// merged with the target linked list at once.
void Cleanable::DelegateCleanupsTo(Cleanable* other) {
  assert(other != nullptr);
  if (cleanup_.function == nullptr) {
    return;
  }
  // The embedded head cannot be linked into other's list, so copy its
  // contents into a newly registered cleanup...
  Cleanup* c = &cleanup_;
  other->RegisterCleanup(c->function, c->arg1, c->arg2);
  c = c->next;
  // ...then transfer ownership of the remaining heap-allocated nodes.
  while (c != nullptr) {
    Cleanup* next = c->next;
    other->RegisterCleanup(c);
    c = next;
  }
  cleanup_.function = nullptr;
  cleanup_.next = nullptr;
}

void Cleanable::RegisterCleanup(Cleanable::Cleanup* c) {
  assert(c != nullptr);
  if (cleanup_.function == nullptr) {
    cleanup_.function = c->function;
    cleanup_.arg1 = c->arg1;
    cleanup_.arg2 = c->arg2;
    delete c;
  } else {
    c->next = cleanup_.next;
    cleanup_.next = c;
  }
}

void Cleanable::RegisterCleanup(CleanupFunction func, void* arg1, void* arg2) {
  assert(func != nullptr);
  Cleanup* c;
  if (cleanup_.function == nullptr) {
    c = &cleanup_;
  } else {
    c = new Cleanup;
    c->next = cleanup_.next;
    cleanup_.next = c;
  }
  c->function = func;
  c->arg1 = arg1;
  c->arg2 = arg2;
}

struct SharedCleanablePtr::Impl : public Cleanable {
  std::atomic<unsigned> ref_count{1};  // Start with 1 ref
  void Ref() { ref_count.fetch_add(1, std::memory_order_relaxed); }
  void Unref() {
    if (ref_count.fetch_sub(1, std::memory_order_relaxed) == 1) {
      // Last ref
      delete this;
    }
  }
  static void UnrefWrapper(void* arg1, void* /*arg2*/) {
    static_cast<SharedCleanablePtr::Impl*>(arg1)->Unref();
  }
};

void SharedCleanablePtr::Reset() {
  if (ptr_) {
    ptr_->Unref();
    ptr_ = nullptr;
  }
}

void SharedCleanablePtr::Allocate() {
  Reset();
  ptr_ = new Impl();
}

SharedCleanablePtr::SharedCleanablePtr(const SharedCleanablePtr& from) {
  *this = from;
}

SharedCleanablePtr::SharedCleanablePtr(SharedCleanablePtr&& from) noexcept {
  *this = std::move(from);
}

SharedCleanablePtr& SharedCleanablePtr::operator=(
    const SharedCleanablePtr& from) {
  if (this != &from) {
    Reset();
    ptr_ = from.ptr_;
    if (ptr_) {
      ptr_->Ref();
    }
  }
  return *this;
}

SharedCleanablePtr& SharedCleanablePtr::operator=(
    SharedCleanablePtr&& from) noexcept {
  assert(this != &from);  // https://stackoverflow.com/a/9322542/454544
  Reset();
  ptr_ = from.ptr_;
  from.ptr_ = nullptr;
  return *this;
}

SharedCleanablePtr::~SharedCleanablePtr() { Reset(); }

Cleanable& SharedCleanablePtr::operator*() {
  return *ptr_;  // implicit upcast
}

Cleanable* SharedCleanablePtr::operator->() {
  return ptr_;  // implicit upcast
}

Cleanable* SharedCleanablePtr::get() {
  return ptr_;  // implicit upcast
}

void SharedCleanablePtr::RegisterCopyWith(Cleanable* target) {
  if (ptr_) {
    // "Virtual" copy of the pointer
    ptr_->Ref();
    target->RegisterCleanup(&Impl::UnrefWrapper, ptr_, nullptr);
  }
}

void SharedCleanablePtr::MoveAsCleanupTo(Cleanable* target) {
  if (ptr_) {
    // "Virtual" move of the pointer
    target->RegisterCleanup(&Impl::UnrefWrapper, ptr_, nullptr);
    ptr_ = nullptr;
  }
}

}  // namespace ROCKSDB_NAMESPACE
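For reference, a small illustrative usage of the SharedCleanablePtr class defined above (not part of the file itself): several Cleanables end up sharing one cleanup, which runs exactly once, only after the last holder is cleaned up.

```cpp
#include <cstdio>

#include "rocksdb/cleanable.h"

int main() {
  using ROCKSDB_NAMESPACE::Cleanable;
  using ROCKSDB_NAMESPACE::SharedCleanablePtr;

  SharedCleanablePtr shared;
  shared.Allocate();
  // The shared Cleanable carries the cleanup we want to run exactly once.
  shared->RegisterCleanup(
      [](void* /*arg1*/, void* /*arg2*/) { std::printf("shared cleanup\n"); },
      nullptr, nullptr);

  {
    Cleanable a;
    Cleanable b;
    shared.RegisterCopyWith(&a);  // a holds one reference
    shared.MoveAsCleanupTo(&b);   // b takes over shared's own reference
    // Neither a nor b has been cleaned up yet, so nothing is printed here.
  }
  // Both a and b destroyed: the last reference is released and
  // "shared cleanup" is printed exactly once.
  return 0;
}
```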