rocksdb/utilities/transactions/optimistic_transaction_db_i...


// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#include "utilities/transactions/optimistic_transaction_db_impl.h"
#include <string>
#include <vector>
#include "db/db_impl/db_impl.h"
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/utilities/optimistic_transaction_db.h"
#include "utilities/transactions/optimistic_transaction.h"
namespace ROCKSDB_NAMESPACE {
Improve memory efficiency of many OptimisticTransactionDBs (#11439) (blame annotation, 2 years ago)

Summary: Currently it's easy to use a ton of memory with many small OptimisticTransactionDB instances, because each one by default allocates a million mutexes (40 bytes each on my compiler) for validating transactions. It even puts a lot of pressure on the allocator by allocating each one individually!

In this change:
* Create a new object and option that enables sharing these buckets of mutexes between instances. This is generally good for load balancing potential contention as various DBs become hotter or colder with txn writes. About the only cases where this sharing wouldn't make sense (e.g. each DB usually written by one thread) are cases that would be better off with OccValidationPolicy::kValidateSerial, which doesn't use the buckets anyway.
* Allocate the mutexes in a contiguous array, for efficiency.
* Add an option to ensure the mutexes are cache-aligned. In several other places we use cache-aligned mutexes, but OptimisticTransactionDB historically does not. It should be a space-time trade-off the user can choose.
* Provide some visibility into the memory used by the mutex buckets with an ApproximateMemoryUsage() function (also used in unit testing).
* Share code with other users of "striped" mutexes, with appropriate refactoring for customization & efficiency (e.g. using FastRange instead of modulus).

(A hedged usage sketch of the shared buckets follows MakeSharedOccLockBuckets() below.)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11439

Test Plan: unit tests added. Ran sized-up versions of stress test in unit test, including a before-and-after performance test showing no consistent difference. (NOTE: OptimisticTransactionDB not currently covered by db_stress!)

Reviewed By: ltamasi
Differential Revision: D45796393
Pulled By: pdillinger
fbshipit-source-id: ae2b3a26ad91ceeec15debcdc63ff48df6736a54
std::shared_ptr<OccLockBuckets> MakeSharedOccLockBuckets(size_t bucket_count,
bool cache_aligned) {
if (cache_aligned) {
return std::make_shared<OccLockBucketsImpl<true>>(bucket_count);
} else {
return std::make_shared<OccLockBucketsImpl<false>>(bucket_count);
}
}
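For context, here is a minimal usage sketch of the bucket sharing described in the blame note above. It is not part of this file, and it assumes that OptimisticTransactionDBOptions exposes a shared_lock_buckets field for the shared OccLockBuckets object introduced by #11439; the DB paths and sizes are illustrative only.

// Hedged sketch: two OptimisticTransactionDBs validating against one shared,
// cache-aligned array of lock buckets (shared_lock_buckets is assumed to be
// the option added by #11439; paths are made up).
void OpenTwoDbsWithSharedOccBuckets() {
  std::shared_ptr<OccLockBuckets> buckets = MakeSharedOccLockBuckets(
      /*bucket_count=*/1 << 20, /*cache_aligned=*/true);

  OptimisticTransactionDBOptions occ_options;
  occ_options.shared_lock_buckets = buckets;  // assumed field name

  Options options;
  options.create_if_missing = true;
  DBOptions db_options(options);
  std::vector<ColumnFamilyDescriptor> column_families = {
      ColumnFamilyDescriptor(kDefaultColumnFamilyName,
                             ColumnFamilyOptions(options))};
  std::vector<ColumnFamilyHandle*> handles;

  OptimisticTransactionDB* db1 = nullptr;
  Status s = OptimisticTransactionDB::Open(db_options, occ_options,
                                           "/tmp/occ_db1", column_families,
                                           &handles, &db1);
  assert(s.ok());
  delete handles[0];  // DBImpl keeps its own reference to the default CF
  handles.clear();

  OptimisticTransactionDB* db2 = nullptr;
  s = OptimisticTransactionDB::Open(db_options, occ_options, "/tmp/occ_db2",
                                    column_families, &handles, &db2);
  assert(s.ok());
  delete handles[0];

  // ... both DBs now validate writes against the same mutex buckets ...
  delete db1;
  delete db2;
}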
Transaction* OptimisticTransactionDBImpl::BeginTransaction(
const WriteOptions& write_options,
const OptimisticTransactionOptions& txn_options, Transaction* old_txn) {
if (old_txn != nullptr) {
ReinitializeTransaction(old_txn, write_options, txn_options);
return old_txn;
} else {
return new OptimisticTransaction(this, write_options, txn_options);
}
}
Status OptimisticTransactionDB::Open(const Options& options,
const std::string& dbname,
OptimisticTransactionDB** dbptr) {
DBOptions db_options(options);
ColumnFamilyOptions cf_options(options);
std::vector<ColumnFamilyDescriptor> column_families;
column_families.push_back(
ColumnFamilyDescriptor(kDefaultColumnFamilyName, cf_options));
std::vector<ColumnFamilyHandle*> handles;
Status s = Open(db_options, dbname, column_families, &handles, dbptr);
if (s.ok()) {
assert(handles.size() == 1);
    // We can delete the handle since DBImpl always holds a reference to the
    // default column family
delete handles[0];
}
return s;
}
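As an aside, here is a hedged end-to-end sketch of this convenience overload (not part of this file; the DB path, key, and value are made up): open a DB, run one optimistic transaction, and clean up.

// Hedged usage sketch for the single-Options Open() above.
void BasicOptimisticTransactionExample() {
  Options options;
  options.create_if_missing = true;
  OptimisticTransactionDB* txn_db = nullptr;
  Status s =
      OptimisticTransactionDB::Open(options, "/tmp/occ_example", &txn_db);
  assert(s.ok());

  Transaction* txn = txn_db->BeginTransaction(WriteOptions());
  txn->Put("key", "value");
  // For optimistic transactions, conflict validation happens at Commit();
  // a conflicting concurrent write typically surfaces as Status::Busy().
  s = txn->Commit();
  assert(s.ok() || s.IsBusy());

  delete txn;
  delete txn_db;
}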
Status OptimisticTransactionDB::Open(
const DBOptions& db_options, const std::string& dbname,
const std::vector<ColumnFamilyDescriptor>& column_families,
std::vector<ColumnFamilyHandle*>* handles,
OptimisticTransactionDB** dbptr) {
return OptimisticTransactionDB::Open(db_options,
OptimisticTransactionDBOptions(), dbname,
column_families, handles, dbptr);
}
Status OptimisticTransactionDB::Open(
const DBOptions& db_options,
const OptimisticTransactionDBOptions& occ_options,
const std::string& dbname,
const std::vector<ColumnFamilyDescriptor>& column_families,
std::vector<ColumnFamilyHandle*>* handles,
OptimisticTransactionDB** dbptr) {
Status s;
DB* db;
std::vector<ColumnFamilyDescriptor> column_families_copy = column_families;
// Enable MemTable History if not already enabled
for (auto& column_family : column_families_copy) {
ColumnFamilyOptions* options = &column_family.options;
Refactor trimming logic for immutable memtables (#5022) (blame annotation, 5 years ago)

Summary: MyRocks currently sets `max_write_buffer_number_to_maintain` in order to maintain enough history for transaction conflict checking. The effectiveness of this approach depends on the size of memtables. When memtables are small, it may not keep enough history; when memtables are large, this may consume too much memory. We are proposing a new way to configure memtable list history: by limiting the memory usage of immutable memtables. The new option is `max_write_buffer_size_to_maintain` and it will take precedence over the old `max_write_buffer_number_to_maintain` if they are both set to non-zero values. The new option accounts for the total memory usage of flushed immutable memtables and the mutable memtable. When the total usage exceeds the limit, RocksDB may start dropping immutable memtables (which is also called trimming history), starting from the oldest one.

The semantics of the old option actually work both as an upper bound and a lower bound: history trimming will start if the number of immutable memtables exceeds the limit, but it will never go below (limit-1) due to history trimming. In order to mimic that behavior with the new option, history trimming will stop if dropping the next immutable memtable would cause the total memory usage to go below the size limit. For example, assuming the size limit is set to 64MB and there are 3 immutable memtables with sizes of 20, 30, 30 (MB): although the total memory usage is 80MB > 64MB, dropping the oldest memtable would reduce the memory usage to 60MB < 64MB, so in this case no memtable will be dropped. (A short sketch of setting the new option follows the enclosing Open() overload below.)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5022
Differential Revision: D14394062
Pulled By: miasantreble
fbshipit-source-id: 60457a509c6af89d0993f988c9b5c2aa9e45f5c5
if (options->max_write_buffer_size_to_maintain == 0 &&
options->max_write_buffer_number_to_maintain == 0) {
// Setting to -1 will set the History size to
// max_write_buffer_number * write_buffer_size.
options->max_write_buffer_size_to_maintain = -1;
}
}
s = DB::Open(db_options, dbname, column_families_copy, handles, &db);
if (s.ok()) {
*dbptr = new OptimisticTransactionDBImpl(db, occ_options);
}
return s;
}
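Relatedly, here is a hedged sketch of setting the history size directly, as the blame note above describes; it is not part of this file, and the 64 MB value is just the illustrative figure from the commit message.

// Hedged sketch: cap immutable-memtable history at roughly 64 MB for conflict
// checking. Both fields below live in AdvancedColumnFamilyOptions.
ColumnFamilyOptions MakeCfOptionsWithSizeCappedHistory() {
  ColumnFamilyOptions cf_options;
  cf_options.max_write_buffer_size_to_maintain = 64 * 1024 * 1024;  // bytes
  cf_options.max_write_buffer_number_to_maintain = 0;  // superseded by size cap
  // Because max_write_buffer_size_to_maintain != 0, the loop in Open() above
  // leaves this value alone instead of applying the -1 default.
  return cf_options;
}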
void OptimisticTransactionDBImpl::ReinitializeTransaction(
Transaction* txn, const WriteOptions& write_options,
const OptimisticTransactionOptions& txn_options) {
assert(dynamic_cast<OptimisticTransaction*>(txn) != nullptr);
auto txn_impl = reinterpret_cast<OptimisticTransaction*>(txn);
txn_impl->Reinitialize(this, write_options, txn_options);
}
} // namespace ROCKSDB_NAMESPACE