// rocksdb/table/block_based/data_block_hash_index_test.cc
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#include "table/block_based/data_block_hash_index.h"
#include <cstdlib>
#include <string>
#include <unordered_map>
Use only "local" range tombstones during Get (#4449) Summary: Previously, range tombstones were accumulated from every level, which was necessary if a range tombstone in a higher level covered a key in a lower level. However, RangeDelAggregator::AddTombstones's complexity is based on the number of tombstones that are currently stored in it, which is wasteful in the Get case, where we only need to know the highest sequence number of range tombstones that cover the key from higher levels, and compute the highest covering sequence number at the current level. This change introduces this optimization, and removes the use of RangeDelAggregator from the Get path. In the benchmark results, the following command was used to initialize the database: ``` ./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8 ``` ...and the following command was used to measure read throughput: ``` ./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32 ``` The filluniquerandom command was only run once, and the resulting database was used to measure read performance before and after the PR. Both binaries were compiled with `DEBUG_LEVEL=0`. Readrandom results before PR: ``` readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found) ``` Readrandom results after PR: ``` readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found) ``` So it's actually slower right now, but this PR paves the way for future optimizations (see #4493). ---- Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449 Differential Revision: D10370575 Pulled By: abhimadan fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
6 years ago
#include "db/table_properties_collector.h"
#include "rocksdb/slice.h"
#include "table/block_based/block.h"
#include "table/block_based/block_based_table_reader.h"
#include "table/block_based/block_builder.h"
#include "table/get_context.h"
Use only "local" range tombstones during Get (#4449) Summary: Previously, range tombstones were accumulated from every level, which was necessary if a range tombstone in a higher level covered a key in a lower level. However, RangeDelAggregator::AddTombstones's complexity is based on the number of tombstones that are currently stored in it, which is wasteful in the Get case, where we only need to know the highest sequence number of range tombstones that cover the key from higher levels, and compute the highest covering sequence number at the current level. This change introduces this optimization, and removes the use of RangeDelAggregator from the Get path. In the benchmark results, the following command was used to initialize the database: ``` ./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8 ``` ...and the following command was used to measure read throughput: ``` ./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32 ``` The filluniquerandom command was only run once, and the resulting database was used to measure read performance before and after the PR. Both binaries were compiled with `DEBUG_LEVEL=0`. Readrandom results before PR: ``` readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found) ``` Readrandom results after PR: ``` readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found) ``` So it's actually slower right now, but this PR paves the way for future optimizations (see #4493). ---- Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449 Differential Revision: D10370575 Pulled By: abhimadan fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
6 years ago
#include "table/table_builder.h"
#include "test_util/testharness.h"
#include "test_util/testutil.h"
#include "util/random.h"
namespace ROCKSDB_NAMESPACE {
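// Check that looking up `key` leads to `restart_point`. A kCollision result
// counts as success because the reader then falls back to a linear search of
// the restart interval; kNoEntry means the key is definitely absent from the
// hash index.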
bool SearchForOffset(DataBlockHashIndex& index, const char* data,
uint16_t map_offset, const Slice& key,
uint8_t& restart_point) {
uint8_t entry = index.Lookup(data, map_offset, key);
if (entry == kCollision) {
return true;
}
if (entry == kNoEntry) {
return false;
}
return entry == restart_point;
}
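// Build a key whose first 10 bytes come from formatting primary_key and
// secondary_key with "%6d%4d", optionally followed by padding_size random
// bytes.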
std::string GenerateKey(int primary_key, int secondary_key, int padding_size,
Random* rnd) {
char buf[50];
char* p = &buf[0];
snprintf(buf, sizeof(buf), "%6d%4d", primary_key, secondary_key);
std::string k(p);
if (padding_size) {
k += rnd->RandomString(padding_size);
}
return k;
}
// Generate random key-value pairs.
// The generated keys will be sorted. You can tune the parameters to generate
// different kinds of test key/value pairs for different scenarios.
void GenerateRandomKVs(std::vector<std::string>* keys,
std::vector<std::string>* values, const int from,
const int len, const int step = 1,
const int padding_size = 0,
const int keys_share_prefix = 1) {
Random rnd(302);
  // generate keys with different prefixes
  for (int i = from; i < from + len; i += step) {
    // generate keys that share the same prefix
    for (int j = 0; j < keys_share_prefix; ++j) {
      keys->emplace_back(GenerateKey(i, j, padding_size, &rnd));
      // 100-byte values
      values->emplace_back(rnd.RandomString(100));
}
}
}
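// For example, GenerateRandomKVs(&keys, &values, 0, 100) yields 100 sorted
// keys "     0   0" ... "    99   0" (no padding, one key per prefix), each
// paired with a 100-byte random value.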
TEST(DataBlockHashIndex, DataBlockHashTestSmall) {
DataBlockHashIndexBuilder builder;
builder.Initialize(0.75 /*util_ratio*/);
for (int j = 0; j < 5; j++) {
for (uint8_t i = 0; i < 2 + j; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
builder.Add(key, restart_point);
}
size_t estimated_size = builder.EstimateSize();
std::string buffer("fake"), buffer2;
size_t original_size = buffer.size();
estimated_size += original_size;
builder.Finish(buffer);
ASSERT_EQ(buffer.size(), estimated_size);
    buffer2 = buffer;  // copy to a new address to verify offsets are relative
Slice s(buffer2);
DataBlockHashIndex index;
uint16_t map_offset;
index.Initialize(s.data(), static_cast<uint16_t>(s.size()), &map_offset);
// the additional hash map should start at the end of the buffer
ASSERT_EQ(original_size, map_offset);
for (uint8_t i = 0; i < 2; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
ASSERT_TRUE(
SearchForOffset(index, s.data(), map_offset, key, restart_point));
}
builder.Reset();
}
}
TEST(DataBlockHashIndex, DataBlockHashTest) {
  // #keys = 100; the builder sizes the bucket array from util_ratio = 0.75
DataBlockHashIndexBuilder builder;
builder.Initialize(0.75 /*util_ratio*/);
for (uint8_t i = 0; i < 100; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
builder.Add(key, restart_point);
}
size_t estimated_size = builder.EstimateSize();
std::string buffer("fake content"), buffer2;
size_t original_size = buffer.size();
estimated_size += original_size;
builder.Finish(buffer);
ASSERT_EQ(buffer.size(), estimated_size);
  buffer2 = buffer;  // copy to a new address to verify offsets are relative
Slice s(buffer2);
DataBlockHashIndex index;
uint16_t map_offset;
index.Initialize(s.data(), static_cast<uint16_t>(s.size()), &map_offset);
// the additional hash map should start at the end of the buffer
ASSERT_EQ(original_size, map_offset);
for (uint8_t i = 0; i < 100; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
ASSERT_TRUE(
SearchForOffset(index, s.data(), map_offset, key, restart_point));
}
}
TEST(DataBlockHashIndex, DataBlockHashTestCollision) {
  // Expect hash collisions across buckets, exercising the kCollision path
DataBlockHashIndexBuilder builder;
builder.Initialize(0.75 /*util_ratio*/);
for (uint8_t i = 0; i < 100; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
builder.Add(key, restart_point);
}
size_t estimated_size = builder.EstimateSize();
std::string buffer("some other fake content to take up space"), buffer2;
size_t original_size = buffer.size();
estimated_size += original_size;
builder.Finish(buffer);
ASSERT_EQ(buffer.size(), estimated_size);
  buffer2 = buffer;  // copy to a new address to verify offsets are relative
Slice s(buffer2);
DataBlockHashIndex index;
uint16_t map_offset;
index.Initialize(s.data(), static_cast<uint16_t>(s.size()), &map_offset);
// the additional hash map should start at the end of the buffer
ASSERT_EQ(original_size, map_offset);
for (uint8_t i = 0; i < 100; i++) {
std::string key("key" + std::to_string(i));
uint8_t restart_point = i;
ASSERT_TRUE(
SearchForOffset(index, s.data(), map_offset, key, restart_point));
}
}
TEST(DataBlockHashIndex, DataBlockHashTestLarge) {
DataBlockHashIndexBuilder builder;
builder.Initialize(0.75 /*util_ratio*/);
std::unordered_map<std::string, uint8_t> m;
for (uint8_t i = 0; i < 100; i++) {
if (i % 2) {
continue; // leave half of the keys out
}
std::string key = "key" + std::to_string(i);
uint8_t restart_point = i;
builder.Add(key, restart_point);
m[key] = restart_point;
}
size_t estimated_size = builder.EstimateSize();
std::string buffer("filling stuff"), buffer2;
size_t original_size = buffer.size();
estimated_size += original_size;
builder.Finish(buffer);
ASSERT_EQ(buffer.size(), estimated_size);
  buffer2 = buffer;  // copy to a new address to verify offsets are relative
Slice s(buffer2);
DataBlockHashIndex index;
uint16_t map_offset;
index.Initialize(s.data(), static_cast<uint16_t>(s.size()), &map_offset);
// the additional hash map should start at the end of the buffer
ASSERT_EQ(original_size, map_offset);
for (uint8_t i = 0; i < 100; i++) {
std::string key = "key" + std::to_string(i);
uint8_t restart_point = i;
if (m.count(key)) {
ASSERT_TRUE(m[key] == restart_point);
ASSERT_TRUE(
SearchForOffset(index, s.data(), map_offset, key, restart_point));
} else {
      // We allow false positives, so don't test the non-existing keys.
      // When a false positive happens, the search continues into the
      // restart intervals to see if the key really exists.
}
}
}
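// The hash index stores restart indices as uint8_t, with the two largest
// values reserved for kNoEntry and kCollision, so 253 is the largest restart
// index it can represent; the two tests below probe both sides of that
// limit.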
TEST(DataBlockHashIndex, RestartIndexExceedMax) {
DataBlockHashIndexBuilder builder;
builder.Initialize(0.75 /*util_ratio*/);
std::unordered_map<std::string, uint8_t> m;
for (uint8_t i = 0; i <= 253; i++) {
std::string key = "key" + std::to_string(i);
uint8_t restart_point = i;
builder.Add(key, restart_point);
}
ASSERT_TRUE(builder.Valid());
builder.Reset();
for (uint8_t i = 0; i <= 254; i++) {
std::string key = "key" + std::to_string(i);
uint8_t restart_point = i;
builder.Add(key, restart_point);
}
ASSERT_FALSE(builder.Valid());
builder.Reset();
ASSERT_TRUE(builder.Valid());
}
TEST(DataBlockHashIndex, BlockRestartIndexExceedMax) {
Options options = Options();
BlockBuilder builder(1 /* block_restart_interval */,
true /* use_delta_encoding */,
false /* use_value_delta_encoding */,
BlockBasedTableOptions::kDataBlockBinaryAndHash);
// #restarts <= 253. HashIndex is valid
for (int i = 0; i <= 253; i++) {
std::string ukey = "key" + std::to_string(i);
InternalKey ikey(ukey, 0, kTypeValue);
builder.Add(ikey.Encode().ToString(), "value");
}
{
// read serialized contents of the block
Slice rawblock = builder.Finish();
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
ASSERT_EQ(reader.IndexType(),
BlockBasedTableOptions::kDataBlockBinaryAndHash);
}
builder.Reset();
// #restarts > 253. HashIndex is not used
for (int i = 0; i <= 254; i++) {
std::string ukey = "key" + std::to_string(i);
InternalKey ikey(ukey, 0, kTypeValue);
builder.Add(ikey.Encode().ToString(), "value");
}
{
// read serialized contents of the block
Slice rawblock = builder.Finish();
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
ASSERT_EQ(reader.IndexType(),
BlockBasedTableOptions::kDataBlockBinarySearch);
}
}
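// Offsets inside the hash index are uint16_t, so a block larger than
// kMaxBlockSizeSupportedByHashIndex (64 KiB) cannot be addressed by it and
// the builder falls back to plain binary search, as the second half of this
// test shows.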
TEST(DataBlockHashIndex, BlockSizeExceedMax) {
Options options = Options();
std::string ukey(10, 'k');
InternalKey ikey(ukey, 0, kTypeValue);
BlockBuilder builder(1 /* block_restart_interval */,
false /* use_delta_encoding */,
false /* use_value_delta_encoding */,
BlockBasedTableOptions::kDataBlockBinaryAndHash);
{
    // insert a large value. The block size plus the HashIndex is 65536.
std::string value(65502, 'v');
builder.Add(ikey.Encode().ToString(), value);
// read serialized contents of the block
Slice rawblock = builder.Finish();
ASSERT_LE(rawblock.size(), kMaxBlockSizeSupportedByHashIndex);
std::cerr << "block size: " << rawblock.size() << std::endl;
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
ASSERT_EQ(reader.IndexType(),
BlockBasedTableOptions::kDataBlockBinaryAndHash);
}
builder.Reset();
{
    // insert a large value. The block size plus the HashIndex would be 65537,
    // which exceeds the max block size supported by the HashIndex (65536).
    // So when the build finishes, no HashIndex is created for the block.
std::string value(65503, 'v');
builder.Add(ikey.Encode().ToString(), value);
// read serialized contents of the block
Slice rawblock = builder.Finish();
ASSERT_LE(rawblock.size(), kMaxBlockSizeSupportedByHashIndex);
std::cerr << "block size: " << rawblock.size() << std::endl;
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
    // the index type has fallen back to binary search at build time.
ASSERT_EQ(reader.IndexType(),
BlockBasedTableOptions::kDataBlockBinarySearch);
}
}
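// SeekForGet() is the hash-index-aware seek used on the Get() path: a false
// return means the hash index proves the key cannot be in this block; a true
// return means the key may exist, and iter->Valid() tells whether a
// candidate entry was actually positioned.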
TEST(DataBlockHashIndex, BlockTestSingleKey) {
Options options = Options();
BlockBuilder builder(16 /* block_restart_interval */,
true /* use_delta_encoding */,
false /* use_value_delta_encoding */,
BlockBasedTableOptions::kDataBlockBinaryAndHash);
std::string ukey("gopher");
std::string value("gold");
InternalKey ikey(ukey, 10, kTypeValue);
builder.Add(ikey.Encode().ToString(), value /*value*/);
// read serialized contents of the block
Slice rawblock = builder.Finish();
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
const InternalKeyComparator icmp(BytewiseComparator());
auto iter = reader.NewDataIterator(icmp.user_comparator(),
kDisableGlobalSequenceNumber);
bool may_exist;
// search in block for the key just inserted
{
InternalKey seek_ikey(ukey, 10, kValueTypeForSeek);
may_exist = iter->SeekForGet(seek_ikey.Encode().ToString());
ASSERT_TRUE(may_exist);
ASSERT_TRUE(iter->Valid());
ASSERT_EQ(
options.comparator->Compare(iter->key(), ikey.Encode().ToString()), 0);
ASSERT_EQ(iter->value(), value);
}
// search in block for the existing ukey, but with higher seqno
{
InternalKey seek_ikey(ukey, 20, kValueTypeForSeek);
// HashIndex should be able to set the iter correctly
may_exist = iter->SeekForGet(seek_ikey.Encode().ToString());
ASSERT_TRUE(may_exist);
ASSERT_TRUE(iter->Valid());
// user key should match
ASSERT_EQ(options.comparator->Compare(ExtractUserKey(iter->key()), ukey),
0);
    // the seek key's seqno should be greater than that of the iter result
ASSERT_GT(GetInternalKeySeqno(seek_ikey.Encode()),
GetInternalKeySeqno(iter->key()));
ASSERT_EQ(iter->value(), value);
}
  // Search in block for the existing ukey, but with a lower seqno.
  // In this case, the hash can find the only occurrence of the user_key, but
  // ParseNextDataKey() will skip it as it does not have an older seqno.
  // SeekForGet() still locates the user_key, and iter->Valid() == false
  // indicates that we've reached the end of the block and the caller should
  // continue searching the next block.
{
InternalKey seek_ikey(ukey, 5, kValueTypeForSeek);
may_exist = iter->SeekForGet(seek_ikey.Encode().ToString());
ASSERT_TRUE(may_exist);
    ASSERT_FALSE(iter->Valid());  // should have reached the end of the block
}
delete iter;
}
TEST(DataBlockHashIndex, BlockTestLarge) {
Random rnd(1019);
Options options = Options();
std::vector<std::string> keys;
std::vector<std::string> values;
BlockBuilder builder(16 /* block_restart_interval */,
true /* use_delta_encoding */,
false /* use_value_delta_encoding */,
BlockBasedTableOptions::kDataBlockBinaryAndHash);
int num_records = 500;
GenerateRandomKVs(&keys, &values, 0, num_records);
  // Generate keys. A trailing "1" marks existent keys. Later we will seek
  // keys with a trailing "0" to test seeking non-existent keys.
for (int i = 0; i < num_records; i++) {
std::string ukey(keys[i] + "1" /* existing key marker */);
InternalKey ikey(ukey, 0, kTypeValue);
builder.Add(ikey.Encode().ToString(), values[i]);
}
// read serialized contents of the block
Slice rawblock = builder.Finish();
// create block reader
BlockContents contents;
contents.data = rawblock;
Block reader(std::move(contents));
const InternalKeyComparator icmp(BytewiseComparator());
  // Randomly seek existent keys.
for (int i = 0; i < num_records; i++) {
auto iter = reader.NewDataIterator(icmp.user_comparator(),
kDisableGlobalSequenceNumber);
// find a random key in the lookaside array
int index = rnd.Uniform(num_records);
std::string ukey(keys[index] + "1" /* existing key marker */);
InternalKey ikey(ukey, 0, kTypeValue);
// search in block for this key
bool may_exist = iter->SeekForGet(ikey.Encode().ToString());
ASSERT_TRUE(may_exist);
ASSERT_TRUE(iter->Valid());
ASSERT_EQ(values[index], iter->value());
delete iter;
}
  // Randomly seek non-existent user keys.
  // In case A), the user_key cannot be found in the HashIndex. The key may
  // exist in the next block, so the iter is invalidated to tell the caller
  // to search the next block. This test case belongs to case A).
  //
  // Note that for non-existent keys there is a possibility of false
  // positives, i.e. the key is still hashed into some restart interval.
  // Two additional outcomes are possible:
  // B) linear-seek the restart interval without finding the key; the iter
  //    stops at the start of the next restart interval. The key does not
  //    exist anywhere.
  // C) linear-seek the restart interval without finding the key; the iter
  //    stops at the end of the block, i.e. restarts_. The key may exist in
  //    the next block.
  // So these combinations are possible when searching a non-existent
  // user_key:
  //
  // case#    may_exist  iter->Valid()
  //     A         true          false
  //     B        false           true
  //     C         true          false
  //
  // In all three cases (may_exist, iter->Valid()) is never (false, false),
  // which is the invariant the loop below asserts.
for (int i = 0; i < num_records; i++) {
auto iter = reader.NewDataIterator(icmp.user_comparator(),
kDisableGlobalSequenceNumber);
// find a random key in the lookaside array
int index = rnd.Uniform(num_records);
std::string ukey(keys[index] + "0" /* non-existing key marker */);
InternalKey ikey(ukey, 0, kTypeValue);
// search in block for this key
bool may_exist = iter->SeekForGet(ikey.Encode().ToString());
if (!may_exist) {
ASSERT_TRUE(iter->Valid());
}
if (!iter->Valid()) {
ASSERT_TRUE(may_exist);
}
delete iter;
}
}
// Helper routine for DataBlockHashIndex.BlockBoundary: build a table with
// exactly two key/value pairs, reopen it, and run Get() for seek_ikey to
// exercise lookups around a block boundary.
void TestBoundary(InternalKey& ik1, std::string& v1, InternalKey& ik2,
std::string& v2, InternalKey& seek_ikey,
GetContext& get_context, Options& options) {
std::unique_ptr<WritableFileWriter> file_writer;
std::unique_ptr<RandomAccessFileReader> file_reader;
std::unique_ptr<TableReader> table_reader;
int level_ = -1;
std::vector<std::string> keys;
const ImmutableOptions ioptions(options);
const MutableCFOptions moptions(options);
const InternalKeyComparator internal_comparator(options.comparator);
EnvOptions soptions;
soptions.use_mmap_reads = ioptions.allow_mmap_reads;
test::StringSink* sink = new test::StringSink();
std::unique_ptr<FSWritableFile> f(sink);
file_writer.reset(
new WritableFileWriter(std::move(f), "" /* don't care */, FileOptions()));
std::unique_ptr<TableBuilder> builder;
IntTblPropCollectorFactories int_tbl_prop_collector_factories;
std::string column_family_name;
builder.reset(ioptions.table_factory->NewTableBuilder(
TableBuilderOptions(
ioptions, moptions, internal_comparator,
&int_tbl_prop_collector_factories, options.compression,
CompressionOptions(),
TablePropertiesCollectorFactory::Context::kUnknownColumnFamily,
column_family_name, level_),
file_writer.get()));
builder->Add(ik1.Encode().ToString(), v1);
builder->Add(ik2.Encode().ToString(), v2);
EXPECT_TRUE(builder->status().ok());
Status s = builder->Finish();
ASSERT_OK(file_writer->Flush());
EXPECT_TRUE(s.ok()) << s.ToString();
EXPECT_EQ(sink->contents().size(), builder->FileSize());
// Open the table
test::StringSource* source = new test::StringSource(
sink->contents(), 0 /*uniq_id*/, ioptions.allow_mmap_reads);
std::unique_ptr<FSRandomAccessFile> file(source);
file_reader.reset(new RandomAccessFileReader(std::move(file), "test"));
const bool kSkipFilters = true;
const bool kImmortal = true;
ASSERT_OK(ioptions.table_factory->NewTableReader(
TableReaderOptions(ioptions, moptions.prefix_extractor, soptions,
internal_comparator,
0 /* block_protection_bytes_per_key */, !kSkipFilters,
!kImmortal, level_),
std::move(file_reader), sink->contents().size(), &table_reader));
// Search using Get()
ReadOptions ro;
ASSERT_OK(table_reader->Get(ro, seek_ikey.Encode().ToString(), &get_context,
moptions.prefix_extractor.get()));
}
TEST(DataBlockHashIndex, BlockBoundary) {
BlockBasedTableOptions table_options;
table_options.data_block_index_type =
BlockBasedTableOptions::kDataBlockBinaryAndHash;
table_options.block_restart_interval = 1;
table_options.block_size = 4096;
Options options;
options.comparator = BytewiseComparator();
options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  // Insert two large k/v pairs. Given that the block_size is 4096, each k/v
  // pair takes up an entire block.
// [ k1/v1 ][ k2/v2 ]
// [ Block N ][ Block N+1 ]
{
// [ "aab"@100 ][ "axy"@10 ]
// | Block N ][ Block N+1 ]
// seek for "axy"@60
std::string uk1("aab");
InternalKey ik1(uk1, 100, kTypeValue);
std::string v1(4100, '1'); // large value
std::string uk2("axy");
InternalKey ik2(uk2, 10, kTypeValue);
std::string v2(4100, '2'); // large value
PinnableSlice value;
std::string seek_ukey("axy");
InternalKey seek_ikey(seek_ukey, 60, kTypeValue);
GetContext get_context(options.comparator, nullptr, nullptr, nullptr,
GetContext::kNotFound, seek_ukey, &value, nullptr,
nullptr, nullptr, true, nullptr, nullptr);
TestBoundary(ik1, v1, ik2, v2, seek_ikey, get_context, options);
ASSERT_EQ(get_context.State(), GetContext::kFound);
ASSERT_EQ(value, v2);
value.Reset();
}
{
// [ "axy"@100 ][ "axy"@10 ]
// | Block N ][ Block N+1 ]
// seek for "axy"@60
std::string uk1("axy");
InternalKey ik1(uk1, 100, kTypeValue);
std::string v1(4100, '1'); // large value
std::string uk2("axy");
InternalKey ik2(uk2, 10, kTypeValue);
std::string v2(4100, '2'); // large value
PinnableSlice value;
std::string seek_ukey("axy");
InternalKey seek_ikey(seek_ukey, 60, kTypeValue);
GetContext get_context(options.comparator, nullptr, nullptr, nullptr,
GetContext::kNotFound, seek_ukey, &value, nullptr,
nullptr, nullptr, true, nullptr, nullptr);
TestBoundary(ik1, v1, ik2, v2, seek_ikey, get_context, options);
ASSERT_EQ(get_context.State(), GetContext::kFound);
ASSERT_EQ(value, v2);
value.Reset();
}
{
// [ "axy"@100 ][ "axy"@10 ]
// | Block N ][ Block N+1 ]
// seek for "axy"@120
std::string uk1("axy");
InternalKey ik1(uk1, 100, kTypeValue);
std::string v1(4100, '1'); // large value
std::string uk2("axy");
InternalKey ik2(uk2, 10, kTypeValue);
std::string v2(4100, '2'); // large value
PinnableSlice value;
std::string seek_ukey("axy");
InternalKey seek_ikey(seek_ukey, 120, kTypeValue);
GetContext get_context(options.comparator, nullptr, nullptr, nullptr,
GetContext::kNotFound, seek_ukey, &value, nullptr,
nullptr, nullptr, true, nullptr, nullptr);
TestBoundary(ik1, v1, ik2, v2, seek_ikey, get_context, options);
ASSERT_EQ(get_context.State(), GetContext::kFound);
ASSERT_EQ(value, v1);
value.Reset();
}
{
// [ "axy"@100 ][ "axy"@10 ]
// | Block N ][ Block N+1 ]
// seek for "axy"@5
std::string uk1("axy");
InternalKey ik1(uk1, 100, kTypeValue);
std::string v1(4100, '1'); // large value
std::string uk2("axy");
InternalKey ik2(uk2, 10, kTypeValue);
std::string v2(4100, '2'); // large value
PinnableSlice value;
std::string seek_ukey("axy");
InternalKey seek_ikey(seek_ukey, 5, kTypeValue);
GetContext get_context(options.comparator, nullptr, nullptr, nullptr,
GetContext::kNotFound, seek_ukey, &value, nullptr,
nullptr, nullptr, true, nullptr, nullptr);
TestBoundary(ik1, v1, ik2, v2, seek_ikey, get_context, options);
ASSERT_EQ(get_context.State(), GetContext::kNotFound);
value.Reset();
}
}
} // namespace ROCKSDB_NAMESPACE
int main(int argc, char** argv) {
ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}