// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "db/internal_stats.h"

#include <algorithm>
#include <cinttypes>
#include <cstddef>
#include <limits>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

#include "cache/cache_entry_roles.h"
#include "cache/cache_entry_stats.h"
#include "db/column_family.h"
#include "db/db_impl/db_impl.h"
#include "rocksdb/system_clock.h"
#include "rocksdb/table.h"
#include "table/block_based/cachable_entry.h"
#include "util/string_util.h"

namespace ROCKSDB_NAMESPACE {

#ifndef ROCKSDB_LITE

const std::map<LevelStatType, LevelStat> InternalStats::compaction_level_stats =
    {
        {LevelStatType::NUM_FILES, LevelStat{"NumFiles", "Files"}},
        {LevelStatType::COMPACTED_FILES,
         LevelStat{"CompactedFiles", "CompactedFiles"}},
        {LevelStatType::SIZE_BYTES, LevelStat{"SizeBytes", "Size"}},
        {LevelStatType::SCORE, LevelStat{"Score", "Score"}},
        {LevelStatType::READ_GB, LevelStat{"ReadGB", "Read(GB)"}},
        {LevelStatType::RN_GB, LevelStat{"RnGB", "Rn(GB)"}},
        {LevelStatType::RNP1_GB, LevelStat{"Rnp1GB", "Rnp1(GB)"}},
        {LevelStatType::WRITE_GB, LevelStat{"WriteGB", "Write(GB)"}},
        {LevelStatType::W_NEW_GB, LevelStat{"WnewGB", "Wnew(GB)"}},
        {LevelStatType::MOVED_GB, LevelStat{"MovedGB", "Moved(GB)"}},
        {LevelStatType::WRITE_AMP, LevelStat{"WriteAmp", "W-Amp"}},
        {LevelStatType::READ_MBPS, LevelStat{"ReadMBps", "Rd(MB/s)"}},
        {LevelStatType::WRITE_MBPS, LevelStat{"WriteMBps", "Wr(MB/s)"}},
        {LevelStatType::COMP_SEC, LevelStat{"CompSec", "Comp(sec)"}},
        {LevelStatType::COMP_CPU_SEC,
         LevelStat{"CompMergeCPU", "CompMergeCPU(sec)"}},
        {LevelStatType::COMP_COUNT, LevelStat{"CompCount", "Comp(cnt)"}},
        {LevelStatType::AVG_SEC, LevelStat{"AvgSec", "Avg(sec)"}},
        {LevelStatType::KEY_IN, LevelStat{"KeyIn", "KeyIn"}},
        {LevelStatType::KEY_DROP, LevelStat{"KeyDrop", "KeyDrop"}},
        {LevelStatType::R_BLOB_GB, LevelStat{"RblobGB", "Rblob(GB)"}},
        {LevelStatType::W_BLOB_GB, LevelStat{"WblobGB", "Wblob(GB)"}},
};

namespace {
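// Units used below for converting raw byte and microsecond counters into the
// MB/GB and seconds figures shown in the stats output.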
const double kMB = 1048576.0;
const double kGB = kMB * 1024;
const double kMicrosInSec = 1000000.0;

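// Writes the "** Compaction Stats [<cf_name>] **" banner and the column
// header line into `buf`, using the header names registered in
// compaction_level_stats. COMPACTED_FILES has no column of its own because it
// is folded into the Files column.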
void PrintLevelStatsHeader(char* buf, size_t len, const std::string& cf_name,
                           const std::string& group_by) {
  int written_size =
      snprintf(buf, len, "\n** Compaction Stats [%s] **\n", cf_name.c_str());
  written_size = std::min(written_size, static_cast<int>(len));
  auto hdr = [](LevelStatType t) {
    return InternalStats::compaction_level_stats.at(t).header_name.c_str();
  };
  int line_size = snprintf(
      buf + written_size, len - written_size,
      "%s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s %s "
      "%s\n",
      // Note that we skip COMPACTED_FILES and merge it with Files column
      group_by.c_str(), hdr(LevelStatType::NUM_FILES),
      hdr(LevelStatType::SIZE_BYTES), hdr(LevelStatType::SCORE),
      hdr(LevelStatType::READ_GB), hdr(LevelStatType::RN_GB),
      hdr(LevelStatType::RNP1_GB), hdr(LevelStatType::WRITE_GB),
      hdr(LevelStatType::W_NEW_GB), hdr(LevelStatType::MOVED_GB),
      hdr(LevelStatType::WRITE_AMP), hdr(LevelStatType::READ_MBPS),
      hdr(LevelStatType::WRITE_MBPS), hdr(LevelStatType::COMP_SEC),
      hdr(LevelStatType::COMP_CPU_SEC), hdr(LevelStatType::COMP_COUNT),
      hdr(LevelStatType::AVG_SEC), hdr(LevelStatType::KEY_IN),
      hdr(LevelStatType::KEY_DROP), hdr(LevelStatType::R_BLOB_GB),
      hdr(LevelStatType::W_BLOB_GB));

  written_size += line_size;
  written_size = std::min(written_size, static_cast<int>(len));
  snprintf(buf + written_size, len - written_size, "%s\n",
           std::string(line_size, '-').c_str());
}

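// Derives one value per LevelStatType from the raw CompactionStats and the
// per-level inputs (file counts, size, score, write amplification) and stores
// them in `level_stats` for formatting by PrintLevelStats().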
void PrepareLevelStats(std::map<LevelStatType, double>* level_stats,
                       int num_files, int being_compacted,
                       double total_file_size, double score, double w_amp,
                       const InternalStats::CompactionStats& stats) {
  const uint64_t bytes_read = stats.bytes_read_non_output_levels +
                              stats.bytes_read_output_level +
                              stats.bytes_read_blob;
  const uint64_t bytes_written = stats.bytes_written + stats.bytes_written_blob;
  const int64_t bytes_new = stats.bytes_written - stats.bytes_read_output_level;
  const double elapsed = (stats.micros + 1) / kMicrosInSec;

  (*level_stats)[LevelStatType::NUM_FILES] = num_files;
  (*level_stats)[LevelStatType::COMPACTED_FILES] = being_compacted;
  (*level_stats)[LevelStatType::SIZE_BYTES] = total_file_size;
  (*level_stats)[LevelStatType::SCORE] = score;
  (*level_stats)[LevelStatType::READ_GB] = bytes_read / kGB;
  (*level_stats)[LevelStatType::RN_GB] =
      stats.bytes_read_non_output_levels / kGB;
  (*level_stats)[LevelStatType::RNP1_GB] = stats.bytes_read_output_level / kGB;
  (*level_stats)[LevelStatType::WRITE_GB] = stats.bytes_written / kGB;
  (*level_stats)[LevelStatType::W_NEW_GB] = bytes_new / kGB;
  (*level_stats)[LevelStatType::MOVED_GB] = stats.bytes_moved / kGB;
  (*level_stats)[LevelStatType::WRITE_AMP] = w_amp;
  (*level_stats)[LevelStatType::READ_MBPS] = bytes_read / kMB / elapsed;
  (*level_stats)[LevelStatType::WRITE_MBPS] = bytes_written / kMB / elapsed;
  (*level_stats)[LevelStatType::COMP_SEC] = stats.micros / kMicrosInSec;
  (*level_stats)[LevelStatType::COMP_CPU_SEC] = stats.cpu_micros / kMicrosInSec;
  (*level_stats)[LevelStatType::COMP_COUNT] = stats.count;
  (*level_stats)[LevelStatType::AVG_SEC] =
      stats.count == 0 ? 0 : stats.micros / kMicrosInSec / stats.count;
  (*level_stats)[LevelStatType::KEY_IN] =
      static_cast<double>(stats.num_input_records);
  (*level_stats)[LevelStatType::KEY_DROP] =
      static_cast<double>(stats.num_dropped_records);
  (*level_stats)[LevelStatType::R_BLOB_GB] = stats.bytes_read_blob / kGB;
  (*level_stats)[LevelStatType::W_BLOB_GB] = stats.bytes_written_blob / kGB;
}

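// Formats a single stats row from a prepared LevelStatType -> value map.
// The column order matches the header emitted by PrintLevelStatsHeader().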
void PrintLevelStats(char* buf, size_t len, const std::string& name,
                     const std::map<LevelStatType, double>& stat_value) {
  snprintf(
      buf, len,
      "%4s "      /* Level */
      "%6d/%-3d " /* Files */
      "%8s "      /* Size */
      "%5.1f "    /* Score */
      "%8.1f "    /* Read(GB) */
      "%7.1f "    /* Rn(GB) */
      "%8.1f "    /* Rnp1(GB) */
      "%9.1f "    /* Write(GB) */
      "%8.1f "    /* Wnew(GB) */
      "%9.1f "    /* Moved(GB) */
      "%5.1f "    /* W-Amp */
      "%8.1f "    /* Rd(MB/s) */
      "%8.1f "    /* Wr(MB/s) */
      "%9.2f "    /* Comp(sec) */
      "%17.2f "   /* CompMergeCPU(sec) */
      "%9d "      /* Comp(cnt) */
      "%8.3f "    /* Avg(sec) */
      "%7s "      /* KeyIn */
      "%6s "      /* KeyDrop */
      "%9.1f "    /* Rblob(GB) */
      "%9.1f\n",  /* Wblob(GB) */
      name.c_str(), static_cast<int>(stat_value.at(LevelStatType::NUM_FILES)),
      static_cast<int>(stat_value.at(LevelStatType::COMPACTED_FILES)),
      BytesToHumanString(
          static_cast<uint64_t>(stat_value.at(LevelStatType::SIZE_BYTES)))
          .c_str(),
      stat_value.at(LevelStatType::SCORE),
      stat_value.at(LevelStatType::READ_GB),
      stat_value.at(LevelStatType::RN_GB),
      stat_value.at(LevelStatType::RNP1_GB),
      stat_value.at(LevelStatType::WRITE_GB),
      stat_value.at(LevelStatType::W_NEW_GB),
      stat_value.at(LevelStatType::MOVED_GB),
      stat_value.at(LevelStatType::WRITE_AMP),
      stat_value.at(LevelStatType::READ_MBPS),
      stat_value.at(LevelStatType::WRITE_MBPS),
      stat_value.at(LevelStatType::COMP_SEC),
      stat_value.at(LevelStatType::COMP_CPU_SEC),
      static_cast<int>(stat_value.at(LevelStatType::COMP_COUNT)),
      stat_value.at(LevelStatType::AVG_SEC),
      NumberToHumanString(
          static_cast<std::int64_t>(stat_value.at(LevelStatType::KEY_IN)))
          .c_str(),
      NumberToHumanString(
          static_cast<std::int64_t>(stat_value.at(LevelStatType::KEY_DROP)))
          .c_str(),
      stat_value.at(LevelStatType::R_BLOB_GB),
      stat_value.at(LevelStatType::W_BLOB_GB));
}

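// Convenience overload: prepares the per-level stats map and formats it in
// one step.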
void PrintLevelStats(char* buf, size_t len, const std::string& name,
                     int num_files, int being_compacted,
                     double total_file_size, double score, double w_amp,
                     const InternalStats::CompactionStats& stats) {
  std::map<LevelStatType, double> level_stats;
  PrepareLevelStats(&level_stats, num_files, being_compacted, total_file_size,
                    score, w_amp, stats);
  PrintLevelStats(buf, len, name, level_stats);
}

// Assumes that trailing numbers represent an optional argument. This requires
// property names to not end with numbers.
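// For example, "rocksdb.num-files-at-level5" is split into the name
// "rocksdb.num-files-at-level" and the argument "5".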
std::pair<Slice, Slice> GetPropertyNameAndArg(const Slice& property) {
  Slice name = property, arg = property;
  size_t sfx_len = 0;
  while (sfx_len < property.size() &&
         isdigit(property[property.size() - sfx_len - 1])) {
    ++sfx_len;
  }
  name.remove_suffix(sfx_len);
  arg.remove_prefix(property.size() - sfx_len);
  return {name, arg};
}
}  // anonymous namespace

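// Property name fragments; each suffix below is combined with rocksdb_prefix
// to form the corresponding user-facing DB::Properties constant.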
static const std::string rocksdb_prefix = "rocksdb.";

static const std::string num_files_at_level_prefix = "num-files-at-level";
static const std::string compression_ratio_at_level_prefix =
    "compression-ratio-at-level";
static const std::string allstats = "stats";
static const std::string sstables = "sstables";
static const std::string cfstats = "cfstats";
static const std::string cfstats_no_file_histogram =
    "cfstats-no-file-histogram";
static const std::string cf_file_histogram = "cf-file-histogram";
static const std::string dbstats = "dbstats";
static const std::string levelstats = "levelstats";
static const std::string block_cache_entry_stats = "block-cache-entry-stats";
static const std::string num_immutable_mem_table = "num-immutable-mem-table";
static const std::string num_immutable_mem_table_flushed =
    "num-immutable-mem-table-flushed";
static const std::string mem_table_flush_pending = "mem-table-flush-pending";
static const std::string compaction_pending = "compaction-pending";
static const std::string background_errors = "background-errors";
static const std::string cur_size_active_mem_table =
    "cur-size-active-mem-table";
static const std::string cur_size_all_mem_tables = "cur-size-all-mem-tables";
static const std::string size_all_mem_tables = "size-all-mem-tables";
static const std::string num_entries_active_mem_table =
    "num-entries-active-mem-table";
static const std::string num_entries_imm_mem_tables =
    "num-entries-imm-mem-tables";
static const std::string num_deletes_active_mem_table =
    "num-deletes-active-mem-table";
static const std::string num_deletes_imm_mem_tables =
    "num-deletes-imm-mem-tables";
static const std::string estimate_num_keys = "estimate-num-keys";
static const std::string estimate_table_readers_mem =
    "estimate-table-readers-mem";
static const std::string is_file_deletions_enabled =
    "is-file-deletions-enabled";
static const std::string num_snapshots = "num-snapshots";
static const std::string oldest_snapshot_time = "oldest-snapshot-time";
static const std::string oldest_snapshot_sequence = "oldest-snapshot-sequence";
static const std::string num_live_versions = "num-live-versions";
static const std::string current_version_number =
    "current-super-version-number";
static const std::string estimate_live_data_size = "estimate-live-data-size";
static const std::string min_log_number_to_keep_str = "min-log-number-to-keep";
static const std::string min_obsolete_sst_number_to_keep_str =
    "min-obsolete-sst-number-to-keep";
static const std::string base_level_str = "base-level";
static const std::string total_sst_files_size = "total-sst-files-size";
static const std::string live_sst_files_size = "live-sst-files-size";
static const std::string estimate_pending_comp_bytes =
    "estimate-pending-compaction-bytes";
static const std::string aggregated_table_properties =
    "aggregated-table-properties";
static const std::string aggregated_table_properties_at_level =
    aggregated_table_properties + "-at-level";
static const std::string num_running_compactions = "num-running-compactions";
static const std::string num_running_flushes = "num-running-flushes";
static const std::string actual_delayed_write_rate =
    "actual-delayed-write-rate";
static const std::string is_write_stopped = "is-write-stopped";
static const std::string estimate_oldest_key_time = "estimate-oldest-key-time";
static const std::string block_cache_capacity = "block-cache-capacity";
static const std::string block_cache_usage = "block-cache-usage";
static const std::string block_cache_pinned_usage = "block-cache-pinned-usage";
static const std::string options_statistics = "options-statistics";

const std::string DB::Properties::kNumFilesAtLevelPrefix =
    rocksdb_prefix + num_files_at_level_prefix;
const std::string DB::Properties::kCompressionRatioAtLevelPrefix =
    rocksdb_prefix + compression_ratio_at_level_prefix;
const std::string DB::Properties::kStats = rocksdb_prefix + allstats;
const std::string DB::Properties::kSSTables = rocksdb_prefix + sstables;
const std::string DB::Properties::kCFStats = rocksdb_prefix + cfstats;
const std::string DB::Properties::kCFStatsNoFileHistogram =
    rocksdb_prefix + cfstats_no_file_histogram;
const std::string DB::Properties::kCFFileHistogram =
    rocksdb_prefix + cf_file_histogram;
const std::string DB::Properties::kDBStats = rocksdb_prefix + dbstats;
const std::string DB::Properties::kLevelStats = rocksdb_prefix + levelstats;
const std::string DB::Properties::kBlockCacheEntryStats =
    rocksdb_prefix + block_cache_entry_stats;
const std::string DB::Properties::kNumImmutableMemTable =
    rocksdb_prefix + num_immutable_mem_table;
const std::string DB::Properties::kNumImmutableMemTableFlushed =
    rocksdb_prefix + num_immutable_mem_table_flushed;
const std::string DB::Properties::kMemTableFlushPending =
    rocksdb_prefix + mem_table_flush_pending;
const std::string DB::Properties::kCompactionPending =
    rocksdb_prefix + compaction_pending;
const std::string DB::Properties::kNumRunningCompactions =
    rocksdb_prefix + num_running_compactions;
const std::string DB::Properties::kNumRunningFlushes =
    rocksdb_prefix + num_running_flushes;
const std::string DB::Properties::kBackgroundErrors =
    rocksdb_prefix + background_errors;
const std::string DB::Properties::kCurSizeActiveMemTable =
    rocksdb_prefix + cur_size_active_mem_table;
const std::string DB::Properties::kCurSizeAllMemTables =
    rocksdb_prefix + cur_size_all_mem_tables;
const std::string DB::Properties::kSizeAllMemTables =
    rocksdb_prefix + size_all_mem_tables;
const std::string DB::Properties::kNumEntriesActiveMemTable =
    rocksdb_prefix + num_entries_active_mem_table;
const std::string DB::Properties::kNumEntriesImmMemTables =
    rocksdb_prefix + num_entries_imm_mem_tables;
const std::string DB::Properties::kNumDeletesActiveMemTable =
    rocksdb_prefix + num_deletes_active_mem_table;
const std::string DB::Properties::kNumDeletesImmMemTables =
    rocksdb_prefix + num_deletes_imm_mem_tables;
const std::string DB::Properties::kEstimateNumKeys =
    rocksdb_prefix + estimate_num_keys;
const std::string DB::Properties::kEstimateTableReadersMem =
    rocksdb_prefix + estimate_table_readers_mem;
const std::string DB::Properties::kIsFileDeletionsEnabled =
    rocksdb_prefix + is_file_deletions_enabled;
const std::string DB::Properties::kNumSnapshots =
    rocksdb_prefix + num_snapshots;
const std::string DB::Properties::kOldestSnapshotTime =
    rocksdb_prefix + oldest_snapshot_time;
const std::string DB::Properties::kOldestSnapshotSequence =
    rocksdb_prefix + oldest_snapshot_sequence;
const std::string DB::Properties::kNumLiveVersions =
    rocksdb_prefix + num_live_versions;
const std::string DB::Properties::kCurrentSuperVersionNumber =
    rocksdb_prefix + current_version_number;
const std::string DB::Properties::kEstimateLiveDataSize =
    rocksdb_prefix + estimate_live_data_size;
const std::string DB::Properties::kMinLogNumberToKeep =
    rocksdb_prefix + min_log_number_to_keep_str;
const std::string DB::Properties::kMinObsoleteSstNumberToKeep =
    rocksdb_prefix + min_obsolete_sst_number_to_keep_str;
const std::string DB::Properties::kTotalSstFilesSize =
    rocksdb_prefix + total_sst_files_size;
const std::string DB::Properties::kLiveSstFilesSize =
    rocksdb_prefix + live_sst_files_size;
const std::string DB::Properties::kBaseLevel = rocksdb_prefix + base_level_str;
const std::string DB::Properties::kEstimatePendingCompactionBytes =
    rocksdb_prefix + estimate_pending_comp_bytes;
const std::string DB::Properties::kAggregatedTableProperties =
    rocksdb_prefix + aggregated_table_properties;
const std::string DB::Properties::kAggregatedTablePropertiesAtLevel =
    rocksdb_prefix + aggregated_table_properties_at_level;
const std::string DB::Properties::kActualDelayedWriteRate =
    rocksdb_prefix + actual_delayed_write_rate;
const std::string DB::Properties::kIsWriteStopped =
    rocksdb_prefix + is_write_stopped;
const std::string DB::Properties::kEstimateOldestKeyTime =
    rocksdb_prefix + estimate_oldest_key_time;
const std::string DB::Properties::kBlockCacheCapacity =
    rocksdb_prefix + block_cache_capacity;
const std::string DB::Properties::kBlockCacheUsage =
    rocksdb_prefix + block_cache_usage;
const std::string DB::Properties::kBlockCachePinnedUsage =
    rocksdb_prefix + block_cache_pinned_usage;
const std::string DB::Properties::kOptionsStatistics =
    rocksdb_prefix + options_statistics;

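// Registry mapping each user-facing property name to the DBPropertyInfo
// holding its InternalStats handler member functions; slots that do not
// apply to a given property are nullptr.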
const std::unordered_map<std::string, DBPropertyInfo>
    InternalStats::ppt_name_to_info = {
        {DB::Properties::kNumFilesAtLevelPrefix,
         {false, &InternalStats::HandleNumFilesAtLevel, nullptr, nullptr,
          nullptr}},
        {DB::Properties::kCompressionRatioAtLevelPrefix,
         {false, &InternalStats::HandleCompressionRatioAtLevelPrefix, nullptr,
          nullptr, nullptr}},
        {DB::Properties::kLevelStats,
         {false, &InternalStats::HandleLevelStats, nullptr, nullptr, nullptr}},
        {DB::Properties::kStats,
         {false, &InternalStats::HandleStats, nullptr, nullptr, nullptr}},
        {DB::Properties::kCFStats,
         {false, &InternalStats::HandleCFStats, nullptr,
          &InternalStats::HandleCFMapStats, nullptr}},
        {DB::Properties::kCFStatsNoFileHistogram,
         {false, &InternalStats::HandleCFStatsNoFileHistogram, nullptr, nullptr,
          nullptr}},
        {DB::Properties::kCFFileHistogram,
         {false, &InternalStats::HandleCFFileHistogram, nullptr, nullptr,
          nullptr}},
        {DB::Properties::kDBStats,
         {false, &InternalStats::HandleDBStats, nullptr, nullptr, nullptr}},
|
|
|
        {DB::Properties::kBlockCacheEntryStats,
         {false, &InternalStats::HandleBlockCacheEntryStats, nullptr,
          &InternalStats::HandleBlockCacheEntryStatsMap, nullptr}},
        {DB::Properties::kSSTables,
         {false, &InternalStats::HandleSsTables, nullptr, nullptr, nullptr}},
        {DB::Properties::kAggregatedTableProperties,
         {false, &InternalStats::HandleAggregatedTableProperties, nullptr,
          &InternalStats::HandleAggregatedTablePropertiesMap, nullptr}},
        {DB::Properties::kAggregatedTablePropertiesAtLevel,
         {false, &InternalStats::HandleAggregatedTablePropertiesAtLevel,
          nullptr, &InternalStats::HandleAggregatedTablePropertiesAtLevelMap,
          nullptr}},
        {DB::Properties::kNumImmutableMemTable,
         {false, nullptr, &InternalStats::HandleNumImmutableMemTable, nullptr,
          nullptr}},
        {DB::Properties::kNumImmutableMemTableFlushed,
         {false, nullptr, &InternalStats::HandleNumImmutableMemTableFlushed,
          nullptr, nullptr}},
        {DB::Properties::kMemTableFlushPending,
         {false, nullptr, &InternalStats::HandleMemTableFlushPending, nullptr,
          nullptr}},
        {DB::Properties::kCompactionPending,
         {false, nullptr, &InternalStats::HandleCompactionPending, nullptr,
          nullptr}},
        {DB::Properties::kBackgroundErrors,
         {false, nullptr, &InternalStats::HandleBackgroundErrors, nullptr,
          nullptr}},
        {DB::Properties::kCurSizeActiveMemTable,
         {false, nullptr, &InternalStats::HandleCurSizeActiveMemTable, nullptr,
          nullptr}},
        {DB::Properties::kCurSizeAllMemTables,
         {false, nullptr, &InternalStats::HandleCurSizeAllMemTables, nullptr,
          nullptr}},
        {DB::Properties::kSizeAllMemTables,
         {false, nullptr, &InternalStats::HandleSizeAllMemTables, nullptr,
          nullptr}},
        {DB::Properties::kNumEntriesActiveMemTable,
         {false, nullptr, &InternalStats::HandleNumEntriesActiveMemTable,
          nullptr, nullptr}},
        {DB::Properties::kNumEntriesImmMemTables,
         {false, nullptr, &InternalStats::HandleNumEntriesImmMemTables, nullptr,
          nullptr}},
        {DB::Properties::kNumDeletesActiveMemTable,
         {false, nullptr, &InternalStats::HandleNumDeletesActiveMemTable,
          nullptr, nullptr}},
        {DB::Properties::kNumDeletesImmMemTables,
         {false, nullptr, &InternalStats::HandleNumDeletesImmMemTables, nullptr,
          nullptr}},
        {DB::Properties::kEstimateNumKeys,
         {false, nullptr, &InternalStats::HandleEstimateNumKeys, nullptr,
          nullptr}},
        {DB::Properties::kEstimateTableReadersMem,
         {true, nullptr, &InternalStats::HandleEstimateTableReadersMem, nullptr,
          nullptr}},
        {DB::Properties::kIsFileDeletionsEnabled,
         {false, nullptr, &InternalStats::HandleIsFileDeletionsEnabled, nullptr,
          nullptr}},
        {DB::Properties::kNumSnapshots,
         {false, nullptr, &InternalStats::HandleNumSnapshots, nullptr,
          nullptr}},
        {DB::Properties::kOldestSnapshotTime,
         {false, nullptr, &InternalStats::HandleOldestSnapshotTime, nullptr,
          nullptr}},
        {DB::Properties::kOldestSnapshotSequence,
         {false, nullptr, &InternalStats::HandleOldestSnapshotSequence, nullptr,
          nullptr}},
        {DB::Properties::kNumLiveVersions,
         {false, nullptr, &InternalStats::HandleNumLiveVersions, nullptr,
          nullptr}},
        {DB::Properties::kCurrentSuperVersionNumber,
         {false, nullptr, &InternalStats::HandleCurrentSuperVersionNumber,
          nullptr, nullptr}},
        {DB::Properties::kEstimateLiveDataSize,
         {true, nullptr, &InternalStats::HandleEstimateLiveDataSize, nullptr,
          nullptr}},
        {DB::Properties::kMinLogNumberToKeep,
         {false, nullptr, &InternalStats::HandleMinLogNumberToKeep, nullptr,
          nullptr}},
        {DB::Properties::kMinObsoleteSstNumberToKeep,
         {false, nullptr, &InternalStats::HandleMinObsoleteSstNumberToKeep,
          nullptr, nullptr}},
        {DB::Properties::kBaseLevel,
         {false, nullptr, &InternalStats::HandleBaseLevel, nullptr, nullptr}},
        {DB::Properties::kTotalSstFilesSize,
         {false, nullptr, &InternalStats::HandleTotalSstFilesSize, nullptr,
          nullptr}},
        {DB::Properties::kLiveSstFilesSize,
         {false, nullptr, &InternalStats::HandleLiveSstFilesSize, nullptr,
          nullptr}},
        {DB::Properties::kEstimatePendingCompactionBytes,
         {false, nullptr, &InternalStats::HandleEstimatePendingCompactionBytes,
          nullptr, nullptr}},
        {DB::Properties::kNumRunningFlushes,
         {false, nullptr, &InternalStats::HandleNumRunningFlushes, nullptr,
          nullptr}},
        {DB::Properties::kNumRunningCompactions,
         {false, nullptr, &InternalStats::HandleNumRunningCompactions, nullptr,
          nullptr}},
        {DB::Properties::kActualDelayedWriteRate,
         {false, nullptr, &InternalStats::HandleActualDelayedWriteRate, nullptr,
          nullptr}},
        {DB::Properties::kIsWriteStopped,
         {false, nullptr, &InternalStats::HandleIsWriteStopped, nullptr,
          nullptr}},
        {DB::Properties::kEstimateOldestKeyTime,
         {false, nullptr, &InternalStats::HandleEstimateOldestKeyTime, nullptr,
          nullptr}},
        {DB::Properties::kBlockCacheCapacity,
         {false, nullptr, &InternalStats::HandleBlockCacheCapacity, nullptr,
          nullptr}},
        {DB::Properties::kBlockCacheUsage,
         {false, nullptr, &InternalStats::HandleBlockCacheUsage, nullptr,
          nullptr}},
        {DB::Properties::kBlockCachePinnedUsage,
         {false, nullptr, &InternalStats::HandleBlockCachePinnedUsage, nullptr,
          nullptr}},
        {DB::Properties::kOptionsStatistics,
         {false, nullptr, nullptr, nullptr,
          &DBImpl::GetPropertyHandleOptionsStatistics}},
};
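
// Constructs per-column-family stats storage, sizing compaction stats by
// level and by background priority, and recording the construction time.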
InternalStats::InternalStats(int num_levels, SystemClock* clock,
                             ColumnFamilyData* cfd)
    : db_stats_{},
      cf_stats_value_{},
      cf_stats_count_{},
      comp_stats_(num_levels),
      comp_stats_by_pri_(Env::Priority::TOTAL),
      file_read_latency_(num_levels),
      bg_error_count_(0),
      number_levels_(num_levels),
      clock_(clock),
      cfd_(cfd),
      started_at_(clock->NowMicros()) {}

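// Refreshes cache_entry_stats_ from the shared CacheEntryStatsCollector,
// creating (and pinning in the block cache) the collector on first use.
// Foreground callers tolerate less staleness than the periodic background
// stats dump, so they use tighter interval bounds below.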
Status InternalStats::CollectCacheEntryStats(bool foreground) {
  // Lazy initialize/reference the collector. It is pinned in cache (through
  // a shared_ptr) so that it does not get immediately ejected from a full
  // cache, which would force a re-scan on the next GetStats.
  if (!cache_entry_stats_collector_) {
    Cache* block_cache;
    bool ok = HandleBlockCacheStat(&block_cache);
    if (ok) {
      // Extract or create stats collector.
      Status s = CacheEntryStatsCollector<CacheEntryRoleStats>::GetShared(
          block_cache, clock_, &cache_entry_stats_collector_);
      if (!s.ok()) {
        // Block cache likely under pressure. Scanning could make it worse,
        // so skip.
        return s;
      }
    } else {
      return Status::NotFound("block cache not configured");
    }
  }
  assert(cache_entry_stats_collector_);

  // For "background" collections, strictly cap the collection time by
  // expanding effective cache TTL. For foreground, be more aggressive about
  // getting latest data.
  int min_interval_seconds = foreground ? 10 : 180;
  // 1/500 = max of 0.2% of one CPU thread
  int min_interval_factor = foreground ? 10 : 500;
  cache_entry_stats_collector_->GetStats(
      &cache_entry_stats_, min_interval_seconds, min_interval_factor);
  return Status::OK();
}
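
// Returns the per-entry callback used while scanning the block cache. The
// deleter registered for each entry identifies its CacheEntryRole via
// role_map_; entries with an unrecognized deleter are attributed to kMisc.
// The callback accumulates a count and total charge per role.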
std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
InternalStats::CacheEntryRoleStats::GetEntryCallback() {
  return [&](const Slice& /*key*/, void* /*value*/, size_t charge,
             Cache::DeleterFn deleter) {
    auto e = role_map_.find(deleter);
    size_t role_idx;
    if (e == role_map_.end()) {
      role_idx = static_cast<size_t>(CacheEntryRole::kMisc);
    } else {
      role_idx = static_cast<size_t>(e->second);
    }
    entry_counts[role_idx]++;
    total_charges[role_idx] += charge;
  };
}

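// Collection lifecycle callbacks invoked by CacheEntryStatsCollector:
// BeginCollection resets the counters, snapshots the deleter-to-role map,
// and records the cache's identity and capacity; EndCollection timestamps
// the end of the scan; SkippedCollection counts requests that were served
// from saved results instead of triggering a new scan.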
void InternalStats::CacheEntryRoleStats::BeginCollection(
    Cache* cache, SystemClock*, uint64_t start_time_micros) {
  Clear();
  last_start_time_micros_ = start_time_micros;
  ++collection_count;
  role_map_ = CopyCacheDeleterRoleMap();
  std::ostringstream str;
  str << cache->Name() << "@" << static_cast<void*>(cache);
  cache_id = str.str();
  cache_capacity = cache->GetCapacity();
}

void InternalStats::CacheEntryRoleStats::EndCollection(
    Cache*, SystemClock*, uint64_t end_time_micros) {
  last_end_time_micros_ = end_time_micros;
}

void InternalStats::CacheEntryRoleStats::SkippedCollection() {
  ++copies_of_last_collection;
}

uint64_t InternalStats::CacheEntryRoleStats::GetLastDurationMicros() const {
  if (last_end_time_micros_ > last_start_time_micros_) {
    return last_end_time_micros_ - last_start_time_micros_;
  } else {
    return 0U;
  }
}

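// Renders the human-readable form written to the info LOG: a header line
// with cache id, capacity, and collection timing, followed by one
// (count,size,portion) tuple per role that has at least one entry.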
std::string InternalStats::CacheEntryRoleStats::ToString(
    SystemClock* clock) const {
  std::ostringstream str;
  str << "Block cache " << cache_id
      << " capacity: " << BytesToHumanString(cache_capacity)
      << " collections: " << collection_count
      << " last_copies: " << copies_of_last_collection
      << " last_secs: " << (GetLastDurationMicros() / 1000000.0)
      << " secs_since: "
      << ((clock->NowMicros() - last_end_time_micros_) / 1000000U) << "\n";
  str << "Block cache entry stats(count,size,portion):";
  for (size_t i = 0; i < kNumCacheEntryRoles; ++i) {
    if (entry_counts[i] > 0) {
      str << " " << kCacheEntryRoleToCamelString[i] << "(" << entry_counts[i]
          << "," << BytesToHumanString(total_charges[i]) << ","
          << (100.0 * total_charges[i] / cache_capacity) << "%)";
    }
  }
  str << "\n";
  return str.str();
}

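// Renders the flat map form returned through DB::GetMapProperty: cache id,
// capacity, and collection timing, plus "count.", "bytes.", and "percent."
// keys for every role, using the hyphenated role names.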
void InternalStats::CacheEntryRoleStats::ToMap(
    std::map<std::string, std::string>* values, SystemClock* clock) const {
  values->clear();
  auto& v = *values;
  v["id"] = cache_id;
  v["capacity"] = ROCKSDB_NAMESPACE::ToString(cache_capacity);
  v["secs_for_last_collection"] =
      ROCKSDB_NAMESPACE::ToString(GetLastDurationMicros() / 1000000.0);
  v["secs_since_last_collection"] = ROCKSDB_NAMESPACE::ToString(
      (clock->NowMicros() - last_end_time_micros_) / 1000000U);
  for (size_t i = 0; i < kNumCacheEntryRoles; ++i) {
    std::string role = kCacheEntryRoleToHyphenString[i];
    v["count." + role] = ROCKSDB_NAMESPACE::ToString(entry_counts[i]);
    v["bytes." + role] = ROCKSDB_NAMESPACE::ToString(total_charges[i]);
    v["percent." + role] =
        ROCKSDB_NAMESPACE::ToString(100.0 * total_charges[i] / cache_capacity);
  }
}

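// Property handlers for DB::Properties::kBlockCacheEntryStats. Both trigger
// a foreground collection (reusing recent results when available) and then
// format the stats; they return false if stats could not be collected, e.g.
// when no block cache is configured. As an illustrative sketch (call site
// not part of this file), a caller might read the map form via:
//   std::map<std::string, std::string> stats;
//   db->GetMapProperty(DB::Properties::kBlockCacheEntryStats, &stats);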
bool InternalStats::HandleBlockCacheEntryStats(std::string* value,
                                               Slice /*suffix*/) {
  Status s = CollectCacheEntryStats(/*foreground*/ true);
  if (!s.ok()) {
    return false;
  }
  *value = cache_entry_stats_.ToString(clock_);
  return true;
}

bool InternalStats::HandleBlockCacheEntryStatsMap(
    std::map<std::string, std::string>* values, Slice /*suffix*/) {
  Status s = CollectCacheEntryStats(/*foreground*/ true);
  if (!s.ok()) {
    return false;
  }
  cache_entry_stats_.ToMap(values, clock_);
  return true;
}

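// Looks up the handler table entry for a property. The property name may
// carry a trailing argument (e.g. a level number), which is stripped by
// GetPropertyNameAndArg before the lookup; unknown properties yield nullptr.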
const DBPropertyInfo* GetPropertyInfo(const Slice& property) {
  std::string ppt_name = GetPropertyNameAndArg(property).first.ToString();
  auto ppt_info_iter = InternalStats::ppt_name_to_info.find(ppt_name);
  if (ppt_info_iter == InternalStats::ppt_name_to_info.end()) {
    return nullptr;
  }
  return &ppt_info_iter->second;
}

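// Dispatch helpers: each forwards the property's argument (the portion of
// the name after the registered prefix) to the handler recorded in the
// DBPropertyInfo entry.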
bool InternalStats::GetStringProperty(const DBPropertyInfo& property_info,
                                      const Slice& property,
                                      std::string* value) {
  assert(value != nullptr);
  assert(property_info.handle_string != nullptr);
  Slice arg = GetPropertyNameAndArg(property).second;
  return (this->*(property_info.handle_string))(value, arg);
}

bool InternalStats::GetMapProperty(const DBPropertyInfo& property_info,
                                   const Slice& property,
                                   std::map<std::string, std::string>* value) {
  assert(value != nullptr);
  assert(property_info.handle_map != nullptr);
  Slice arg = GetPropertyNameAndArg(property).second;
  return (this->*(property_info.handle_map))(value, arg);
}

Eliminate duplicated property constants
Summary:
Before this diff, there were duplicated constants to refer to properties (user-
facing API had strings and InternalStats had an enum). I noticed these were
inconsistent in terms of which constants are provided, names of constants, and
documentation of constants. Overall it seemed annoying/error-prone to maintain
these duplicated constants.
So, this diff gets rid of InternalStats's constants and replaces them with a map
keyed on the user-facing constant. The value in that map contains a function
pointer to get the property value, so we don't need to do string matching while
holding db->mutex_. This approach has a side benefit of making many small
handler functions rather than a giant switch-statement.
Test Plan: db_properties_test passes, running "make commit-prereq -j32"
Reviewers: sdong, yhchiang, kradhakrishnan, IslamAbdelRahman, rven, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53253
9 years ago
|
|
|
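// Integer properties come in two flavors: GetIntProperty() runs with
// db->mutex_ held, while properties flagged need_out_of_mutex are served by
// GetIntPropertyOutOfMutex() against a pinned Version without taking the DB
// mutex.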
bool InternalStats::GetIntProperty(const DBPropertyInfo& property_info,
                                   uint64_t* value, DBImpl* db) {
  assert(value != nullptr);
  assert(property_info.handle_int != nullptr &&
         !property_info.need_out_of_mutex);
  db->mutex_.AssertHeld();
  return (this->*(property_info.handle_int))(value, db, nullptr /* version */);
}

bool InternalStats::GetIntPropertyOutOfMutex(
    const DBPropertyInfo& property_info, Version* version, uint64_t* value) {
  assert(value != nullptr);
  assert(property_info.handle_int != nullptr &&
         property_info.need_out_of_mutex);
  return (this->*(property_info.handle_int))(value, nullptr /* db */, version);
}

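// Illustrative sketch (not part of this file): applications normally reach
// the handlers below through the public DB API, where the property string is
// looked up in ppt_name_to_info and dispatched as above. For example:
//
//   uint64_t num_keys = 0;
//   db->GetIntProperty("rocksdb.estimate-num-keys", &num_keys);
//
//   std::string files_at_l0;
//   db->GetProperty("rocksdb.num-files-at-level0", &files_at_l0);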
bool InternalStats::HandleNumFilesAtLevel(std::string* value, Slice suffix) {
  uint64_t level;
  const auto* vstorage = cfd_->current()->storage_info();
  bool ok = ConsumeDecimalNumber(&suffix, &level) && suffix.empty();
  if (!ok || static_cast<int>(level) >= number_levels_) {
    return false;
  } else {
    char buf[100];
    snprintf(buf, sizeof(buf), "%d",
             vstorage->NumLevelFiles(static_cast<int>(level)));
    *value = buf;
    return true;
  }
}

bool InternalStats::HandleCompressionRatioAtLevelPrefix(std::string* value,
                                                        Slice suffix) {
  uint64_t level;
  const auto* vstorage = cfd_->current()->storage_info();
  bool ok = ConsumeDecimalNumber(&suffix, &level) && suffix.empty();
  if (!ok || level >= static_cast<uint64_t>(number_levels_)) {
    return false;
  }
  *value = ToString(
      vstorage->GetEstimatedCompressionRatioAtLevel(static_cast<int>(level)));
  return true;
}

bool InternalStats::HandleLevelStats(std::string* value, Slice /*suffix*/) {
  char buf[1000];
  const auto* vstorage = cfd_->current()->storage_info();
  snprintf(buf, sizeof(buf),
           "Level Files Size(MB)\n"
           "--------------------\n");
  value->append(buf);

  for (int level = 0; level < number_levels_; level++) {
    snprintf(buf, sizeof(buf), "%3d %8d %8.0f\n", level,
             vstorage->NumLevelFiles(level),
             vstorage->NumLevelBytes(level) / kMB);
    value->append(buf);
  }
  return true;
}

bool InternalStats::HandleStats(std::string* value, Slice suffix) {
  if (!HandleCFStats(value, suffix)) {
    return false;
  }
  if (!HandleDBStats(value, suffix)) {
    return false;
  }
  return true;
}

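// Illustrative sketch (not part of this file): the map-valued handlers below
// back DB::GetMapProperty(), e.g.
//
//   std::map<std::string, std::string> cf_stats;
//   db->GetMapProperty("rocksdb.cfstats", &cf_stats);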
bool InternalStats::HandleCFMapStats(
    std::map<std::string, std::string>* cf_stats, Slice /*suffix*/) {
  DumpCFMapStats(cf_stats);
  return true;
}

bool InternalStats::HandleCFStats(std::string* value, Slice /*suffix*/) {
  DumpCFStats(value);
  return true;
}

bool InternalStats::HandleCFStatsNoFileHistogram(std::string* value,
                                                 Slice /*suffix*/) {
  DumpCFStatsNoFileHistogram(value);
  return true;
}

bool InternalStats::HandleCFFileHistogram(std::string* value,
                                          Slice /*suffix*/) {
  DumpCFFileHistogram(value);
  return true;
}

bool InternalStats::HandleDBStats(std::string* value, Slice /*suffix*/) {
  DumpDBStats(value);
  return true;
}

bool InternalStats::HandleSsTables(std::string* value, Slice /*suffix*/) {
  auto* current = cfd_->current();
  *value = current->DebugString(true, true);
  return true;
}

bool InternalStats::HandleAggregatedTableProperties(std::string* value,
                                                    Slice /*suffix*/) {
  std::shared_ptr<const TableProperties> tp;
  auto s = cfd_->current()->GetAggregatedTableProperties(&tp);
  if (!s.ok()) {
    return false;
  }
  *value = tp->ToString();
  return true;
}

static std::map<std::string, std::string> MapUint64ValuesToString(
    const std::map<std::string, uint64_t>& from) {
  std::map<std::string, std::string> to;
  for (const auto& e : from) {
    to[e.first] = ToString(e.second);
  }
  return to;
}

bool InternalStats::HandleAggregatedTablePropertiesMap(
    std::map<std::string, std::string>* values, Slice /*suffix*/) {
  std::shared_ptr<const TableProperties> tp;
  auto s = cfd_->current()->GetAggregatedTableProperties(&tp);
  if (!s.ok()) {
    return false;
  }
  *values = MapUint64ValuesToString(tp->GetAggregatablePropertiesAsMap());
  return true;
}

bool InternalStats::HandleAggregatedTablePropertiesAtLevel(std::string* values,
                                                           Slice suffix) {
  uint64_t level;
  bool ok = ConsumeDecimalNumber(&suffix, &level) && suffix.empty();
  if (!ok || static_cast<int>(level) >= number_levels_) {
    return false;
  }
  std::shared_ptr<const TableProperties> tp;
  auto s = cfd_->current()->GetAggregatedTableProperties(
      &tp, static_cast<int>(level));
  if (!s.ok()) {
    return false;
  }
  *values = tp->ToString();
  return true;
}

bool InternalStats::HandleAggregatedTablePropertiesAtLevelMap(
    std::map<std::string, std::string>* values, Slice suffix) {
  uint64_t level;
  bool ok = ConsumeDecimalNumber(&suffix, &level) && suffix.empty();
  if (!ok || static_cast<int>(level) >= number_levels_) {
    return false;
  }
  std::shared_ptr<const TableProperties> tp;
  auto s = cfd_->current()->GetAggregatedTableProperties(
      &tp, static_cast<int>(level));
  if (!s.ok()) {
    return false;
  }
  *values = MapUint64ValuesToString(tp->GetAggregatablePropertiesAsMap());
  return true;
}

bool InternalStats::HandleNumImmutableMemTable(uint64_t* value, DBImpl* /*db*/,
                                               Version* /*version*/) {
  *value = cfd_->imm()->NumNotFlushed();
  return true;
}

bool InternalStats::HandleNumImmutableMemTableFlushed(uint64_t* value,
                                                      DBImpl* /*db*/,
                                                      Version* /*version*/) {
  *value = cfd_->imm()->NumFlushed();
  return true;
}

bool InternalStats::HandleMemTableFlushPending(uint64_t* value, DBImpl* /*db*/,
                                               Version* /*version*/) {
  *value = (cfd_->imm()->IsFlushPending() ? 1 : 0);
  return true;
}

bool InternalStats::HandleNumRunningFlushes(uint64_t* value, DBImpl* db,
                                            Version* /*version*/) {
  *value = db->num_running_flushes();
  return true;
}

bool InternalStats::HandleCompactionPending(uint64_t* value, DBImpl* /*db*/,
                                            Version* /*version*/) {
  // 1 if the system already determines at least one compaction is needed.
  // 0 otherwise.
  const auto* vstorage = cfd_->current()->storage_info();
  *value = (cfd_->compaction_picker()->NeedsCompaction(vstorage) ? 1 : 0);
  return true;
}

bool InternalStats::HandleNumRunningCompactions(uint64_t* value, DBImpl* db,
                                                Version* /*version*/) {
  *value = db->num_running_compactions_;
  return true;
}

bool InternalStats::HandleBackgroundErrors(uint64_t* value, DBImpl* /*db*/,
                                           Version* /*version*/) {
  // Accumulated number of errors in background flushes or compactions.
  *value = GetBackgroundErrorCount();
  return true;
}

bool InternalStats::HandleCurSizeActiveMemTable(uint64_t* value, DBImpl* /*db*/,
                                                Version* /*version*/) {
  // Current size of the active memtable
  // Using ApproximateMemoryUsageFast to avoid the need for synchronization
  *value = cfd_->mem()->ApproximateMemoryUsageFast();
  return true;
}

bool InternalStats::HandleCurSizeAllMemTables(uint64_t* value, DBImpl* /*db*/,
                                              Version* /*version*/) {
  // Current size of the active memtable + immutable memtables
  // Using ApproximateMemoryUsageFast to avoid the need for synchronization
  *value = cfd_->mem()->ApproximateMemoryUsageFast() +
           cfd_->imm()->ApproximateUnflushedMemTablesMemoryUsage();
  return true;
}

bool InternalStats::HandleSizeAllMemTables(uint64_t* value, DBImpl* /*db*/,
                                           Version* /*version*/) {
  // Using ApproximateMemoryUsageFast to avoid the need for synchronization
  *value = cfd_->mem()->ApproximateMemoryUsageFast() +
           cfd_->imm()->ApproximateMemoryUsage();
  return true;
}

bool InternalStats::HandleNumEntriesActiveMemTable(uint64_t* value,
                                                   DBImpl* /*db*/,
                                                   Version* /*version*/) {
  // Current number of entries in the active memtable
  *value = cfd_->mem()->num_entries();
  return true;
}

bool InternalStats::HandleNumEntriesImmMemTables(uint64_t* value,
                                                 DBImpl* /*db*/,
                                                 Version* /*version*/) {
  // Current number of entries in the immutable memtables
  *value = cfd_->imm()->current()->GetTotalNumEntries();
  return true;
}

bool InternalStats::HandleNumDeletesActiveMemTable(uint64_t* value,
                                                   DBImpl* /*db*/,
                                                   Version* /*version*/) {
  // Current number of delete entries in the active memtable
  *value = cfd_->mem()->num_deletes();
  return true;
}

bool InternalStats::HandleNumDeletesImmMemTables(uint64_t* value,
                                                 DBImpl* /*db*/,
                                                 Version* /*version*/) {
  // Current number of delete entries in the immutable memtables
  *value = cfd_->imm()->current()->GetTotalNumDeletes();
  return true;
}

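// The estimate below assumes each delete cancels one existing key; since the
// deletion entry itself is also counted in estimate_keys, two is subtracted
// per delete (clamped at zero).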
bool InternalStats::HandleEstimateNumKeys(uint64_t* value, DBImpl* /*db*/,
                                          Version* /*version*/) {
  // Estimate number of entries in the column family:
  // Use estimated entries in tables + total entries in memtables.
  const auto* vstorage = cfd_->current()->storage_info();
  uint64_t estimate_keys = cfd_->mem()->num_entries() +
                           cfd_->imm()->current()->GetTotalNumEntries() +
                           vstorage->GetEstimatedActiveKeys();
  uint64_t estimate_deletes =
      cfd_->mem()->num_deletes() + cfd_->imm()->current()->GetTotalNumDeletes();
  *value = estimate_keys > estimate_deletes * 2
               ? estimate_keys - (estimate_deletes * 2)
               : 0;
  return true;
}

bool InternalStats::HandleNumSnapshots(uint64_t* value, DBImpl* db,
                                       Version* /*version*/) {
  *value = db->snapshots().count();
  return true;
}

bool InternalStats::HandleOldestSnapshotTime(uint64_t* value, DBImpl* db,
                                             Version* /*version*/) {
  *value = static_cast<uint64_t>(db->snapshots().GetOldestSnapshotTime());
  return true;
}

bool InternalStats::HandleOldestSnapshotSequence(uint64_t* value, DBImpl* db,
                                                 Version* /*version*/) {
  *value = static_cast<uint64_t>(db->snapshots().GetOldestSnapshotSequence());
  return true;
}

bool InternalStats::HandleNumLiveVersions(uint64_t* value, DBImpl* /*db*/,
                                          Version* /*version*/) {
  *value = cfd_->GetNumLiveVersions();
  return true;
}

bool InternalStats::HandleCurrentSuperVersionNumber(uint64_t* value,
                                                    DBImpl* /*db*/,
                                                    Version* /*version*/) {
  *value = cfd_->GetSuperVersionNumber();
  return true;
}

bool InternalStats::HandleIsFileDeletionsEnabled(uint64_t* value, DBImpl* db,
                                                 Version* /*version*/) {
  *value = db->IsFileDeletionsEnabled() ? 1 : 0;
  return true;
}

bool InternalStats::HandleBaseLevel(uint64_t* value, DBImpl* /*db*/,
|
|
|
|
Version* /*version*/) {
|
Eliminate duplicated property constants
Summary:
Before this diff, there were duplicated constants to refer to properties (user-
facing API had strings and InternalStats had an enum). I noticed these were
inconsistent in terms of which constants are provided, names of constants, and
documentation of constants. Overall it seemed annoying/error-prone to maintain
these duplicated constants.
So, this diff gets rid of InternalStats's constants and replaces them with a map
keyed on the user-facing constant. The value in that map contains a function
pointer to get the property value, so we don't need to do string matching while
holding db->mutex_. This approach has a side benefit of making many small
handler functions rather than a giant switch-statement.
Test Plan: db_properties_test passes, running "make commit-prereq -j32"
Reviewers: sdong, yhchiang, kradhakrishnan, IslamAbdelRahman, rven, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53253
9 years ago
|
|
|
const auto* vstorage = cfd_->current()->storage_info();
|
|
|
|
*value = vstorage->base_level();
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
bool InternalStats::HandleTotalSstFilesSize(uint64_t* value, DBImpl* /*db*/,
|
|
|
|
Version* /*version*/) {
|
  *value = cfd_->GetTotalSstFilesSize();
  return true;
}

bool InternalStats::HandleLiveSstFilesSize(uint64_t* value, DBImpl* /*db*/,
                                           Version* /*version*/) {
  *value = cfd_->GetLiveSstFilesSize();
  return true;
}

bool InternalStats::HandleEstimatePendingCompactionBytes(uint64_t* value,
                                                         DBImpl* /*db*/,
                                                         Version* /*version*/) {
  const auto* vstorage = cfd_->current()->storage_info();
  *value = vstorage->estimated_compaction_needed_bytes();
  return true;
}

bool InternalStats::HandleEstimateTableReadersMem(uint64_t* value,
                                                  DBImpl* /*db*/,
                                                  Version* version) {
  *value = (version == nullptr) ? 0 : version->GetMemoryUsageByTableReaders();
  return true;
}

bool InternalStats::HandleEstimateLiveDataSize(uint64_t* value, DBImpl* /*db*/,
Fix VersionStorageInfo::EstimateLiveDataSize seg fault
Summary:
`HandleEstimateLiveDataSize`'s `need_out_of_mutex` is true
https://github.com/facebook/rocksdb/blob/402b7aa07f0e6da4c1f0216ff2b2e50fd2e5eaac/db/internal_stats.cc#L412-L413
so it will take a reference on a `SuperVersion`
https://github.com/facebook/rocksdb/blob/402b7aa07f0e6da4c1f0216ff2b2e50fd2e5eaac/db/db_impl.cc#L1896-L1908
so the `version` param of `InternalStats::HandleEstimateLiveDataSize` is safe, but `cfd_->current()` is not!
https://github.com/facebook/rocksdb/blob/402b7aa07f0e6da4c1f0216ff2b2e50fd2e5eaac/db/internal_stats.cc#L790-L795
`cfd_->current()` may be invalid by then.
Here is the mongo-rocks crash backtrace:
```
mongod(mongo::printStackTrace(std::basic_ostream<char, std::char_traits<char> >&)+0x41) [0x7fe3a3137c51]
mongod(+0x2152E89) [0x7fe3a3136e89]
mongod(+0x21534F6) [0x7fe3a31374f6]
libpthread.so.0(+0xF5E0) [0x7fe39f5e45e0]
mongod(rocksdb::InternalKeyComparator::Compare(rocksdb::Slice const&, rocksdb::Slice const&) const+0x17) [0x7fe3a22375a7]
mongod(rocksdb::VersionStorageInfo::EstimateLiveDataSize() const+0x3AA) [0x7fe3a228daba]
mongod(rocksdb::InternalStats::HandleEstimateLiveDataSize(unsigned long*, rocksdb::DBImpl*, rocksdb::Version*)+0x20) [0x7fe3a2250d70]
mongod(rocksdb::DBImpl::GetIntPropertyInternal(rocksdb::ColumnFamilyData*, rocksdb::DBPropertyInfo const&, bool, unsigned long*)+0xEF) [0x7fe3a21e3dbf]
```
Closes https://github.com/facebook/rocksdb/pull/3912
Differential Revision: D8179944
Pulled By: yiwu-arbug
fbshipit-source-id: 26f314a8f98f4c2dc4348745d759f26f0e8d95e1
7 years ago
                                               Version* version) {
  const auto* vstorage = version->storage_info();
  *value = vstorage->EstimateLiveDataSize();
  return true;
}
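Per the seg-fault fix described above, an out-of-mutex handler like this one must read only the `Version*` argument, which the caller pinned via a `SuperVersion` reference; `cfd_->current()` can be replaced and freed concurrently. A two-line illustrative contrast, not part of the original file:

```cpp
// Inside an out-of-mutex property handler:
//   const auto* safe_info = version->storage_info();            // pinned by caller
//   const auto* racy_info = cfd_->current()->storage_info();    // may dangle
```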

bool InternalStats::HandleMinLogNumberToKeep(uint64_t* value, DBImpl* db,
                                             Version* /*version*/) {
  *value = db->MinLogNumberToKeep();
  return true;
}

bool InternalStats::HandleMinObsoleteSstNumberToKeep(uint64_t* value,
                                                     DBImpl* db,
                                                     Version* /*version*/) {
  *value = db->MinObsoleteSstNumberToKeep();
  return true;
}

bool InternalStats::HandleActualDelayedWriteRate(uint64_t* value, DBImpl* db,
                                                 Version* /*version*/) {
  const WriteController& wc = db->write_controller();
  if (!wc.NeedsDelay()) {
    *value = 0;
  } else {
    *value = wc.delayed_write_rate();
  }
  return true;
}

bool InternalStats::HandleIsWriteStopped(uint64_t* value, DBImpl* db,
                                         Version* /*version*/) {
  *value = db->write_controller().IsStopped() ? 1 : 0;
  return true;
}

bool InternalStats::HandleEstimateOldestKeyTime(uint64_t* value, DBImpl* /*db*/,
                                                Version* /*version*/) {
  // TODO(yiwu): The property is currently available for fifo compaction
  // with allow_compaction = false. This is because we don't propagate
  // oldest_key_time on compaction.
  if (cfd_->ioptions()->compaction_style != kCompactionStyleFIFO ||
      cfd_->GetCurrentMutableCFOptions()
          ->compaction_options_fifo.allow_compaction) {
    return false;
  }

  TablePropertiesCollection collection;
  auto s = cfd_->current()->GetPropertiesOfAllTables(&collection);
  if (!s.ok()) {
    return false;
  }
  *value = std::numeric_limits<uint64_t>::max();
  for (auto& p : collection) {
    *value = std::min(*value, p.second->oldest_key_time);
    if (*value == 0) {
      break;
    }
  }
  if (*value > 0) {
    *value = std::min({cfd_->mem()->ApproximateOldestKeyTime(),
                       cfd_->imm()->ApproximateOldestKeyTime(), *value});
  }
  return *value > 0 && *value < std::numeric_limits<uint64_t>::max();
}

bool InternalStats::HandleBlockCacheStat(Cache** block_cache) {
  assert(block_cache != nullptr);
  auto* table_factory = cfd_->ioptions()->table_factory.get();
  assert(table_factory != nullptr);
  *block_cache =
      table_factory->GetOptions<Cache>(TableFactory::kBlockCacheOpts());
  return *block_cache != nullptr;
}

bool InternalStats::HandleBlockCacheCapacity(uint64_t* value, DBImpl* /*db*/,
                                             Version* /*version*/) {
  Cache* block_cache;
  bool ok = HandleBlockCacheStat(&block_cache);
  if (!ok) {
    return false;
  }
  *value = static_cast<uint64_t>(block_cache->GetCapacity());
  return true;
}

bool InternalStats::HandleBlockCacheUsage(uint64_t* value, DBImpl* /*db*/,
                                          Version* /*version*/) {
  Cache* block_cache;
  bool ok = HandleBlockCacheStat(&block_cache);
  if (!ok) {
    return false;
  }
  *value = static_cast<uint64_t>(block_cache->GetUsage());
  return true;
}

bool InternalStats::HandleBlockCachePinnedUsage(uint64_t* value, DBImpl* /*db*/,
                                                Version* /*version*/) {
  Cache* block_cache;
  bool ok = HandleBlockCacheStat(&block_cache);
  if (!ok) {
    return false;
  }
  *value = static_cast<uint64_t>(block_cache->GetPinnedUsage());
  return true;
}
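The three handlers above back the block-cache integer properties. A minimal usage sketch follows; it assumes `db` is an open `rocksdb::DB*` and uses the standard property constants from `rocksdb/db.h` (check the header for your RocksDB version).

```cpp
#include <cstdint>
#include <cstdio>
#include "rocksdb/db.h"

// Read block cache stats through the public property API; each call ends up
// in one of the HandleBlockCache* handlers above.
void LogBlockCacheNumbers(rocksdb::DB* db) {
  uint64_t capacity = 0, usage = 0, pinned = 0;
  db->GetIntProperty(rocksdb::DB::Properties::kBlockCacheCapacity, &capacity);
  db->GetIntProperty(rocksdb::DB::Properties::kBlockCacheUsage, &usage);
  db->GetIntProperty(rocksdb::DB::Properties::kBlockCachePinnedUsage, &pinned);
  fprintf(stderr, "block cache: capacity=%llu usage=%llu pinned=%llu\n",
          (unsigned long long)capacity, (unsigned long long)usage,
          (unsigned long long)pinned);
}
```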
make internal stats independent of statistics
Summary:
also make it aware of column family
output from db_bench
```
** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 14 956 0.9 0.0 0.0 0.0 2.7 2.7 0.0 0.0 0.0 111.6 0 0 0 0 24 40 0.612 75.20 492387 0.15
L1 21 2001 2.0 5.7 2.0 3.7 5.3 1.6 5.4 2.6 71.2 65.7 31 43 55 12 82 2 41.242 43.72 41183 1.06
L2 217 18974 1.9 16.5 2.0 14.4 15.1 0.7 15.6 7.4 70.1 64.3 17 182 185 3 241 16 15.052 0.00 0 0.00
L3 1641 188245 1.8 9.1 1.1 8.0 8.5 0.5 15.4 7.4 61.3 57.2 9 75 76 1 152 9 16.887 0.00 0 0.00
L4 4447 449025 0.4 13.4 4.8 8.6 9.1 0.5 4.7 1.9 77.8 52.7 38 79 100 21 176 38 4.639 0.00 0 0.00
Sum 6340 659201 0.0 44.7 10.0 34.7 40.6 6.0 32.0 15.2 67.7 61.6 95 379 416 37 676 105 6.439 118.91 533570 0.22
Int 0 0 0.0 1.2 0.4 0.8 1.3 0.5 5.2 2.7 59.1 65.6 3 7 9 2 20 10 2.003 0.00 0 0.00
Stalls(secs): 75.197 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 43.717 leveln_slowdown
Stalls(count): 492387 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 41183 leveln_slowdown
** DB Stats **
Uptime(secs): 202.1 total, 13.5 interval
Cumulative writes: 6291456 writes, 6291456 batches, 1.0 writes per batch, 4.90 ingest GB
Cumulative WAL: 6291456 writes, 6291456 syncs, 1.00 writes per sync, 4.90 GB written
Interval writes: 1048576 writes, 1048576 batches, 1.0 writes per batch, 836.0 ingest MB
Interval WAL: 1048576 writes, 1048576 syncs, 1.00 writes per sync, 0.82 MB written
Test Plan: ran it
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19917
10 years ago
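DumpDBStats below is what produces the "** DB Stats **" block shown in the commit message above. A minimal sketch of reaching it through the property API, assuming an open DB handle and the stock property name (DB::Properties::kDBStats, i.e. "rocksdb.dbstats"):

```cpp
#include <string>
#include "rocksdb/db.h"

// Fetch the human-readable DB stats text that DumpDBStats formats.
std::string GetDbStatsText(rocksdb::DB* db) {
  std::string stats;
  db->GetProperty(rocksdb::DB::Properties::kDBStats, &stats);
  return stats;
}
```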
void InternalStats::DumpDBStats(std::string* value) {
  char buf[1000];
  // DB-level stats, only available from default column family
  double seconds_up = (clock_->NowMicros() - started_at_ + 1) / kMicrosInSec;
  double interval_seconds_up = seconds_up - db_stats_snapshot_.seconds_up;
  snprintf(buf, sizeof(buf),
           "\n** DB Stats **\nUptime(secs): %.1f total, %.1f interval\n",
           seconds_up, interval_seconds_up);
  value->append(buf);
  // Cumulative
  uint64_t user_bytes_written =
      GetDBStats(InternalStats::kIntStatsBytesWritten);
  uint64_t num_keys_written =
      GetDBStats(InternalStats::kIntStatsNumKeysWritten);
  uint64_t write_other = GetDBStats(InternalStats::kIntStatsWriteDoneByOther);
  uint64_t write_self = GetDBStats(InternalStats::kIntStatsWriteDoneBySelf);
  uint64_t wal_bytes = GetDBStats(InternalStats::kIntStatsWalFileBytes);
  uint64_t wal_synced = GetDBStats(InternalStats::kIntStatsWalFileSynced);
  uint64_t write_with_wal = GetDBStats(InternalStats::kIntStatsWriteWithWal);
  uint64_t write_stall_micros =
      GetDBStats(InternalStats::kIntStatsWriteStallMicros);
Fix a bug in stall time counter. Improve its output format.
Summary: Fix a bug in stall time counter. Improve its output format.
Test Plan:
export ROCKSDB_TESTS=Timeout
./db_test
./db_bench --benchmarks=fillrandom --stats_interval=10000 --statistics=true --stats_per_interval=1 --num=1000000 --threads=4 --level0_stop_writes_trigger=3 --level0_slowdown_writes_trigger=2
sample output:
Uptime(secs): 35.8 total, 0.0 interval
Cumulative writes: 359590 writes, 359589 keys, 183047 batches, 2.0 writes per batch, 0.04 GB user ingest, stall seconds: 1786.008 ms
Cumulative WAL: 359591 writes, 183046 syncs, 1.96 writes per sync, 0.04 GB written
Interval writes: 253 writes, 253 keys, 128 batches, 2.0 writes per batch, 0.0 MB user ingest, stall time: 0 us
Interval WAL: 253 writes, 128 syncs, 1.96 writes per sync, 0.00 MB written
Reviewers: MarkCallaghan, igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D34275
10 years ago
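As a sanity check of the percentage formula used below (micros / 10000.0 / seconds_up), the sample output in the commit message above reports roughly 1,786,008 stall microseconds over 35.8 seconds of uptime: 1786008 / 10000.0 / 35.8 ≈ 5.0, i.e. about 5 percent of the uptime was spent stalled. Dividing by 1,000,000 converts microseconds to seconds and multiplying by 100 converts the fraction to a percentage, which is where the combined 10,000 divisor comes from.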
  const int kHumanMicrosLen = 32;
  char human_micros[kHumanMicrosLen];

  // Data
  // writes: total number of write requests.
  // keys: total number of key updates issued by all the write requests
  // commit groups: number of group commits issued to the DB. Each group can
  //                contain one or more writes.
  // so keys/writes is the average number of puts per write request
  // (multi-put or put), and writes/commit groups is the average group commit
  // size.
  //
  // The format is the same for interval stats.
  snprintf(buf, sizeof(buf),
           "Cumulative writes: %s writes, %s keys, %s commit groups, "
           "%.1f writes per commit group, ingest: %.2f GB, %.2f MB/s\n",
           NumberToHumanString(write_other + write_self).c_str(),
           NumberToHumanString(num_keys_written).c_str(),
           NumberToHumanString(write_self).c_str(),
           (write_other + write_self) / static_cast<double>(write_self + 1),
           user_bytes_written / kGB, user_bytes_written / kMB / seconds_up);
  value->append(buf);
  // WAL
  snprintf(buf, sizeof(buf),
           "Cumulative WAL: %s writes, %s syncs, "
           "%.2f writes per sync, written: %.2f GB, %.2f MB/s\n",
           NumberToHumanString(write_with_wal).c_str(),
           NumberToHumanString(wal_synced).c_str(),
           write_with_wal / static_cast<double>(wal_synced + 1),
           wal_bytes / kGB, wal_bytes / kMB / seconds_up);
  value->append(buf);
  // Stall
  AppendHumanMicros(write_stall_micros, human_micros, kHumanMicrosLen, true);
  snprintf(buf, sizeof(buf), "Cumulative stall: %s, %.1f percent\n",
           human_micros,
           // 10000 = divide by 1M to get secs, then multiply by 100 for pct
           write_stall_micros / 10000.0 / std::max(seconds_up, 0.001));
  value->append(buf);

  // Interval
  uint64_t interval_write_other = write_other - db_stats_snapshot_.write_other;
  uint64_t interval_write_self = write_self - db_stats_snapshot_.write_self;
  uint64_t interval_num_keys_written =
      num_keys_written - db_stats_snapshot_.num_keys_written;
  snprintf(
      buf, sizeof(buf),
      "Interval writes: %s writes, %s keys, %s commit groups, "
      "%.1f writes per commit group, ingest: %.2f MB, %.2f MB/s\n",
      NumberToHumanString(interval_write_other + interval_write_self).c_str(),
      NumberToHumanString(interval_num_keys_written).c_str(),
      NumberToHumanString(interval_write_self).c_str(),
      static_cast<double>(interval_write_other + interval_write_self) /
          (interval_write_self + 1),
      (user_bytes_written - db_stats_snapshot_.ingest_bytes) / kMB,
      (user_bytes_written - db_stats_snapshot_.ingest_bytes) / kMB /
          std::max(interval_seconds_up, 0.001));
  value->append(buf);

  uint64_t interval_write_with_wal =
      write_with_wal - db_stats_snapshot_.write_with_wal;
  uint64_t interval_wal_synced = wal_synced - db_stats_snapshot_.wal_synced;
  uint64_t interval_wal_bytes = wal_bytes - db_stats_snapshot_.wal_bytes;

  snprintf(
      buf, sizeof(buf),
      "Interval WAL: %s writes, %s syncs, "
Fix "Interval WAL" bytes to say GB instead of MB (#8350)
Summary:
Reference: https://github.com/facebook/rocksdb/issues/7201
Before fix:
`/tmp/rocksdb_test_file/LOG.old.1622492586055679:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s`
After fix:
`/tmp/rocksdb_test_file/LOG:Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s`
Tests:
```
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 0 AVG: 0.05s local:0/7720/100%/0.0s
rm -rf /dev/shm/rocksdb.CLRh
/usr/bin/python3 tools/check_all_python.py
No syntax errors in 34 .py files
/usr/bin/python3 tools/ldb_test.py
Running testCheckConsistency...
.Running testColumnFamilies...
.Running testCountDelimDump...
.Running testCountDelimIDump...
.Running testDumpLiveFiles...
.Running testDumpLoad...
Warning: 7 bad lines ignored.
.Running testGetProperty...
.Running testHexPutGet...
.Running testIDumpBasics...
.Running testIngestExternalSst...
.Running testInvalidCmdLines...
.Running testListColumnFamilies...
.Running testManifestDump...
.Running testMiscAdminTask...
Sequence,Count,ByteSize,Physical Offset,Key(s)
.Running testSSTDump...
.Running testSimpleStringPutGet...
.Running testStringBatchPut...
.Running testTtlPutGet...
.Running testWALDump...
.
----------------------------------------------------------------------
Ran 19 tests in 15.945s
OK
sh tools/rocksdb_dump_test.sh
make check-format
make[1]: Entering directory '/home/piydatta/Documents/rocksdb'
$DEBUG_LEVEL is 1
Makefile:176: Warning: Compiling in debug mode. Don't use the resulting binary in production
build_tools/format-diff.sh -c
Checking format of uncommitted changes...
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8350
Reviewed By: jay-zhuang
Differential Revision: D28790567
Pulled By: zhichao-cao
fbshipit-source-id: dcb1e4c124361156435122f21f0a288335b2c8c8
3 years ago
      "%.2f writes per sync, written: %.2f GB, %.2f MB/s\n",
      NumberToHumanString(interval_write_with_wal).c_str(),
      NumberToHumanString(interval_wal_synced).c_str(),
      interval_write_with_wal / static_cast<double>(interval_wal_synced + 1),
      interval_wal_bytes / kGB,
      interval_wal_bytes / kMB / std::max(interval_seconds_up, 0.001));
  value->append(buf);
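With the label fix above in place, this interval WAL line divides the byte count by kGB for the "written" figure and by kMB per second for the rate. Using the ten-year-old sample output shown earlier (0.82 MB written over a 13.5 second interval), it would print "written: 0.00 GB, 0.06 MB/s", which is why the unit in the label mattered.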

  // Stall
  AppendHumanMicros(write_stall_micros - db_stats_snapshot_.write_stall_micros,
                    human_micros, kHumanMicrosLen, true);
  snprintf(buf, sizeof(buf), "Interval stall: %s, %.1f percent\n", human_micros,
           // 10000 = divide by 1M to get secs, then multiply by 100 for pct
           (write_stall_micros - db_stats_snapshot_.write_stall_micros) /
               10000.0 / std::max(interval_seconds_up, 0.001));
  value->append(buf);

  db_stats_snapshot_.seconds_up = seconds_up;
  db_stats_snapshot_.ingest_bytes = user_bytes_written;
  db_stats_snapshot_.write_other = write_other;
  db_stats_snapshot_.write_self = write_self;
  db_stats_snapshot_.num_keys_written = num_keys_written;
  db_stats_snapshot_.wal_bytes = wal_bytes;
  db_stats_snapshot_.wal_synced = wal_synced;
  db_stats_snapshot_.write_with_wal = write_with_wal;
DB Stats Dump to print total stall time
Summary:
Add printing of stall time in DB Stats:
Sample outputs:
** DB Stats **
Uptime(secs): 53.2 total, 1.7 interval
Cumulative writes: 625940 writes, 625939 keys, 625940 batches, 1.0 writes per batch, 0.49 GB user ingest, stall micros: 50691070
Cumulative WAL: 625940 writes, 625939 syncs, 1.00 writes per sync, 0.49 GB written
Interval writes: 10859 writes, 10859 keys, 10859 batches, 1.0 writes per batch, 8.7 MB user ingest, stall micros: 1692319
Interval WAL: 10859 writes, 10859 syncs, 1.00 writes per sync, 0.01 MB written
Test Plan:
make all check
verify printing using db_bench
Reviewers: igor, yhchiang, rven, MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D31239
10 years ago
  db_stats_snapshot_.write_stall_micros = write_stall_micros;
}

/**
 * Dump Compaction Level stats to a map of stat name with "compaction." prefix
 * to value in double as string. The level in stat name is represented with
 * a prefix "Lx" where "x" is the level number. A special level "Sum"
 * represents the sum of a stat for all levels.
 * The result also contains IO stall counters, whose keys start with
 * "io_stalls." and whose values are uint64s encoded as strings.
 */
void InternalStats::DumpCFMapStats(
    std::map<std::string, std::string>* cf_stats) {
  const VersionStorageInfo* vstorage = cfd_->current()->storage_info();
  CompactionStats compaction_stats_sum;
  std::map<int, std::map<LevelStatType, double>> levels_stats;
  DumpCFMapStats(vstorage, &levels_stats, &compaction_stats_sum);
  for (auto const& level_ent : levels_stats) {
    auto level_str =
        level_ent.first == -1 ? "Sum" : "L" + ToString(level_ent.first);
    for (auto const& stat_ent : level_ent.second) {
      auto stat_type = stat_ent.first;
      auto key_str =
          "compaction." + level_str + "." +
          InternalStats::compaction_level_stats.at(stat_type).property_name;
      (*cf_stats)[key_str] = std::to_string(stat_ent.second);
    }
  }

  DumpCFMapStatsIOStalls(cf_stats);
}
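The map built above is exposed through the map-property API. A minimal sketch that reads a few of the "compaction.Lx.*" and "io_stalls.*" keys described in the comment block, assuming DB::Properties::kCFStats ("rocksdb.cfstats") routes to this dump as in stock RocksDB:

```cpp
#include <cstdio>
#include <map>
#include <string>
#include "rocksdb/db.h"

// Pull the per-level compaction stats as a string map and print a few keys.
void PrintLevel0CompactionStats(rocksdb::DB* db) {
  std::map<std::string, std::string> cf_stats;
  db->GetMapProperty(rocksdb::DB::Properties::kCFStats, &cf_stats);
  // Keys follow the "compaction.<level>.<stat>" and "io_stalls.<reason>"
  // naming described above; exact stat names depend on the RocksDB version.
  for (const auto& kv : cf_stats) {
    if (kv.first.rfind("compaction.L0.", 0) == 0 ||
        kv.first.rfind("io_stalls.", 0) == 0) {
      fprintf(stderr, "%s = %s\n", kv.first.c_str(), kv.second.c_str());
    }
  }
}
```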

void InternalStats::DumpCFMapStats(
    const VersionStorageInfo* vstorage,
    std::map<int, std::map<LevelStatType, double>>* levels_stats,
    CompactionStats* compaction_stats_sum) {
  assert(vstorage);

  int num_levels_to_check =
      (cfd_->ioptions()->compaction_style != kCompactionStyleFIFO)
          ? vstorage->num_levels() - 1
          : 1;

  // Compaction scores are sorted based on their values. Restore them to the
  // level order
  std::vector<double> compaction_score(number_levels_, 0);
  for (int i = 0; i < num_levels_to_check; ++i) {
    compaction_score[vstorage->CompactionScoreLevel(i)] =
        vstorage->CompactionScore(i);
  }
  // Count # of files being compacted for each level
  std::vector<int> files_being_compacted(number_levels_, 0);
  for (int level = 0; level < number_levels_; ++level) {
    for (auto* f : vstorage->LevelFiles(level)) {
      if (f->being_compacted) {
        ++files_being_compacted[level];
      }
    }
  }

  int total_files = 0;
  int total_files_being_compacted = 0;
  double total_file_size = 0;
  uint64_t flush_ingest = cf_stats_value_[BYTES_FLUSHED];
  uint64_t add_file_ingest = cf_stats_value_[BYTES_INGESTED_ADD_FILE];
  uint64_t curr_ingest = flush_ingest + add_file_ingest;
make internal stats independent of statistics
Summary:
also make it aware of column family
output from db_bench
```
** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 14 956 0.9 0.0 0.0 0.0 2.7 2.7 0.0 0.0 0.0 111.6 0 0 0 0 24 40 0.612 75.20 492387 0.15
L1 21 2001 2.0 5.7 2.0 3.7 5.3 1.6 5.4 2.6 71.2 65.7 31 43 55 12 82 2 41.242 43.72 41183 1.06
L2 217 18974 1.9 16.5 2.0 14.4 15.1 0.7 15.6 7.4 70.1 64.3 17 182 185 3 241 16 15.052 0.00 0 0.00
L3 1641 188245 1.8 9.1 1.1 8.0 8.5 0.5 15.4 7.4 61.3 57.2 9 75 76 1 152 9 16.887 0.00 0 0.00
L4 4447 449025 0.4 13.4 4.8 8.6 9.1 0.5 4.7 1.9 77.8 52.7 38 79 100 21 176 38 4.639 0.00 0 0.00
Sum 6340 659201 0.0 44.7 10.0 34.7 40.6 6.0 32.0 15.2 67.7 61.6 95 379 416 37 676 105 6.439 118.91 533570 0.22
Int 0 0 0.0 1.2 0.4 0.8 1.3 0.5 5.2 2.7 59.1 65.6 3 7 9 2 20 10 2.003 0.00 0 0.00
Stalls(secs): 75.197 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 43.717 leveln_slowdown
Stalls(count): 492387 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 41183 leveln_slowdown
** DB Stats **
Uptime(secs): 202.1 total, 13.5 interval
Cumulative writes: 6291456 writes, 6291456 batches, 1.0 writes per batch, 4.90 ingest GB
Cumulative WAL: 6291456 writes, 6291456 syncs, 1.00 writes per sync, 4.90 GB written
Interval writes: 1048576 writes, 1048576 batches, 1.0 writes per batch, 836.0 ingest MB
Interval WAL: 1048576 writes, 1048576 syncs, 1.00 writes per sync, 0.82 MB written
Test Plan: ran it
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19917
10 years ago
|
|
|
for (int level = 0; level < number_levels_; level++) {
|
|
|
|
int files = vstorage->NumLevelFiles(level);
|
make internal stats independent of statistics
Summary:
also make it aware of column family
output from db_bench
```
** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 14 956 0.9 0.0 0.0 0.0 2.7 2.7 0.0 0.0 0.0 111.6 0 0 0 0 24 40 0.612 75.20 492387 0.15
L1 21 2001 2.0 5.7 2.0 3.7 5.3 1.6 5.4 2.6 71.2 65.7 31 43 55 12 82 2 41.242 43.72 41183 1.06
L2 217 18974 1.9 16.5 2.0 14.4 15.1 0.7 15.6 7.4 70.1 64.3 17 182 185 3 241 16 15.052 0.00 0 0.00
L3 1641 188245 1.8 9.1 1.1 8.0 8.5 0.5 15.4 7.4 61.3 57.2 9 75 76 1 152 9 16.887 0.00 0 0.00
L4 4447 449025 0.4 13.4 4.8 8.6 9.1 0.5 4.7 1.9 77.8 52.7 38 79 100 21 176 38 4.639 0.00 0 0.00
Sum 6340 659201 0.0 44.7 10.0 34.7 40.6 6.0 32.0 15.2 67.7 61.6 95 379 416 37 676 105 6.439 118.91 533570 0.22
Int 0 0 0.0 1.2 0.4 0.8 1.3 0.5 5.2 2.7 59.1 65.6 3 7 9 2 20 10 2.003 0.00 0 0.00
Stalls(secs): 75.197 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 43.717 leveln_slowdown
Stalls(count): 492387 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 41183 leveln_slowdown
** DB Stats **
Uptime(secs): 202.1 total, 13.5 interval
Cumulative writes: 6291456 writes, 6291456 batches, 1.0 writes per batch, 4.90 ingest GB
Cumulative WAL: 6291456 writes, 6291456 syncs, 1.00 writes per sync, 4.90 GB written
Interval writes: 1048576 writes, 1048576 batches, 1.0 writes per batch, 836.0 ingest MB
Interval WAL: 1048576 writes, 1048576 syncs, 1.00 writes per sync, 0.82 MB written
Test Plan: ran it
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19917
10 years ago
|
|
|
total_files += files;
|
|
|
|
total_files_being_compacted += files_being_compacted[level];
|
|
|
|
if (comp_stats_[level].micros > 0 || files > 0) {
|
|
|
|
compaction_stats_sum->Add(comp_stats_[level]);
|
|
|
|
total_file_size += vstorage->NumLevelBytes(level);
|
|
|
|
uint64_t input_bytes;
|
|
|
|
if (level == 0) {
|
|
|
|
input_bytes = curr_ingest;
|
|
|
|
} else {
|
|
|
|
input_bytes = comp_stats_[level].bytes_read_non_output_levels +
|
|
|
|
comp_stats_[level].bytes_read_blob;
|
|
|
|
}
|
|
|
|
double w_amp =
|
|
|
|
(input_bytes == 0)
|
|
|
|
? 0.0
|
|
|
|
: static_cast<double>(comp_stats_[level].bytes_written +
|
|
|
|
comp_stats_[level].bytes_written_blob) /
|
|
|
|
input_bytes;
|
|
|
|
std::map<LevelStatType, double> level_stats;
|
|
|
|
PrepareLevelStats(&level_stats, files, files_being_compacted[level],
|
|
|
|
static_cast<double>(vstorage->NumLevelBytes(level)),
|
|
|
|
compaction_score[level], w_amp, comp_stats_[level]);
|
|
|
|
(*levels_stats)[level] = level_stats;
|
make internal stats independent of statistics
Summary:
also make it aware of column family
output from db_bench
```
** Compaction Stats [default] **
Level Files Size(MB) Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s) Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt) Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 14 956 0.9 0.0 0.0 0.0 2.7 2.7 0.0 0.0 0.0 111.6 0 0 0 0 24 40 0.612 75.20 492387 0.15
L1 21 2001 2.0 5.7 2.0 3.7 5.3 1.6 5.4 2.6 71.2 65.7 31 43 55 12 82 2 41.242 43.72 41183 1.06
L2 217 18974 1.9 16.5 2.0 14.4 15.1 0.7 15.6 7.4 70.1 64.3 17 182 185 3 241 16 15.052 0.00 0 0.00
L3 1641 188245 1.8 9.1 1.1 8.0 8.5 0.5 15.4 7.4 61.3 57.2 9 75 76 1 152 9 16.887 0.00 0 0.00
L4 4447 449025 0.4 13.4 4.8 8.6 9.1 0.5 4.7 1.9 77.8 52.7 38 79 100 21 176 38 4.639 0.00 0 0.00
Sum 6340 659201 0.0 44.7 10.0 34.7 40.6 6.0 32.0 15.2 67.7 61.6 95 379 416 37 676 105 6.439 118.91 533570 0.22
Int 0 0 0.0 1.2 0.4 0.8 1.3 0.5 5.2 2.7 59.1 65.6 3 7 9 2 20 10 2.003 0.00 0 0.00
Stalls(secs): 75.197 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 43.717 leveln_slowdown
Stalls(count): 492387 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 41183 leveln_slowdown
** DB Stats **
Uptime(secs): 202.1 total, 13.5 interval
Cumulative writes: 6291456 writes, 6291456 batches, 1.0 writes per batch, 4.90 ingest GB
Cumulative WAL: 6291456 writes, 6291456 syncs, 1.00 writes per sync, 4.90 GB written
Interval writes: 1048576 writes, 1048576 batches, 1.0 writes per batch, 836.0 ingest MB
Interval WAL: 1048576 writes, 1048576 syncs, 1.00 writes per sync, 0.82 MB written
Test Plan: ran it
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb
Differential Revision: https://reviews.facebook.net/D19917
10 years ago
|
|
|
}
|
|
|
|
}
|
|
|
|
// Cumulative summary
|
|
|
|
double w_amp = (compaction_stats_sum->bytes_written +
|
|
|
|
compaction_stats_sum->bytes_written_blob) /
|
|
|
|
static_cast<double>(curr_ingest + 1);
|
|
|
|
// Stats summary across levels
|
|
|
|
std::map<LevelStatType, double> sum_stats;
|
|
|
|
PrepareLevelStats(&sum_stats, total_files, total_files_being_compacted,
|
|
|
|
total_file_size, 0, w_amp, *compaction_stats_sum);
|
|
|
|
(*levels_stats)[-1] = sum_stats; // -1 is for the Sum level
|
|
|
|
}
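
// Worked example for the W-Amp figure computed above (illustrative numbers
// taken from a sample db_bench run): a level that read 2.0 GB of input from
// non-output levels and wrote 5.3 GB in total reports a write amplification
// of 5.3 / 2.0 ~= 2.6. For L0 the denominator is curr_ingest (flush plus
// ingestion bytes) rather than compaction input bytes.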

void InternalStats::DumpCFMapStatsByPriority(
    std::map<int, std::map<LevelStatType, double>>* priorities_stats) {
  for (size_t priority = 0; priority < comp_stats_by_pri_.size(); priority++) {
    if (comp_stats_by_pri_[priority].micros > 0) {
      std::map<LevelStatType, double> priority_stats;
      PrepareLevelStats(&priority_stats, 0 /* num_files */,
                        0 /* being_compacted */, 0 /* total_file_size */,
                        0 /* compaction_score */, 0 /* w_amp */,
                        comp_stats_by_pri_[priority]);
      (*priorities_stats)[static_cast<int>(priority)] = priority_stats;
    }
  }
}
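
// Note: the integer keys written to *priorities_stats are Env::Priority
// values (the thread pool priority under which the compactions ran);
// DumpCFStatsNoFileHistogram() prints them via Env::PriorityToString().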

void InternalStats::DumpCFMapStatsIOStalls(
    std::map<std::string, std::string>* cf_stats) {
  (*cf_stats)["io_stalls.level0_slowdown"] =
      std::to_string(cf_stats_count_[L0_FILE_COUNT_LIMIT_SLOWDOWNS]);
  (*cf_stats)["io_stalls.level0_slowdown_with_compaction"] =
      std::to_string(cf_stats_count_[LOCKED_L0_FILE_COUNT_LIMIT_SLOWDOWNS]);
  (*cf_stats)["io_stalls.level0_numfiles"] =
      std::to_string(cf_stats_count_[L0_FILE_COUNT_LIMIT_STOPS]);
  (*cf_stats)["io_stalls.level0_numfiles_with_compaction"] =
      std::to_string(cf_stats_count_[LOCKED_L0_FILE_COUNT_LIMIT_STOPS]);
  (*cf_stats)["io_stalls.stop_for_pending_compaction_bytes"] =
      std::to_string(cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_STOPS]);
  (*cf_stats)["io_stalls.slowdown_for_pending_compaction_bytes"] =
      std::to_string(cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS]);
  (*cf_stats)["io_stalls.memtable_compaction"] =
      std::to_string(cf_stats_count_[MEMTABLE_LIMIT_STOPS]);
  (*cf_stats)["io_stalls.memtable_slowdown"] =
      std::to_string(cf_stats_count_[MEMTABLE_LIMIT_SLOWDOWNS]);

  uint64_t total_stop = cf_stats_count_[L0_FILE_COUNT_LIMIT_STOPS] +
                        cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_STOPS] +
                        cf_stats_count_[MEMTABLE_LIMIT_STOPS];

  uint64_t total_slowdown =
      cf_stats_count_[L0_FILE_COUNT_LIMIT_SLOWDOWNS] +
      cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS] +
      cf_stats_count_[MEMTABLE_LIMIT_SLOWDOWNS];

  (*cf_stats)["io_stalls.total_stop"] = std::to_string(total_stop);
  (*cf_stats)["io_stalls.total_slowdown"] = std::to_string(total_slowdown);
}
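
// Illustrative usage sketch (assumes the usual wiring of this dump to the
// "rocksdb.cfstats" map property; `db` and `cfh` are hypothetical handles):
//
//   std::map<std::string, std::string> cf_stats_map;
//   if (db->GetMapProperty(cfh, DB::Properties::kCFStats, &cf_stats_map)) {
//     // e.g. cf_stats_map["io_stalls.total_stop"] and
//     //      cf_stats_map["io_stalls.total_slowdown"] carry the totals above.
//   }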

void InternalStats::DumpCFStats(std::string* value) {
  DumpCFStatsNoFileHistogram(value);
  DumpCFFileHistogram(value);
}
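
// DumpCFStats() concatenates the compaction/ingest section and the file read
// latency histograms; this text is what the periodic stats dump and the
// string form of the per-CF stats property are assumed to consume (see the
// property handlers earlier in this file).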

void InternalStats::DumpCFStatsNoFileHistogram(std::string* value) {
  char buf[2000];
  // Per-ColumnFamily stats
  PrintLevelStatsHeader(buf, sizeof(buf), cfd_->GetName(), "Level");
  value->append(buf);

  // Print stats for each level
  const VersionStorageInfo* vstorage = cfd_->current()->storage_info();
  std::map<int, std::map<LevelStatType, double>> levels_stats;
  CompactionStats compaction_stats_sum;
  DumpCFMapStats(vstorage, &levels_stats, &compaction_stats_sum);
  for (int l = 0; l < number_levels_; ++l) {
    if (levels_stats.find(l) != levels_stats.end()) {
      PrintLevelStats(buf, sizeof(buf), "L" + ToString(l), levels_stats[l]);
      value->append(buf);
    }
  }

  // Print sum of level stats
  PrintLevelStats(buf, sizeof(buf), "Sum", levels_stats[-1]);
  value->append(buf);

  uint64_t flush_ingest = cf_stats_value_[BYTES_FLUSHED];
  uint64_t add_file_ingest = cf_stats_value_[BYTES_INGESTED_ADD_FILE];
  uint64_t ingest_files_addfile = cf_stats_value_[INGESTED_NUM_FILES_TOTAL];
  uint64_t ingest_l0_files_addfile =
      cf_stats_value_[INGESTED_LEVEL0_NUM_FILES_TOTAL];
  uint64_t ingest_keys_addfile = cf_stats_value_[INGESTED_NUM_KEYS_TOTAL];

  // Cumulative summary
  uint64_t total_stall_count =
      cf_stats_count_[L0_FILE_COUNT_LIMIT_SLOWDOWNS] +
      cf_stats_count_[L0_FILE_COUNT_LIMIT_STOPS] +
      cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS] +
      cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_STOPS] +
      cf_stats_count_[MEMTABLE_LIMIT_STOPS] +
      cf_stats_count_[MEMTABLE_LIMIT_SLOWDOWNS];
  // Interval summary
  uint64_t interval_flush_ingest =
      flush_ingest - cf_stats_snapshot_.ingest_bytes_flush;
  uint64_t interval_add_file_ingest =
      add_file_ingest - cf_stats_snapshot_.ingest_bytes_addfile;
  uint64_t interval_ingest =
      interval_flush_ingest + interval_add_file_ingest + 1;
  CompactionStats interval_stats(compaction_stats_sum);
  interval_stats.Subtract(cf_stats_snapshot_.comp_stats);
  double w_amp =
      (interval_stats.bytes_written + interval_stats.bytes_written_blob) /
      static_cast<double>(interval_ingest);
  PrintLevelStats(buf, sizeof(buf), "Int", 0, 0, 0, 0, w_amp, interval_stats);
value->append(buf);
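
  // Note: the "Int" row printed above is the cumulative compaction stats
  // minus the snapshot taken at the previous dump (cf_stats_snapshot_), so it
  // covers only the most recent stats interval; the snapshot fields are
  // refreshed near the end of this function.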

  PrintLevelStatsHeader(buf, sizeof(buf), cfd_->GetName(), "Priority");
  value->append(buf);
  std::map<int, std::map<LevelStatType, double>> priorities_stats;
  DumpCFMapStatsByPriority(&priorities_stats);
  for (size_t priority = 0; priority < comp_stats_by_pri_.size(); ++priority) {
    if (priorities_stats.find(static_cast<int>(priority)) !=
        priorities_stats.end()) {
      PrintLevelStats(
          buf, sizeof(buf),
          Env::PriorityToString(static_cast<Env::Priority>(priority)),
          priorities_stats[static_cast<int>(priority)]);
      value->append(buf);
    }
  }

  snprintf(buf, sizeof(buf),
           "\nBlob file count: %" ROCKSDB_PRIszt ", total size: %.1f GB\n\n",
           vstorage->GetBlobFiles().size(),
           vstorage->GetTotalBlobFileSize() / kGB);
  value->append(buf);

  double seconds_up = (clock_->NowMicros() - started_at_ + 1) / kMicrosInSec;
  double interval_seconds_up = seconds_up - cf_stats_snapshot_.seconds_up;
  snprintf(buf, sizeof(buf), "Uptime(secs): %.1f total, %.1f interval\n",
           seconds_up, interval_seconds_up);
  value->append(buf);
  snprintf(buf, sizeof(buf), "Flush(GB): cumulative %.3f, interval %.3f\n",
           flush_ingest / kGB, interval_flush_ingest / kGB);
  value->append(buf);
  snprintf(buf, sizeof(buf), "AddFile(GB): cumulative %.3f, interval %.3f\n",
           add_file_ingest / kGB, interval_add_file_ingest / kGB);
  value->append(buf);

  uint64_t interval_ingest_files_addfile =
      ingest_files_addfile - cf_stats_snapshot_.ingest_files_addfile;
  snprintf(buf, sizeof(buf),
           "AddFile(Total Files): cumulative %" PRIu64 ", interval %" PRIu64
           "\n",
           ingest_files_addfile, interval_ingest_files_addfile);
  value->append(buf);

  uint64_t interval_ingest_l0_files_addfile =
      ingest_l0_files_addfile - cf_stats_snapshot_.ingest_l0_files_addfile;
  snprintf(buf, sizeof(buf),
           "AddFile(L0 Files): cumulative %" PRIu64 ", interval %" PRIu64 "\n",
           ingest_l0_files_addfile, interval_ingest_l0_files_addfile);
  value->append(buf);

  uint64_t interval_ingest_keys_addfile =
      ingest_keys_addfile - cf_stats_snapshot_.ingest_keys_addfile;
  snprintf(buf, sizeof(buf),
           "AddFile(Keys): cumulative %" PRIu64 ", interval %" PRIu64 "\n",
           ingest_keys_addfile, interval_ingest_keys_addfile);
  value->append(buf);

  // Compact
  uint64_t compact_bytes_read = 0;
  uint64_t compact_bytes_write = 0;
  uint64_t compact_micros = 0;
  for (int level = 0; level < number_levels_; level++) {
    compact_bytes_read += comp_stats_[level].bytes_read_output_level +
                          comp_stats_[level].bytes_read_non_output_levels +
                          comp_stats_[level].bytes_read_blob;
    compact_bytes_write += comp_stats_[level].bytes_written +
                           comp_stats_[level].bytes_written_blob;
    compact_micros += comp_stats_[level].micros;
  }

  snprintf(buf, sizeof(buf),
           "Cumulative compaction: %.2f GB write, %.2f MB/s write, "
           "%.2f GB read, %.2f MB/s read, %.1f seconds\n",
           compact_bytes_write / kGB, compact_bytes_write / kMB / seconds_up,
           compact_bytes_read / kGB, compact_bytes_read / kMB / seconds_up,
           compact_micros / kMicrosInSec);
  value->append(buf);

  // Compaction interval
  uint64_t interval_compact_bytes_write =
      compact_bytes_write - cf_stats_snapshot_.compact_bytes_write;
  uint64_t interval_compact_bytes_read =
      compact_bytes_read - cf_stats_snapshot_.compact_bytes_read;
  uint64_t interval_compact_micros =
      compact_micros - cf_stats_snapshot_.compact_micros;

  snprintf(
      buf, sizeof(buf),
      "Interval compaction: %.2f GB write, %.2f MB/s write, "
      "%.2f GB read, %.2f MB/s read, %.1f seconds\n",
      interval_compact_bytes_write / kGB,
      interval_compact_bytes_write / kMB / std::max(interval_seconds_up, 0.001),
      interval_compact_bytes_read / kGB,
      interval_compact_bytes_read / kMB / std::max(interval_seconds_up, 0.001),
      interval_compact_micros / kMicrosInSec);
value->append(buf);
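
  // Worked example (illustrative numbers): with compact_bytes_write equal to
  // 5 GiB and seconds_up equal to 250, the cumulative write rate above is
  // 5120 MB / 250 s ~= 20.5 MB/s. Interval rates divide by
  // std::max(interval_seconds_up, 0.001) so a dump taken immediately after
  // the previous one does not divide by zero.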
  cf_stats_snapshot_.compact_bytes_write = compact_bytes_write;
  cf_stats_snapshot_.compact_bytes_read = compact_bytes_read;
  cf_stats_snapshot_.compact_micros = compact_micros;

  snprintf(buf, sizeof(buf),
           "Stalls(count): %" PRIu64
           " level0_slowdown, "
           "%" PRIu64
           " level0_slowdown_with_compaction, "
           "%" PRIu64
           " level0_numfiles, "
           "%" PRIu64
           " level0_numfiles_with_compaction, "
           "%" PRIu64
           " stop for pending_compaction_bytes, "
           "%" PRIu64
           " slowdown for pending_compaction_bytes, "
           "%" PRIu64
           " memtable_compaction, "
           "%" PRIu64
           " memtable_slowdown, "
           "interval %" PRIu64 " total count\n",
           cf_stats_count_[L0_FILE_COUNT_LIMIT_SLOWDOWNS],
           cf_stats_count_[LOCKED_L0_FILE_COUNT_LIMIT_SLOWDOWNS],
           cf_stats_count_[L0_FILE_COUNT_LIMIT_STOPS],
           cf_stats_count_[LOCKED_L0_FILE_COUNT_LIMIT_STOPS],
           cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_STOPS],
           cf_stats_count_[PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS],
           cf_stats_count_[MEMTABLE_LIMIT_STOPS],
           cf_stats_count_[MEMTABLE_LIMIT_SLOWDOWNS],
           total_stall_count - cf_stats_snapshot_.stall_count);
  value->append(buf);

  cf_stats_snapshot_.seconds_up = seconds_up;
  cf_stats_snapshot_.ingest_bytes_flush = flush_ingest;
  cf_stats_snapshot_.ingest_bytes_addfile = add_file_ingest;
  cf_stats_snapshot_.ingest_files_addfile = ingest_files_addfile;
  cf_stats_snapshot_.ingest_l0_files_addfile = ingest_l0_files_addfile;
  cf_stats_snapshot_.ingest_keys_addfile = ingest_keys_addfile;
  cf_stats_snapshot_.comp_stats = compaction_stats_sum;
  cf_stats_snapshot_.stall_count = total_stall_count;

  // Always treat CFStats context as "background"
  Status s = CollectCacheEntryStats(/*foreground=*/false);
  if (s.ok()) {
    value->append(cache_entry_stats_.ToString(clock_));
  } else {
    value->append("Block cache: ");
    value->append(s.ToString());
    value->append("\n");
  }
}

void InternalStats::DumpCFFileHistogram(std::string* value) {
  assert(value);
  assert(cfd_);

  std::ostringstream oss;
  oss << "\n** File Read Latency Histogram By Level [" << cfd_->GetName()
      << "] **\n";

  for (int level = 0; level < number_levels_; level++) {
    if (!file_read_latency_[level].Empty()) {
      oss << "** Level " << level << " read latency histogram (micros):\n"
          << file_read_latency_[level].ToString() << '\n';
    }
  }

  if (!blob_file_read_latency_.Empty()) {
    oss << "** Blob file read latency histogram (micros):\n"
        << blob_file_read_latency_.ToString() << '\n';
  }

  value->append(oss.str());
}
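
// Illustrative usage sketch (assumes the usual wiring of this dump to the
// "rocksdb.cf-file-histogram" property; `db` and `cfh` are hypothetical
// handles):
//
//   std::string histograms;
//   if (db->GetProperty(cfh, DB::Properties::kCFFileHistogram, &histograms)) {
//     // `histograms` now holds the per-level and blob file read latency
//     // histograms in text form.
//   }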

#else

const DBPropertyInfo* GetPropertyInfo(const Slice& /*property*/) {
  return nullptr;
}

#endif  // !ROCKSDB_LITE

}  // namespace ROCKSDB_NAMESPACE