CompactFiles, EventListener and GetDatabaseMetaData
Summary:
This diff adds three sets of APIs to RocksDB.
= GetColumnFamilyMetaData =
* This API allows users to obtain the current state of a RocksDB instance for one column family.
* See GetColumnFamilyMetaData in include/rocksdb/db.h
= EventListener =
* A virtual class that allows users to implement a set of
call-back functions which will be called when specific
events of a RocksDB instance happen.
* To register an EventListener, simply add it to ColumnFamilyOptions::listeners
= CompactFiles =
* The CompactFiles API takes a set of file numbers and an output level, and RocksDB
will try to compact those files into the specified level.
= Example =
* Example code can be found in example/compact_files_example.cc, which implements
a simple external compactor using EventListener, GetColumnFamilyMetaData, and
the CompactFiles API.
Test Plan:
listener_test
compactor_test
example/compact_files_example
export ROCKSDB_TESTS=CompactFiles
db_test
export ROCKSDB_TESTS=MetaData
db_test
Reviewers: ljin, igor, rven, sdong
Reviewed By: sdong
Subscribers: MarkCallaghan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D24705
11 years ago

// Copyright (c) 2014 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
//
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
#pragma once

#include <chrono>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

#include "rocksdb/compaction_job_stats.h"
#include "rocksdb/compression_type.h"
#include "rocksdb/status.h"
#include "rocksdb/table_properties.h"

namespace ROCKSDB_NAMESPACE {

typedef std::unordered_map<std::string, std::shared_ptr<const TableProperties>>
    TablePropertiesCollection;

class DB;
class ColumnFamilyHandle;
class Status;
struct CompactionJobStats;
Added EventListener::OnTableFileCreationStarted() callback
Summary: Added EventListener::OnTableFileCreationStarted. EventListener::OnTableFileCreated will now also be called in the failure case; users can check the creation status via TableFileCreationInfo::status.
Test Plan: unit test.
Reviewers: dhruba, yhchiang, ott, sdong
Reviewed By: sdong
Subscribers: sdong, kradhakrishnan, IslamAbdelRahman, andrewkr, yhchiang, leveldb, ott, dhruba
Differential Revision: https://reviews.facebook.net/D56337
9 years ago

enum class TableFileCreationReason {
  kFlush,
  kCompaction,
  kRecovery,
Auto recovery from out of space errors (#4164)
Summary:
This commit implements automatic recovery from a Status::NoSpace() error
during background operations such as write callback, flush and
compaction. The broad design is as follows -
1. Compaction errors are treated as soft errors and don't put the
database in read-only mode. A compaction is delayed until enough free
disk space is available to accommodate the compaction outputs, which is
estimated based on the input size. This means that users can continue to
write, and we rely on the WriteController to delay or stop writes if the
compaction debt becomes too high due to a persistent low disk space
condition.
2. Errors during write callback and flush are treated as hard errors,
i.e. the database is put in read-only mode and goes back to read-write
only after certain recovery actions are taken.
3. Both types of recovery rely on the SstFileManagerImpl to poll for
sufficient disk space. We assume that there is a 1-1 mapping between an
SFM and the underlying OS storage container. For cases where multiple
DBs are hosted on a single storage container, the user is expected to
allocate a single SFM instance and use the same one for all the DBs. If
no SFM is specified by the user, DBImpl::Open() will allocate one, but
this will be one per DB and each DB will recover independently. The
recovery implemented by SFM is as follows -
a) On the first occurrence of an out of space error during compaction,
subsequent compactions will be delayed until the disk free space check
indicates enough available space. The required space is computed as the
sum of input sizes.
b) The free space check requirement will be removed once the amount of
free space is greater than the size reserved by in-progress
compactions when the first error occurred.
c) If the out of space error is a hard error, a background thread in
SFM will poll for sufficient headroom before triggering the recovery
of the database and putting it back in read-write mode. The headroom is
calculated as the sum of the write_buffer_size of all the DB instances
associated with the SFM.
4. EventListener callbacks will be called at the start and completion of
automatic recovery. Users can disable the auto recovery in the start
callback, and later initiate it manually by calling DB::Resume().
Todo:
1. More extensive testing
2. Add disk full condition to db_stress (follow-on PR)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4164
Differential Revision: D9846378
Pulled By: anand1976
fbshipit-source-id: 80ea875dbd7f00205e19c82215ff6e37da10da4a
7 years ago
  kMisc,
};

struct TableFileCreationBriefInfo {
  // the name of the database where the file was created
  std::string db_name;
  // the name of the column family where the file was created.
  std::string cf_name;
  // the path to the created file.
  std::string file_path;
  // the id of the job (which could be flush or compaction) that
  // created the file.
  int job_id;
  // reason of creating the table.
  TableFileCreationReason reason;
};

struct TableFileCreationInfo : public TableFileCreationBriefInfo {
  TableFileCreationInfo() = default;
  explicit TableFileCreationInfo(TableProperties&& prop)
      : table_properties(prop) {}
  // the size of the file.
  uint64_t file_size;
  // Detailed properties of the created file.
  TableProperties table_properties;
  // The status indicating whether the creation was successful or not.
  Status status;
  // The checksum of the table file being created
  std::string file_checksum;
  // The checksum function name of checksum generator used for this table file
  std::string file_checksum_func_name;
};

enum class CompactionReason : int {
  kUnknown = 0,
  // [Level] number of L0 files > level0_file_num_compaction_trigger
  kLevelL0FilesNum,
  // [Level] total size of level > MaxBytesForLevel()
  kLevelMaxLevelSize,
  // [Universal] Compacting for size amplification
  kUniversalSizeAmplification,
  // [Universal] Compacting for size ratio
  kUniversalSizeRatio,
  // [Universal] number of sorted runs > level0_file_num_compaction_trigger
  kUniversalSortedRunNum,
  // [FIFO] total size > max_table_files_size
  kFIFOMaxSize,
  // [FIFO] reduce number of files.
  kFIFOReduceNumFiles,
  // [FIFO] files with creation time < (current_time - interval)
  kFIFOTtl,
  // Manual compaction
  kManualCompaction,
  // DB::SuggestCompactRange() marked files for compaction
  kFilesMarkedForCompaction,
  // [Level] Automatic compaction within bottommost level to cleanup duplicate
  // versions of same user key, usually due to a released snapshot.
  kBottommostFiles,
  // Compaction based on TTL
  kTtl,
  // According to the comments in flush_job.cc, RocksDB treats flush as
  // a level 0 compaction in internal stats.
  kFlush,
  // Compaction caused by external sst file ingestion
  kExternalSstIngestion,
Periodic Compactions (#5166)
Summary:
Introducing Periodic Compactions.
This feature allows all the files in a CF to be periodically compacted. It could help proactively catch any corruption that creeps into the DB, as every file is constantly getting re-compacted, and, of course, it helps to clean up data older than a certain threshold.
- Introduced a new option `periodic_compaction_time` to control how long a file can live without being compacted in a CF.
- This works across all levels.
- The files are put in the same level after going through the compaction. (Related files in the same level are picked up as `ExpandInputstoCleanCut` is used.)
- Compaction filters, if any, are invoked as usual.
- A new table property, `file_creation_time`, is introduced to implement this feature. This property is set to the time at which the SST file was created (and that time is given by the underlying Env/OS).
This feature can be enabled on its own, or in conjunction with `ttl`. It is possible to set a different time threshold for the bottom level when used in conjunction with ttl. Since `ttl` works only on levels 0 through the second-to-last level, you could set `ttl` to, say, 1 day, and `periodic_compaction_time` to, say, 7 days. Since `ttl < periodic_compaction_time`, files in the upper levels keep getting picked up based on ttl, and almost never based on periodic_compaction_time. The files in the bottom level get picked up for compaction based on `periodic_compaction_time`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5166
Differential Revision: D14884441
Pulled By: sagar0
fbshipit-source-id: 408426cbacb409c06386a98632dcf90bfa1bda47
6 years ago
  // Compaction due to SST file being too old
  kPeriodicCompaction,
  // total number of compaction reasons, new reasons must be added above this.
  kNumOfReasons,
};

enum class FlushReason : int {
FlushReason improvement
Summary:
Right now the flush reason "SuperVersion Change" covers a few different scenarios, which is a bit vague. For example, the following db_bench job should trigger "Write Buffer Full"
> $ TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -write_buffer_size=1048576 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304
$ grep 'flush_reason' /dev/shm/dbbench/LOG
...
2018/03/06-17:30:42.543638 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242543634, "job": 192, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018024, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.569541 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242569536, "job": 193, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.596396 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242596392, "job": 194, "event": "flush_started", "num_memtables": 1, "num_entries": 7008, "num_deletes": 0, "memory_usage": 1018048, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.622444 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242622440, "job": 195, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "SuperVersion Change"}
With the fix:
> 2018/03/19-14:40:02.341451 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602341444, "job": 98, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018008, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.379655 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602379642, "job": 100, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018016, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.418479 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602418474, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.455084 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602455079, "job": 102, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018048, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.492293 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602492288, "job": 104, "event": "flush_started", "num_memtables": 1, "num_entries": 7007, "num_deletes": 0, "memory_usage": 1018056, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.528720 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602528715, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.566255 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602566238, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018112, "flush_reason": "Write Buffer Full"}
Closes https://github.com/facebook/rocksdb/pull/3627
Differential Revision: D7328772
Pulled By: miasantreble
fbshipit-source-id: 67c94065fbdd36930f09930aad0aaa6d2c152bb8
7 years ago
  kOthers = 0x00,
  kGetLiveFiles = 0x01,
  kShutDown = 0x02,
  kExternalFileIngestion = 0x03,
  kManualCompaction = 0x04,
  kWriteBufferManager = 0x05,
  kWriteBufferFull = 0x06,
  kTest = 0x07,
  kDeleteFiles = 0x08,
  kAutoCompaction = 0x09,
  kManualFlush = 0x0a,
  kErrorRecovery = 0xb,
  // When the flush reason is set to kErrorRecoveryRetryFlush, SwitchMemtable
  // will not be called, to avoid creating many small immutable memtables.
  kErrorRecoveryRetryFlush = 0xc,
};

enum class BackgroundErrorReason {
  kFlush,
  kCompaction,
  kWriteCallback,
  kMemTable,
First step towards handling MANIFEST write error (#6949)
Summary:
This PR provides preliminary support for handling IO error during MANIFEST write.
File write/sync is not guaranteed to be atomic. If we encounter an IOError while writing/syncing to the MANIFEST file, we cannot be sure about the state of the MANIFEST file. The version edits may or may not have reached the file. During cleanup, if we delete the newly-generated SST files referenced by the pending version edit(s), but the version edit(s) actually are persistent in the MANIFEST, then the next recovery attempt will process the version edit(s) and then fail since the SST files have already been deleted.
One approach is to truncate the MANIFEST after write/sync error, so that it is safe to delete the SST files. However, file truncation may not be supported on certain file systems. Therefore, we take the following approach.
If an IOError is detected during MANIFEST write/sync, we disable file deletions for the faulty database. Depending on whether the IOError is retryable (set by underlying file system), either RocksDB or application can call `DB::Resume()`, or simply shutdown and restart. During `Resume()`, RocksDB will try to switch to a new MANIFEST and write all existing in-memory version storage in the new file. If this succeeds, then RocksDB may proceed. If all recovery is completed, then file deletions will be re-enabled.
Note that multiple threads can call `LogAndApply()` at the same time, though only one of them will be going through the process MANIFEST write, possibly batching the version edits of other threads. When the leading MANIFEST writer finishes, all of the MANIFEST writing threads in this batch will have the same IOError. They will all call `ErrorHandler::SetBGError()` in which file deletion will be disabled.
Possible future directions:
- Add an `ErrorContext` structure so that it is easier to pass more info to `ErrorHandler`. Currently, as in this example, a new `BackgroundErrorReason` has to be added.
Test plan (dev server):
make check
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6949
Reviewed By: anand1976
Differential Revision: D22026020
Pulled By: riversand963
fbshipit-source-id: f3c68a2ef45d9b505d0d625c7c5e0c88495b91c8
5 years ago
  kManifestWrite,
  kFlushNoWAL,
};

enum class WriteStallCondition {
  kNormal,
  kDelayed,
  kStopped,
};

struct WriteStallInfo {
  // the name of the column family
  std::string cf_name;
  // state of the write controller
  struct {
    WriteStallCondition cur;
    WriteStallCondition prev;
  } condition;
};
#ifndef ROCKSDB_LITE

struct TableFileDeletionInfo {
  // The name of the database where the file was deleted.
  std::string db_name;
  // The path to the deleted file.
  std::string file_path;
  // The id of the job which deleted the file.
  int job_id;
  // The status indicating whether the deletion was successful or not.
  Status status;
};

enum class FileOperationType {
  kRead,
  kWrite,
  kTruncate,
  kClose,
  kFlush,
  kSync,
  kFsync,
  kRangeSync
};

struct FileOperationInfo {
  using Duration = std::chrono::nanoseconds;
  using SteadyTimePoint =
      std::chrono::time_point<std::chrono::steady_clock, Duration>;
  using SystemTimePoint =
      std::chrono::time_point<std::chrono::system_clock, Duration>;
  using StartTimePoint = std::pair<SystemTimePoint, SteadyTimePoint>;
  using FinishTimePoint = SteadyTimePoint;

  FileOperationType type;
  const std::string& path;
  uint64_t offset;
  size_t length;
  const Duration duration;
  const SystemTimePoint& start_ts;
  Status status;

  FileOperationInfo(const FileOperationType _type, const std::string& _path,
                    const StartTimePoint& _start_ts,
                    const FinishTimePoint& _finish_ts, const Status& _status)
      : type(_type),
        path(_path),
        duration(std::chrono::duration_cast<std::chrono::nanoseconds>(
            _finish_ts - _start_ts.second)),
        start_ts(_start_ts.first),
        status(_status) {}

  static StartTimePoint StartNow() {
    return std::make_pair<SystemTimePoint, SteadyTimePoint>(
        std::chrono::system_clock::now(), std::chrono::steady_clock::now());
  }

  static FinishTimePoint FinishNow() { return std::chrono::steady_clock::now(); }
};
struct FlushJobInfo {
  // the id of the column family
  uint32_t cf_id;
  // the name of the column family
  std::string cf_name;
  // the path to the newly created file
  std::string file_path;
  // the file number of the newly created file
  uint64_t file_number;
  // the oldest blob file referenced by the newly created file
  uint64_t oldest_blob_file_number;
  // the id of the thread that completed this flush job.
  uint64_t thread_id;
  // the job id, which is unique in the same thread.
  int job_id;
  // If true, then rocksdb is currently slowing down all writes to prevent
  // creating too many Level 0 files, as compaction cannot keep up with the
  // write request speed. This indicates that there are too many files in
  // Level 0.
  bool triggered_writes_slowdown;
  // If true, then rocksdb is currently blocking any writes to prevent
  // creating more L0 files. This indicates that there are too many
  // files in level 0. Compactions should try to compact L0 files down
  // to lower levels as soon as possible.
  bool triggered_writes_stop;
  // The smallest sequence number in the newly created file
  SequenceNumber smallest_seqno;
  // The largest sequence number in the newly created file
  SequenceNumber largest_seqno;
  // Table properties of the table being flushed
  TableProperties table_properties;

  FlushReason flush_reason;
};

struct CompactionFileInfo {
  // The level of the file.
  int level;

  // The file number of the file.
  uint64_t file_number;

  // The file number of the oldest blob file this SST file references.
  uint64_t oldest_blob_file_number;
};

struct CompactionJobInfo {
  ~CompactionJobInfo() { status.PermitUncheckedError(); }
  // the id of the column family where the compaction happened.
  uint32_t cf_id;
  // the name of the column family where the compaction happened.
  std::string cf_name;
  // the status indicating whether the compaction was successful or not.
  Status status;
  // the id of the thread that completed this compaction job.
  uint64_t thread_id;
  // the job id, which is unique in the same thread.
  int job_id;
  // the smallest input level of the compaction.
  int base_input_level;
  // the output level of the compaction.
  int output_level;

  // The following variables contain information about compaction inputs
  // and outputs. A file may appear in both the input and output lists
  // if it was simply moved to a different level. The order of elements
  // is the same across input_files and input_file_infos; similarly, it is
  // the same across output_files and output_file_infos.

  // The names of the compaction input files.
  std::vector<std::string> input_files;

  // Additional information about the compaction input files.
  std::vector<CompactionFileInfo> input_file_infos;

  // The names of the compaction output files.
  std::vector<std::string> output_files;

  // Additional information about the compaction output files.
  std::vector<CompactionFileInfo> output_file_infos;

  // Table properties for input and output tables.
  // The map is keyed by values from input_files and output_files.
  TablePropertiesCollection table_properties;

  // Reason to run the compaction
  CompactionReason compaction_reason;

  // Compression algorithm used for output files
  CompressionType compression;

  // Statistics and other additional details on the compaction
  CompactionJobStats stats;
};

struct MemTableInfo {
  // the name of the column family to which the memtable belongs
  std::string cf_name;
  // Sequence number of the first element that was inserted
  // into the memtable.
  SequenceNumber first_seqno;
  // Sequence number that is guaranteed to be smaller than or equal
  // to the sequence number of any key that could be inserted into this
  // memtable. It can then be assumed that any write with a larger (or equal)
  // sequence number will be present in this memtable or a later memtable.
  SequenceNumber earliest_seqno;
  // Total number of entries in memtable
  uint64_t num_entries;
  // Total number of deletes in memtable
  uint64_t num_deletes;
};

struct ExternalFileIngestionInfo {
  // the name of the column family
  std::string cf_name;
  // Path of the file outside the DB
  std::string external_file_path;
  // Path of the file inside the DB
  std::string internal_file_path;
  // The global sequence number assigned to keys in this file
  SequenceNumber global_seqno;
  // Table properties of the ingested table
  TableProperties table_properties;
};

// EventListener class contains a set of callback functions that will
// be called when a specific RocksDB event happens, such as a flush. It can
// be used as a building block for developing custom features such as a
// stats collector or an external compaction algorithm.
//
// Note that callback functions should not run for an extended period of
// time before the function returns; otherwise RocksDB may be blocked.
// For example, it is not suggested to do DB::CompactFiles() (as it may
// run for a long while) or issue many DB::Put() calls (as Put may be
// blocked in certain cases) in the same thread in an EventListener
// callback. However, doing DB::CompactFiles() and DB::Put() in another
// thread is considered safe.
//
// [Threading] All EventListener callbacks will be called using the
// actual thread that is involved in that specific event. For example, it
// is the RocksDB background flush thread that does the actual flush that
// calls EventListener::OnFlushCompleted().
//
// [Locking] All EventListener callbacks are designed to be called without
// the current thread holding any DB mutex. This is to prevent potential
// deadlocks and performance issues when using EventListener callbacks
// in a complex way.
class EventListener {
 public:
  // A callback function to RocksDB which will be called whenever a
  // registered RocksDB flushes a file. The default implementation is
  // no-op.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  virtual void OnFlushCompleted(DB* /*db*/,
                                const FlushJobInfo& /*flush_job_info*/) {}

  // A callback function to RocksDB which will be called before a
  // RocksDB starts to flush memtables. The default implementation is
  // no-op.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  virtual void OnFlushBegin(DB* /*db*/,
                            const FlushJobInfo& /*flush_job_info*/) {}

  // A callback function for RocksDB which will be called whenever
  // a SST file is deleted. Different from OnCompactionCompleted and
  // OnFlushCompleted, this callback is designed for external logging
  // services and thus only provides string parameters instead
  // of a pointer to DB. Applications that build logic based
  // on file creations and deletions are suggested to implement
  // OnFlushCompleted and OnCompactionCompleted.
  //
  // Note that if applications would like to use the passed reference
  // outside this function call, they should make copies of the
  // passed value.
  virtual void OnTableFileDeleted(const TableFileDeletionInfo& /*info*/) {}

  // A callback function to RocksDB which will be called before a
  // RocksDB starts to compact. The default implementation is
  // no-op.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  virtual void OnCompactionBegin(DB* /*db*/, const CompactionJobInfo& /*ci*/) {}

  // A callback function for RocksDB which will be called whenever
  // a registered RocksDB compacts a file. The default implementation
  // is a no-op.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  //
  // @param db a pointer to the rocksdb instance which just compacted
  //   a file.
  // @param ci a reference to a CompactionJobInfo struct. 'ci' is released
  //   after this function is returned, and must be copied if it is needed
  //   outside of this function.
  virtual void OnCompactionCompleted(DB* /*db*/,
                                     const CompactionJobInfo& /*ci*/) {}

  // A callback function for RocksDB which will be called whenever
  // a SST file is created. Different from OnCompactionCompleted and
  // OnFlushCompleted, this callback is designed for external logging
  // services and thus only provides string parameters instead
  // of a pointer to DB. Applications that build logic based
  // on file creations and deletions are suggested to implement
  // OnFlushCompleted and OnCompactionCompleted.
  //
  // Historically this was only called if the file was successfully created;
  // now it is also called on failure. Users can check info.status
  // to see whether it succeeded or not.
  //
  // Note that if applications would like to use the passed reference
  // outside this function call, they should make copies of the
  // passed values.
  virtual void OnTableFileCreated(const TableFileCreationInfo& /*info*/) {}

  // A callback function for RocksDB which will be called before
  // a SST file is being created. It will be followed by OnTableFileCreated
  // after the creation finishes.
  //
  // Note that if applications would like to use the passed reference
  // outside this function call, they should make copies of the
  // passed values.
  virtual void OnTableFileCreationStarted(
      const TableFileCreationBriefInfo& /*info*/) {}

  // A callback function for RocksDB which will be called before
  // a memtable is made immutable.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  //
  // Note that if applications would like to use the passed reference
  // outside this function call, they should make copies of the
  // passed values.
  virtual void OnMemTableSealed(const MemTableInfo& /*info*/) {}
  // A callback function for RocksDB which will be called before
  // a column family handle is deleted.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  // @param handle is a pointer to the column family handle to be deleted
  //   which will become a dangling pointer after the deletion.
  virtual void OnColumnFamilyHandleDeletionStarted(
      ColumnFamilyHandle* /*handle*/) {}

  // A callback function for RocksDB which will be called after an external
  // file is ingested using IngestExternalFile.
  //
  // Note that this function will run on the same thread as
  // IngestExternalFile(); if this function is blocked, IngestExternalFile()
  // will be blocked from finishing.
  virtual void OnExternalFileIngested(
      DB* /*db*/, const ExternalFileIngestionInfo& /*info*/) {}

  // A callback function for RocksDB which will be called before setting the
  // background error status to a non-OK value. The new background error status
  // is provided in `bg_error` and can be modified by the callback. E.g., a
  // callback can suppress errors by resetting it to Status::OK(), thus
  // preventing the database from entering read-only mode. We do not provide any
  // guarantee when failed flushes/compactions will be rescheduled if the user
  // suppresses an error.
  //
  // Note that this function can run on the same threads as flush, compaction,
  // and user writes. So, it is extremely important not to perform heavy
  // computations or blocking calls in this function.
  virtual void OnBackgroundError(BackgroundErrorReason /* reason */,
                                 Status* /* bg_error */) {}

  // A callback function for RocksDB which will be called whenever a change
  // of superversion triggers a change of the stall conditions.
  //
  // Note that this function must be implemented in a way such that
  // it should not run for an extended period of time before the function
  // returns. Otherwise, RocksDB may be blocked.
  virtual void OnStallConditionsChanged(const WriteStallInfo& /*info*/) {}

  // A callback function for RocksDB which will be called whenever a file read
  // operation finishes.
  virtual void OnFileReadFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file write
  // operation finishes.
  virtual void OnFileWriteFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file flush
  // operation finishes.
  virtual void OnFileFlushFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file sync
  // operation finishes.
  virtual void OnFileSyncFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file
  // range sync operation finishes.
  virtual void OnFileRangeSyncFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file
  // truncate operation finishes.
  virtual void OnFileTruncateFinish(const FileOperationInfo& /* info */) {}

  // A callback function for RocksDB which will be called whenever a file close
  // operation finishes.
  virtual void OnFileCloseFinish(const FileOperationInfo& /* info */) {}

  // If true, the OnFile*Finish functions will be called. If
  // false, then they won't be called.
  virtual bool ShouldBeNotifiedOnFileIO() { return false; }

  // A callback function for RocksDB which will be called just before
  // starting the automatic recovery process for recoverable background
  // errors, such as NoSpace(). The callback can suppress the automatic
  // recovery by setting *auto_recovery to false. The database will then
  // have to be transitioned out of read-only mode by calling DB::Resume().
  virtual void OnErrorRecoveryBegin(BackgroundErrorReason /* reason */,
                                    Status /* bg_error */,
                                    bool* /* auto_recovery */) {}

  // A callback function for RocksDB which will be called once the database
  // is recovered from read-only mode after an error. When this is called, it
  // means normal writes to the database can be issued and the user can
  // initiate any further recovery actions needed.
  virtual void OnErrorRecoveryCompleted(Status /* old_bg_error */) {}

  virtual ~EventListener() {}
};

#else

class EventListener {};
struct FlushJobInfo {};

#endif  // ROCKSDB_LITE

}  // namespace ROCKSDB_NAMESPACE