Truncate posts on the main /blog/ page

main
Joel Marcey 8 years ago
parent 0d7acadaf9
commit b90e29c901
31 changed files:
  1. docs/_posts/2014-03-27-how-to-backup-rocksdb.markdown (2 lines changed)
  2. docs/_posts/2014-03-27-how-to-persist-in-memory-rocksdb-database.markdown (2 lines changed)
  3. docs/_posts/2014-04-02-the-1st-rocksdb-local-meetup-held-on-march-27-2014.markdown (2 lines changed)
  4. docs/_posts/2014-04-07-rocksdb-2-8-release.markdown (5 lines changed)
  5. docs/_posts/2014-04-21-indexing-sst-files-for-better-lookup-performance.markdown (2 lines changed)
  6. docs/_posts/2014-05-14-lock.markdown (3 lines changed)
  7. docs/_posts/2014-05-19-rocksdb-3-0-release.markdown (5 lines changed)
  8. docs/_posts/2014-05-22-rocksdb-3-1-release.markdown (4 lines changed)
  9. docs/_posts/2014-06-23-plaintable-a-new-file-format.markdown (2 lines changed)
  10. docs/_posts/2014-06-27-avoid-expensive-locks-in-get.markdown (13 lines changed)
  11. docs/_posts/2014-06-27-rocksdb-3-2-release.markdown (4 lines changed)
  12. docs/_posts/2014-07-29-rocksdb-3-3-release.markdown (4 lines changed)
  13. docs/_posts/2014-09-12-cuckoo.markdown (64 lines changed)
  14. docs/_posts/2014-09-12-new-bloom-filter-format.markdown (57 lines changed)
  15. docs/_posts/2014-09-15-rocksdb-3-5-release.markdown (14 lines changed)
  16. docs/_posts/2015-01-16-migrating-from-leveldb-to-rocksdb-2.markdown (2 lines changed)
  17. docs/_posts/2015-02-24-reading-rocksdb-options-from-a-file.markdown (2 lines changed)
  18. docs/_posts/2015-02-27-write-batch-with-index.markdown (2 lines changed)
  19. docs/_posts/2015-04-22-integrating-rocksdb-with-mongodb-2.markdown (2 lines changed)
  20. docs/_posts/2015-07-15-rocksdb-2015-h2-roadmap.markdown (2 lines changed)
  21. docs/_posts/2015-07-17-spatial-indexing-in-rocksdb.markdown (2 lines changed)
  22. docs/_posts/2015-07-22-rocksdb-is-now-available-in-windows-platform.markdown (2 lines changed)
  23. docs/_posts/2015-07-23-dynamic-level.markdown (11 lines changed)
  24. docs/_posts/2015-10-27-getthreadlist.markdown (2 lines changed)
  25. docs/_posts/2015-11-10-use-checkpoints-for-efficient-snapshots.markdown (2 lines changed)
  26. docs/_posts/2015-11-16-analysis-file-read-latency-by-level.markdown (2 lines changed)
  27. docs/_posts/2016-01-29-compaction_pri.markdown (2 lines changed)
  28. docs/_posts/2016-03-07-rocksdb-options-file.markdown (6 lines changed)
  29. docs/_posts/2016-04-26-rocksdb-4-5-1-released.markdown (2 lines changed)
  30. docs/_posts/2016-07-26-rocksdb-4-8-released.markdown (2 lines changed)
  31. docs/_posts/2016-09-28-rocksdb-4-11-2-released.markdown (2 lines changed)

@ -22,7 +22,7 @@ In RocksDB, we have implemented an easy way to backup your DB. Here is a simple
backupable_db->CreateNewBackup();
delete backupable_db; // no need to also delete db
<!--truncate-->
This simple example will create a backup of your DB in "/tmp/rocksdb_backup". Creating a new BackupableDB consumes the DB*; from then on you should call all DB methods on the `backupable_db` object.
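For context, a slightly fuller sketch of the flow around the snippet above (the "/tmp/rocksdb" path and the key/value are illustrative; class names as in RocksDB's backupable_db utility header of that era):

```c++
#include "rocksdb/db.h"
#include "rocksdb/utilities/backupable_db.h"

rocksdb::DB* db = nullptr;
rocksdb::DB::Open(rocksdb::Options(), "/tmp/rocksdb", &db);

// Wrap the DB; from here on, call all DB methods on backupable_db.
rocksdb::BackupableDB* backupable_db = new rocksdb::BackupableDB(
    db, rocksdb::BackupableDBOptions("/tmp/rocksdb_backup"));
backupable_db->Put(rocksdb::WriteOptions(), "key", "value");
backupable_db->CreateNewBackup();
delete backupable_db;  // no need to also delete db
```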

@ -9,6 +9,8 @@ redirect_from:
In recent months, we have focused on optimizing RocksDB for in-memory workloads. With growing RAM sizes and strict low-latency requirements, lots of applications decide to keep their entire data in memory. Running an in-memory database with RocksDB is easy -- just mount your RocksDB directory on tmpfs or ramfs [1]. Even if the process crashes, RocksDB can recover all of your data from the in-memory filesystem. However, what happens if the machine reboots?
<!--truncate-->
In this article we will explain how you can recover your in-memory RocksDB database even after a machine reboot.
Every update to RocksDB is written to two places - one is an in-memory data structure called the memtable and the second is the write-ahead log. The write-ahead log can be used to completely recover the data in the memtable. By default, when we flush the memtable to a table file, we also delete the current log, since we don't need it anymore for recovery (the data from the log is "persisted" in the table file -- we say that the log file is obsolete). However, if your table file is stored in an in-memory file system, you may need the obsolete write-ahead log to recover the data after the machine reboots. Here's how you can do that.
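The post's full recipe is truncated in this view; below is one hedged sketch of the relevant knobs (option names from rocksdb/options.h, directories illustrative, and not necessarily the exact steps the post goes on to recommend):

```c++
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

rocksdb::Options options;
options.create_if_missing = true;
// Data files live on tmpfs/ramfs for in-memory speed ...
std::string db_path = "/dev/shm/rocksdb_inmem";
// ... while write-ahead logs go to a directory that survives reboots,
// and archived (obsolete) WAL files are retained for a while.
options.wal_dir = "/var/lib/rocksdb_wal";
options.WAL_ttl_seconds = 24 * 60 * 60;  // keep archived WALs for a day

rocksdb::DB* db = nullptr;
rocksdb::Status s = rocksdb::DB::Open(options, db_path, &db);
```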

@ -9,6 +9,8 @@ redirect_from:
On Mar 27, 2014, the RocksDB team @ Facebook held the 1st RocksDB local meetup at FB HQ (Menlo Park, California). We invited around 80 guests from 20+ local companies, including LinkedIn, Twitter, Dropbox, Square, Pinterest, MapR, Microsoft and IBM. In the end, around 50 guests showed up, a show-up rate of about 60%.
<!--truncate-->
[![Resize of 20140327_200754](/static/images/Resize-of-20140327_200754-300x225.jpg)](/static/images/Resize-of-20140327_200754-300x225.jpg)
RocksDB team @ Facebook gave four talks about the latest progress and experience on RocksDB:

@ -9,10 +9,11 @@ redirect_from:
Check out the new RocksDB 2.8 release on [Github](https://github.com/facebook/rocksdb/releases/tag/2.8.fb).
RocksDB 2.8 is mostly focused on improving performance for in-memory workloads. We are seeing read QPS as high as 5M (we will write a separate blog post on this).
<!--truncate-->
Here is the summary of new features:
* Added a new table format called PlainTable, which is optimized for RAM storage (ramfs or tmpfs). You can read more details about it on [our wiki](https://github.com/facebook/rocksdb/wiki/PlainTable-Format).
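A minimal sketch of switching a DB to the PlainTable format mentioned above (parameter values are illustrative; PlainTable needs a prefix extractor and mmap reads):

```c++
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"
#include "rocksdb/table.h"

rocksdb::Options options;
// PlainTable relies on prefix hashing, so a prefix extractor is required.
options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(8));
options.table_factory.reset(rocksdb::NewPlainTableFactory());
options.allow_mmap_reads = true;  // PlainTable is designed for mmap-ed files on ramfs/tmpfs
```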

@ -11,6 +11,8 @@ For a `Get()` request, RocksDB goes through mutable memtable, list of immutable
On level 0, files are sorted based on the time they were flushed. Their key ranges (as defined by FileMetaData.smallest and FileMetaData.largest) mostly overlap with each other, so a lookup needs to check every L0 file.
<!--truncate-->
Compaction is scheduled periodically to pick up files from an upper level and merge them with files from a lower level. As a result, key/values are moved from L0 down the LSM tree gradually. Compaction sorts key/values and splits them into files. From level 1 and below, SST files are sorted based on key, and their key ranges are mutually exclusive. Instead of scanning through each SST file and checking whether a key falls into its range, RocksDB performs a binary search based on FileMetaData.largest to locate the candidate file that can potentially contain the target key. This reduces complexity from O(N) to O(log(N)). However, log(N) can still be large for bottom levels. For a fan-out ratio of 10, level 3 can have 1000 files. That requires 10 comparisons to locate a candidate file. This is a significant cost for an in-memory database when you can do [several million gets per second](https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks).
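An illustrative sketch (not RocksDB's actual data structures) of the per-level candidate-file search described above:

```c++
#include <algorithm>
#include <string>
#include <vector>

// Simplified stand-in for FileMetaData: each file covers [smallest, largest].
struct FileRange {
  std::string smallest;
  std::string largest;
};

// From level 1 down, files are sorted by key and non-overlapping, so the first
// file whose `largest` >= key is the only candidate that can contain the key.
int FindCandidateFile(const std::vector<FileRange>& level, const std::string& key) {
  auto it = std::lower_bound(
      level.begin(), level.end(), key,
      [](const FileRange& f, const std::string& k) { return f.largest < k; });
  if (it == level.end() || key < it->smallest) return -1;  // key not covered
  return static_cast<int>(it - level.begin());
}
```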
One observation about this problem is that after the LSM tree is built, an SST file's position in its level is fixed. Furthermore, its order relative to files in the next level is also fixed. Based on this idea, we can perform a [fractional cascading](http://en.wikipedia.org/wiki/Fractional_cascading)-style optimization to narrow down the binary search range. Here is an example:

@ -11,8 +11,7 @@ In this post, we briefly introduce the recent improvements we did to RocksDB to
RocksDB has a simple thread synchronization mechanism (see the [RocksDB Architecture Guide](https://github.com/facebook/rocksdb/wiki/Rocksdb-Architecture-Guide) to understand terms used below, like SST tables or mem tables). SST tables are immutable after being written and mem tables are lock-free data structures supporting a single writer and multiple readers. There is only one major lock, the DB mutex (DBImpl.mutex_), protecting all the meta operations, including:
<!--truncate-->
* Increase or decrease reference counters of mem tables and SST tables

@ -11,9 +11,6 @@ Check out new RocksDB release on [Github](https://github.com/facebook/rocksdb/re
New features in RocksDB 3.0:
* [Column Family support](https://github.com/facebook/rocksdb/wiki/Column-Families)
@ -22,6 +19,6 @@ New features in RocksDB 3.0:
* Deprecated ReadOptions::prefix_seek and ReadOptions::prefix
<!--truncate-->
Check out the full [change log](https://github.com/facebook/rocksdb/blob/3.0.fb/HISTORY.md).
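A minimal sketch of the column family API highlighted above (paths and names are illustrative):

```c++
#include "rocksdb/db.h"

rocksdb::Options options;
options.create_if_missing = true;

rocksdb::DB* db = nullptr;
rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb_cf_demo", &db);

// Create an additional column family next to the default one and write to it.
rocksdb::ColumnFamilyHandle* cf = nullptr;
s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "new_cf", &cf);
s = db->Put(rocksdb::WriteOptions(), cf, "key", "value");

delete cf;
delete db;
```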

@ -11,14 +11,10 @@ Check out the new release on [Github](https://github.com/facebook/rocksdb/releas
New features in RocksDB 3.1:
* [Materialized hash index](https://github.com/facebook/rocksdb/commit/0b3d03d026a7248e438341264b4c6df339edc1d7)
* [FIFO compaction style](https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style)
We released 3.1 so fast after 3.0 because one of our internal customers needed materialized hash index.
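A minimal sketch of enabling the FIFO compaction style listed above (the size threshold is illustrative):

```c++
#include "rocksdb/options.h"

rocksdb::Options options;
options.compaction_style = rocksdb::kCompactionStyleFIFO;
// Drop the oldest SST files once the total size exceeds this threshold.
options.compaction_options_fifo.max_table_files_size = 1024 * 1024 * 1024;  // 1GB
```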

@ -17,6 +17,8 @@ Design goals:
1. Minimize memory consumption.
1. Queries efficiently return empty results
<!--truncate-->
Notice that our priority was not to maximize query performance, but to strike a balance between query performance and memory consumption. PlainTable query performance is not as good as you would see with a nicely-designed hash table, but it is of the same order of magnitude, while keeping memory overhead to a minimum.
Since we are targeting microsecond latency, performance is dominated by the number of CPU cache misses (if they cannot be parallelized, which is usually the case for index look-ups). On our target hardware, Intel CPUs with multiple sockets and NUMA, we can only allow 4-5 CPU cache misses (including the cost of data TLB misses).

@ -9,32 +9,21 @@ redirect_from:
As promised in the previous [blog post](blog/2014/05/14/lock.html)!
RocksDB employs a multiversion concurrency control strategy. Before reading data, it needs to grab the current version, which is encapsulated in a data structure called [SuperVersion](https://reviews.facebook.net/rROCKSDB1fdb3f7dc60e96394e3e5b69a46ede5d67fb976c).
<!--truncate-->
At the beginning of `GetImpl()`, it used to do this:
<span class="zw-portion">mutex_.Lock();
</span>auto* s = super_version_->Ref();
mutex_.Unlock();
The lock is necessary because the pointer super_version_ may be updated and the corresponding SuperVersion may be deleted while Ref() is in progress.
`Ref()` simply increases the reference counter and returns the "this" pointer. However, this simple operation posed big challenges for in-memory workloads and stopped RocksDB from scaling read throughput beyond 8 cores. Running 32 read threads on a 32-core CPU leads to [70% system CPU usage](https://github.com/facebook/rocksdb/raw/gh-pages/talks/2014-03-27-RocksDB-Meetup-Lei-Lockless-Get.pdf). This is outrageous!

@ -11,14 +11,12 @@ Check out new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/r
New Features in RocksDB 3.2:
* PlainTable now supports a new key encoding: for keys with the same prefix, the prefix is only written once. It can be enabled through the encoding_type parameter of NewPlainTableFactory()
* Added AdaptiveTableFactory, which is used to convert a DB from PlainTable to BlockBasedTable, or vice versa. It can be created using NewAdaptiveTableFactory()
<!--truncate-->
Public API changes:

@ -11,14 +11,12 @@ Check out new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/r
New Features in RocksDB 3.3:
* **JSON API prototype**.
* **Performance improvement on HashLinkList**: We addressed a performance outlier in HashLinkList caused by skewed buckets by switching the data in a bucket from a linked list to a skip list. Added the parameter threshold_use_skiplist to NewHashLinkListRepFactory().
<!--truncate-->
* **More effective storage space reclaim**: RocksDB is now able to reclaim storage space more effectively during the compaction process. This is done by compensating the size of each deletion entry with 2X the average value size, which makes compactions triggered by deletion entries more likely.

@ -9,22 +9,12 @@ redirect_from:
## Introduction
We recently introduced a new [Cuckoo Hashing](http://en.wikipedia.org/wiki/Cuckoo_hashing) based SST file format which is optimized for fast point lookups. The new format was built for applications which require very high point lookup rates (~4 Mqps) in read-only mode but do not use operations like range scan, merge operator, etc. The existing RocksDB file formats, however, were built to support range scans and other operations, and the current best point-lookup rate in RocksDB is 1.2 Mqps, given by the [PlainTable format](https://github.com/facebook/rocksdb/wiki/PlainTable-Format). This prompted a hashing-based file format, which we present here. The new table format uses a cache-friendly version of the Cuckoo Hashing algorithm with only 1 or 2 memory accesses per lookup.
<!--truncate-->
Goals:
* Reduce memory accesses per lookup to 1 or 2
@ -34,90 +24,43 @@ Goals:
* Minimize database size
Assumptions:
* Key length and value length are fixed
* The database is operated in read only mode
Non-goals:
* While optimizing the performance of Get() operation was our primary goal, compaction and build times were secondary. We may work on improving them in future.
Details for setting up the table format can be found in [GitHub](https://github.com/facebook/rocksdb/wiki/CuckooTable-Format).
## Cuckoo Hashing Algorithm
In order to achieve high lookup speeds, we did multiple optimizations, including a cache-friendly cuckoo hash algorithm. Cuckoo Hashing uses multiple hash functions, _h1, ..., hn_.
### Original Cuckoo Hashing
To insert any new key _k_, we compute hashes of the key, _h1(k), ..., hn(k)_. We insert the key in the first hash location that is free. If all the locations are occupied, we try to move one of the colliding keys to a different location by trying to re-insert it.
Finding the smallest set of keys to displace in order to accommodate the new key is naturally a shortest path problem in a directed graph where nodes are buckets of the hash table and there is an edge from bucket _A_ to bucket _B_ if the element stored in bucket _A_ can be accommodated in bucket _B_ using one of the hash functions. The source nodes are the possible hash locations for the given key _k_ and the destination is any empty bucket. We use this algorithm to handle collisions.
To retrieve a key _k_, we compute the hashes _h1(k), ..., hn(k)_, and the key must be present in one of these locations.
Our goal is to minimize the average (and maximum) number of hash functions required, and hence the number of memory accesses. In our experiments, with a hash utilization of 90%, we found that the average number of lookups is 1.8 and the maximum is 3. Around 44% of keys are accommodated in the first hash location and 33% in the second.
### Cache Friendly Cuckoo Hashing
We noticed the following two sub-optimal properties in the original Cuckoo implementation:
* If the key is not present in the first hash location, we jump to the second hash location, which may not be in cache. This results in many cache misses.
@ -125,10 +68,7 @@ We noticed the following two sub-optimal properties in original Cuckoo implement
The solution is to insert more keys near the first location. In case of a collision in the first hash location, _h1(k)_, we try to insert the key in the next few buckets, _h1(k)+1, h1(k)+2, ..., h1(k)+t-1_. If all of these _t_ locations are occupied, we skip over to the next hash function _h2_ and repeat the process. We call this set of _t_ buckets a _Cuckoo Block_. We chose _t_ such that the size of a block is not bigger than a cache line, and we prefetch the first cuckoo block.
With the new algorithm, at 90% hash utilization, we found that 85% of keys are accommodated in the first Cuckoo Block. Prefetching the first cuckoo block yields the best results. For a database of 100 million keys with key length 8 and value length 4, the hash algorithm alone can achieve 9.6 Mqps, and we are working on improving it further. End-to-end RocksDB performance results can be found [here](https://github.com/facebook/rocksdb/wiki/CuckooTable-Format).
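An illustrative sketch of the lookup path with cuckoo blocks (a toy table, not RocksDB's CuckooTable implementation; the hash mixing constant and block size are assumptions):

```c++
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct ToyCuckooTable {
  static const size_t kNumHashes = 2;  // h1, h2
  static const size_t kBlockSize = 4;  // "t" buckets probed per hash, ~one cache line
  std::vector<std::pair<std::string, std::string>> buckets;  // (key, value); empty key = free

  size_t Hash(const std::string& key, size_t i) const {
    return (std::hash<std::string>{}(key) + i * 0x9e3779b97f4a7c15ULL) % buckets.size();
  }

  bool Get(const std::string& key, std::string* value) const {
    for (size_t i = 0; i < kNumHashes; ++i) {
      size_t base = Hash(key, i);
      // Probe the whole cuckoo block near the hash location before moving to the
      // next hash function, so collisions usually stay within one cache line.
      for (size_t j = 0; j < kBlockSize; ++j) {
        const auto& bucket = buckets[(base + j) % buckets.size()];
        if (bucket.first == key) {
          *value = bucket.second;
          return true;
        }
      }
    }
    return false;
  }
};
```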

@ -9,99 +9,44 @@ redirect_from:
## Introduction
In this post, we are introducing "full filter block" --- a new bloom filter format for [block based table](https://github.com/facebook/rocksdb/wiki/Rocksdb-BlockBasedTable-Format). This can bring about a 40% improvement for key queries in in-memory workloads (all data stored in memory, files stored in tmpfs/ramfs; see an [example](https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks) workload). The main idea behind it is to generate one big filter that covers all the keys in the SST file, to avoid lots of unnecessary memory lookups.
<!--truncate-->
## What is Bloom Filter
In brief, a [bloom filter](https://github.com/facebook/rocksdb/wiki/RocksDB-Bloom-Filter) is a bit array generated from a set of keys that can tell whether an arbitrary key may exist in that set.
In RocksDB, we generate such a bloom filter for each SST file. When we query a key, we first go to the bloom filter block of the SST file. If the filter says the key may exist, we go into the data blocks of the SST file to search for the key. If not, we return directly. So it can speed up point lookups a lot.
## Original Bloom Filter Format
The original bloom filter creates a filter for each individual data block in the SST file. It has a complex structure (see [here](https://github.com/facebook/rocksdb/wiki/Rocksdb-BlockBasedTable-Format#filter-meta-block)), which results in a lot of non-adjacent memory lookups.
Here's the workflow for checking the original bloom filter in a block based table:
1. Given the target key, we go to the index block to get the "data block ID" where this key may reside.
1. Using the "data block ID", we go to the filter block and get the correct "offset of filter".
1. Using the "offset of filter", we go to the actual filter and do the checking.
## New Bloom Filter Format
The new bloom filter creates one filter for all keys in the SST file, and we name it the "full filter". The data structure of the full filter is very simple; there is just one big filter:
    [ full filter ]
In this way, the workflow of bloom filter checking is much simpler.
(1) Given the target key, we go directly to the filter block and do the filter checking.
To be specific, there is no lookup in the index block and no address jumping inside the filter block.
Though it is one big filter, the total filter size is the same as with the original format.
One small drawback is that the new bloom filter consumes more memory when building an SST file, because we need to buffer keys (or their hashes) before generating the filter. The original format just creates a bunch of small filters, so it only buffers a small number of keys. For the full filter, we buffer the hashes of all keys, which takes more memory as the SST file size increases.
## Usage & Customization
You can refer to the document here for [usage](https://github.com/facebook/rocksdb/wiki/RocksDB-Bloom-Filter#usage-of-new-bloom-filter) and [customization](https://github.com/facebook/rocksdb/wiki/RocksDB-Bloom-Filter#customize-your-own-filterpolicy).
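A minimal sketch of opting into the full-filter format (the second argument of NewBloomFilterPolicy selects the full filter when false; the bits-per-key value is illustrative):

```c++
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

rocksdb::BlockBasedTableOptions table_options;
// false = build one full filter per SST file instead of per-data-block filters.
table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10, false));

rocksdb::Options options;
options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_options));
```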
 

@ -10,31 +10,19 @@ redirect_from:
New RocksDB release - 3.5!
**New Features**
1. Added include/utilities/write_batch_with_index.h, providing a utility class to query data out of a WriteBatch while building it.
2. New ReadOptions.total_order_seek to force a total-order seek when a block-based table is built with a hash index.
<!--truncate-->
**Public API changes**
1. The prefix extractor used with V2 compaction filters is now passed the user key to SliceTransform::Transform instead of the unparsed RocksDB key.

@ -9,6 +9,8 @@ redirect_from:
If you have an existing application that uses LevelDB and would like to migrate to RocksDB, one problem you need to overcome is mapping the options for LevelDB to proper options for RocksDB. As of release 3.9 this can be done automatically by using our option conversion utility found in rocksdb/utilities/leveldb_options.h. What is needed is to first replace `leveldb::Options` with `rocksdb::LevelDBOptions`, and then use `rocksdb::ConvertOptions()` to convert the `LevelDBOptions` struct into the appropriate RocksDB options. Here is an example:
<!--truncate-->
LevelDB code:
```c++
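// The original example is truncated in this view; a hedged sketch of the
// migration described above (field values are illustrative):
#include "rocksdb/utilities/leveldb_options.h"

rocksdb::LevelDBOptions leveldb_options;   // drop-in replacement for leveldb::Options
leveldb_options.create_if_missing = true;
leveldb_options.write_buffer_size = 8 * 1024 * 1024;

// Convert into native RocksDB options and use them to open the DB.
rocksdb::Options options = rocksdb::ConvertOptions(leveldb_options);
```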

@ -9,6 +9,8 @@ redirect_from:
RocksDB options can be provided using a file or any string to RocksDB. The format is straightforward: `write_buffer_size=1024;max_write_buffer_number=2`. Any whitespace around `=` and `;` is OK. Moreover, options can be nested as necessary. For example, `BlockBasedTableOptions` can be nested as follows: `write_buffer_size=1024; max_write_buffer_number=2; block_based_table_factory={block_size=4k};`. Similarly, any whitespace around `{` or `}` is OK. Here is what it looks like in code:
<!--truncate-->
```c++
#include <string>
#include "rocksdb/db.h"

@ -9,6 +9,8 @@ redirect_from:
RocksDB can be used as the storage engine of a higher-level database. In fact, we are currently plugging RocksDB into MySQL and MongoDB as one of their storage engines. RocksDB can help with guaranteeing some of the ACID properties: durability is guaranteed by RocksDB by design; consistency and isolation need to be enforced by concurrency control on top of RocksDB; atomicity can be implemented by committing a transaction's writes to RocksDB in one write batch at the end.
<!--truncate-->
However, if we enforce atomicity by only committing all writes at the end of the transaction in one batch, a transaction cannot read back a value it has previously written from RocksDB (read-your-own-writes). To read the updated value, the database on top of RocksDB needs to maintain an internal buffer of all the written keys, and when a read happens it needs to merge the result from RocksDB with the result from this buffer. This is a problem we faced when building the RocksDB storage engine for MongoDB. We solved it by creating a utility class, WriteBatchWithIndex (a write batch with a searchable index), and made it part of the public API so that the community can also benefit from it.
Before talking about the index part, let me introduce the write batch first. The write batch class, `WriteBatch`, is a RocksDB data structure for atomic writes of multiple keys. Users can buffer their updates in a `WriteBatch` by calling `write_batch.Put("key1", "value1")` or `write_batch.Delete("key2")`, similar to calling RocksDB's functions of the same names. In the end, they call `db->Write(write_batch)` to atomically apply all those batched operations to the DB. This is how a database can guarantee atomicity, as shown above. Adding a searchable index to `WriteBatch`, we get `WriteBatchWithIndex`. Users can add updates to a `WriteBatchWithIndex` in the same way as to a `WriteBatch`. In the end, they can get a `WriteBatch` object from it and issue `db->Write()`. Additionally, users can create an iterator over a `WriteBatchWithIndex`, seek to any key, and iterate from there.
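A minimal sketch of the read-your-own-writes pattern described above, using the iterator-with-base approach (keys are illustrative):

```c++
#include "rocksdb/db.h"
#include "rocksdb/utilities/write_batch_with_index.h"

rocksdb::WriteBatchWithIndex batch;
batch.Put("key1", "value1");
batch.Delete("key2");

// Read your own writes before committing: merge the batch with the DB's state.
rocksdb::ReadOptions read_options;
rocksdb::Iterator* iter = batch.NewIteratorWithBase(db->NewIterator(read_options));
for (iter->Seek("key1"); iter->Valid(); iter->Next()) {
  // Sees "key1" from the batch plus committed keys from the DB, minus "key2".
}
delete iter;

// Commit all buffered updates atomically.
rocksdb::Status s = db->Write(rocksdb::WriteOptions(), batch.GetWriteBatch());
```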

@ -9,6 +9,8 @@ redirect_from:
Over the last couple of years, we have been busy integrating RocksDB with various services here at Facebook that needed to store key-value pairs locally. We have also seen other companies using RocksDB as local storage components of their distributed systems.
<!--truncate-->
The next big challenge for us is to bring RocksDB storage engine to general purpose databases. Today we have an exciting milestone to share with our community! We're running MongoDB with RocksDB in production and seeing great results! You can read more about it here: [http://blog.parse.com/announcements/mongodb-rocksdb-parse/](http://blog.parse.com/announcements/mongodb-rocksdb-parse/)
Stay tuned for benchmarks and more stability and performance improvements.

@ -9,6 +9,8 @@ redirect_from:
Every six months, the RocksDB team gets together to prioritize the work ahead of us. We just went through this exercise and we wanted to share the results with the community. Here's what the RocksDB team will be focusing on for the next six months:
<!--truncate-->
**MyRocks**
As you might know, we're working hard to integrate RocksDB as a storage engine for MySQL. This project is pretty important for us because we're heavy users of MySQL. We're already getting pretty good performance results, but there is more work to be done. We need to focus on both performance and stability. The highest-priority items on our list are:

@ -9,6 +9,8 @@ redirect_from:
About a year ago, there was a need to develop a spatial database at Facebook. We needed to store and index Earth's map data. Before building our own, we looked at the existing spatial databases. They were all very good technology, but also general purpose. We could sacrifice a general-purpose API, so we thought we could build a more performant database, since it would be specifically designed for our use-case. Furthermore, we decided to build the spatial database on top of RocksDB, because we have a lot of operational experience with running and tuning RocksDB at a large scale.
<!--truncate-->
When we started looking at this project, the first thing that surprised us was that our planet is not that big. Earth's entire map data can fit in memory on a reasonably high-end machine. Thus, we also decided to build a spatial database optimized for a memory-resident dataset.
The first use-case of our spatial database was an experimental map renderer. As part of our project, we successfully loaded [Open Street Maps](https://www.openstreetmap.org/) dataset and hooked it up with [Mapnik](http://mapnik.org/), a map rendering engine.

@ -11,6 +11,8 @@ Over the past 6 months we have seen a number of use cases where RocksDB is succe
We at Microsoft Bing could not be left behind. As a result we are happy to [announce](http://bit.ly/1OmWBT9) the availability of the Windows Port created here at Microsoft which we intend to use as a storage option for one of our key/value data stores.
<!--truncate-->
We are happy to make this available to the community. Stay tuned for more announcements to come.
### Comments

@ -9,29 +9,20 @@ redirect_from:
In this article, we follow up on the first part of an answer to one of the questions in our [AMA](https://www.reddit.com/r/IAmA/comments/3de3cv/we_are_rocksdb_engineering_team_ask_us_anything/ct4a8tb), the dynamic level size in level-based compaction.
<!--truncate-->
Level-based compaction is the original LevelDB compaction style and one of the two major compaction styles in RocksDB (See [our wiki](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics#multi-threaded-compactions)). In RocksDB we introduced parallelism and more configurable options to it but the main algorithm stayed the same, until we recently introduced the dynamic level size mode.
In level-based compaction, we organize data into sorted runs called levels. Each level has a target size, and usually the target size grows from one level to the next by the same size multiplier. For example, if you set the target size of level 1 to 1GB and the size multiplier to 10, the target sizes of levels 1, 2, 3 and 4 will be 1GB, 10GB, 100GB and 1000GB. Before level 1, there are some staging files flushed from mem tables, called level 0 files, which will later be merged into level 1. Compactions are triggered as soon as the actual size of a level exceeds its target size: we merge a subset of data from that level into the next level to reduce the level's size, and more compactions are triggered until the sizes of all levels are below their target sizes. In a steady state, the size of each level stays around its target size.
Level-based compaction's advantage is its good space efficiency. We usually use the metric space amplification to measure space efficiency. In this article we ignore the effects of data compression, so space amplification = size_on_file_system / size_of_user_data.
How do we estimate the space amplification of level-based compaction? We focus specifically on databases in a steady state, which means the database size is stable or grows slowly over time. That means updates add roughly the same amount of data as (or only a little more than) what deletes remove. Given that, if we compacted all the data down to the last level, the result would be about the size of the last level before the compaction, and it would also be approximately the size of the user data. So the size of the last level is a good estimate of the user data size, and the total size of the DB divided by the size of the last level is a good estimate of the space amplification.
Applying the equation, if we have four non-zero levels with sizes 1GB, 10GB, 100GB and 1000GB, the space amplification will be approximately (1000GB + 100GB + 10GB + 1GB) / 1000GB = 1.111, which is a very good number. However, there is a catch: how do we make sure the last level's size is actually 1000GB, the same as the level's target size? A user has to fine-tune level sizes to achieve this number and will need to re-tune if the DB size changes, so the theoretical 1.11 is hard to achieve in practice. In a worse case, if the target size of the last level is 1000GB but the user data is only 200GB, the actual space amplification will be (200GB + 100GB + 10GB + 1GB) / 200GB = 1.555, a much worse number.
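A minimal sketch of turning on the dynamic level size mode that this post goes on to describe (option names as in rocksdb/options.h; the multiplier value is illustrative):

```c++
#include "rocksdb/options.h"

rocksdb::Options options;
options.compaction_style = rocksdb::kCompactionStyleLevel;
// Derive per-level target sizes from the actual size of the last level,
// instead of relying on hand-tuned, fixed level targets.
options.level_compaction_dynamic_level_bytes = true;
options.max_bytes_for_level_multiplier = 10;  // illustrative fan-out
```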

@ -9,6 +9,8 @@ redirect_from:
We recently added a new API, called `GetThreadList()`, that exposes RocksDB's background thread activity. With this feature, developers can obtain real-time information about currently running compactions and flushes, such as the input/output size, elapsed time, and the number of bytes written so far. Below is an example output of `GetThreadList`. To better illustrate the example, we have put a sample output of `GetThreadList` into a table where each column represents a thread status:
<!--truncate-->
<table width="637" >
<tbody >
<tr style="border:2px solid #000000" >
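A minimal sketch of polling this API (thread tracking must be enabled in the options; names as in rocksdb/env.h and rocksdb/thread_status.h):

```c++
#include <cinttypes>
#include <cstdio>
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/thread_status.h"

// options.enable_thread_tracking = true;  // set before DB::Open()
std::vector<rocksdb::ThreadStatus> thread_list;
rocksdb::Status s = db->GetEnv()->GetThreadList(&thread_list);
for (const auto& thread : thread_list) {
  // operation_type reports whether the thread is running a compaction or a flush.
  std::printf("thread %" PRIu64 ": %s\n", thread.thread_id,
              rocksdb::ThreadStatus::GetOperationName(thread.operation_type).c_str());
}
```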

@ -9,7 +9,7 @@ redirect_from:
**Checkpoint** is a feature in RocksDB which provides the ability to take a snapshot of a running RocksDB database in a separate directory. Checkpoints can be used as a point-in-time snapshot: open one read-only to query rows as of that point in time, or open it read-write as a writeable snapshot. Checkpoints can be used for both full and incremental backups.
<!--truncate-->
The Checkpoint feature enables RocksDB to create a consistent snapshot of a given RocksDB database in the specified directory. If the snapshot is on the same filesystem as the original database, the SST files will be hard-linked, otherwise SST files will be copied. The manifest and CURRENT files will be copied. In addition, if there are multiple column families, log files will be copied for the period covering the start and end of the checkpoint, in order to provide a consistent snapshot across column families.
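A minimal sketch of taking a checkpoint with this feature (the target directory is illustrative):

```c++
#include "rocksdb/db.h"
#include "rocksdb/utilities/checkpoint.h"

rocksdb::Checkpoint* checkpoint = nullptr;
rocksdb::Status s = rocksdb::Checkpoint::Create(db, &checkpoint);
if (s.ok()) {
  // Creates a consistent snapshot of `db` in the given directory, hard-linking
  // SST files when the directory is on the same filesystem, copying otherwise.
  s = checkpoint->CreateCheckpoint("/path/to/checkpoint_dir");
}
delete checkpoint;
```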

@ -11,6 +11,8 @@ In many use cases of RocksDB, people rely on OS page cache for caching compresse
As an example, a user can tune the DB as follows: use level-based compaction with L1-L4 sizes of 1GB, 10GB, 100GB and 1TB, and reserve about 20GB of memory as OS page cache, expecting levels 0, 1 and 2 to be mostly cached in memory, leaving only reads from levels 3 and 4 requiring disk I/Os. However, in practice it's not easy to verify that the OS page cache does exactly what we expect. For example, if we end up doing 4 instead of 2 I/Os per query, it's not easy for users to figure out whether that's because of the efficiency of the OS page cache or because multiple blocks are read for a level. Analysis like this is especially important if users run RocksDB on hard drives, because the latency gap between hard drives and memory is much larger than with flash-based SSDs.
<!--truncate-->
To make tuning easier, we added new instrumentation to help users analyze the latency distribution of file reads at different levels. If users turn DB statistics on, we always keep track of the distribution of file read latency for each level. Users can retrieve the information by querying the DB property “rocksdb.stats” ( [https://github.com/facebook/rocksdb/blob/v3.13.1/include/rocksdb/db.h#L315-L316](https://github.com/facebook/rocksdb/blob/v3.13.1/include/rocksdb/db.h#L315-L316) ). It is also printed out periodically as part of the compaction summary in the info logs.
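A minimal sketch of enabling statistics and reading the property mentioned above:

```c++
#include <cstdio>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/statistics.h"

// Turn DB statistics on before opening the DB.
rocksdb::Options options;
options.statistics = rocksdb::CreateDBStatistics();

// ... after the DB has served some reads ...
std::string stats;
if (db->GetProperty("rocksdb.stats", &stats)) {
  std::printf("%s\n", stats.c_str());  // includes per-level file read latency histograms
}
```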
The output looks like this:

@ -9,6 +9,8 @@ redirect_from:
The most popular compaction style in RocksDB is level-based compaction, which is an improved version of LevelDB's compaction algorithm. Pages 9-16 of these [slides](https://github.com/facebook/rocksdb/blob/gh-pages/talks/2015-09-29-HPTS-Siying-RocksDB.pdf) give an illustrated introduction to this compaction style. The basic idea is that data is organized into multiple levels with exponentially increasing target sizes. Except for a special level 0, every level is key-range partitioned into many files. When the size of a level exceeds its target size, we pick one or more of its files and merge them into the next level.
<!--truncate-->
Which file to pick for compaction is an interesting question. LevelDB uses only one thread for compaction and always picks files in a round-robin manner. We implemented multi-threaded compaction in RocksDB by picking multiple files from the same level and compacting them in parallel, so we had to move away from LevelDB's file-picking approach. Recently, we created an option, [options.compaction_pri](https://github.com/facebook/rocksdb/blob/d6c838f1e130d8860407bc771fa6d4ac238859ba/include/rocksdb/options.h#L83-L93), which offers three different algorithms for picking files to compact.
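A minimal sketch of setting this option (the enum value picked here is just one of the available choices):

```c++
#include "rocksdb/options.h"

rocksdb::Options options;
options.compaction_style = rocksdb::kCompactionStyleLevel;
// One of the file-picking algorithms exposed through compaction_pri, e.g. prefer
// files whose key ranges have gone the longest without being compacted down.
options.compaction_pri = rocksdb::kOldestSmallestSeqFirst;
```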
Why do we need multiple algorithms to choose from? Because there are different factors to consider when picking files, and we don't yet know how to balance them automatically, so we expose the choice to users. Here are the factors to consider:

@ -9,10 +9,6 @@ redirect_from:
In RocksDB 4.3, we added a new set of features that make managing RocksDB options easier. Specifically:
* **Persisting Options Automatically**: Each RocksDB database will now automatically persist its current set of options into an INI file on every successful call to DB::Open(), SetOptions(), and CreateColumnFamily() / DropColumnFamily().
@ -23,6 +19,6 @@ In RocksDB 4.3, we added a new set of features that makes managing RocksDB optio
* **Sanity Check Options**: We added [CheckOptionsCompatibility](https://github.com/facebook/rocksdb/blob/4.3.fb/include/rocksdb/utilities/options_util.h#L64-L77) that performs compatibility check on two sets of RocksDB options.
<!--truncate-->
Want to know more about how to use these new features? Check out the [RocksDB Options File wiki page](https://github.com/facebook/rocksdb/wiki/RocksDB-Options-File) and start using them today!
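A minimal sketch of loading a persisted options file back with the helpers in rocksdb/utilities/options_util.h (the DB path is illustrative):

```c++
#include <vector>
#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/utilities/options_util.h"

rocksdb::DBOptions db_options;
std::vector<rocksdb::ColumnFamilyDescriptor> cf_descs;

// Reads the most recently persisted OPTIONS file for this DB.
rocksdb::Status s = rocksdb::LoadLatestOptions(
    "/path/to/db", rocksdb::Env::Default(), &db_options, &cf_descs);
```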

@ -30,6 +30,8 @@ redirect_from:
<br/>
<!--truncate-->
## 4.4.0 (1/14/2016)
### Public API Changes

@ -22,6 +22,8 @@ redirect_from:
<br/>
<!--truncate-->
## [](https://github.com/facebook/rocksdb/blob/master/HISTORY.md#470-482016)4.7.0 (4/8/2016)
### [](https://github.com/facebook/rocksdb/blob/master/HISTORY.md#public-api-change-2)Public API Change

@ -13,6 +13,8 @@ We abandoned release candidates 4.10.x and directly go to 4.11.2 from 4.9, to ma
* Segfault when failing to open an SST file for read-ahead iterators.
* WAL without data for all CFs is not deleted after recovery.
<!--truncate-->
## 4.11.1 (8/30/2016)
### Bug Fixes
