Add redirects from old blog post links to the new format

Summary:
The new blog post links will be formatted differently once the site moves over to gh-pages,
but we can redirect the old-style links to the new style for existing blog posts.

Test Plan:
Visual

https://www.facebook.com/pxlcld/pvWQ

Reviewers: lgalanis, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D63513
Branch: main
Author: Joel Marcey
parent 607628d349
commit 1ec75ee76b
36 changed files:

 docs/Gemfile | 1
 docs/Gemfile.lock | 12
 docs/_config.yml | 3
 docs/_posts/2014-03-27-how-to-backup-rocksdb.markdown | 2
 docs/_posts/2014-03-27-how-to-persist-in-memory-rocksdb-database.markdown | 2
 docs/_posts/2014-04-02-the-1st-rocksdb-local-meetup-held-on-march-27-2014.markdown | 2
 docs/_posts/2014-04-07-rocksdb-2-8-release.markdown | 2
 docs/_posts/2014-04-21-indexing-sst-files-for-better-lookup-performance.markdown | 2
 docs/_posts/2014-05-14-lock.markdown | 2
 docs/_posts/2014-05-19-rocksdb-3-0-release.markdown | 2
 docs/_posts/2014-05-22-rocksdb-3-1-release.markdown | 2
 docs/_posts/2014-06-23-plaintable-a-new-file-format.markdown | 2
 docs/_posts/2014-06-27-avoid-expensive-locks-in-get.markdown | 2
 docs/_posts/2014-06-27-rocksdb-3-2-release.markdown | 2
 docs/_posts/2014-07-29-rocksdb-3-3-release.markdown | 2
 docs/_posts/2014-09-12-cuckoo.markdown | 2
 docs/_posts/2014-09-12-new-bloom-filter-format.markdown | 2
 docs/_posts/2014-09-15-rocksdb-3-5-release.markdown | 2
 docs/_posts/2015-01-16-migrating-from-leveldb-to-rocksdb-2.markdown | 2
 docs/_posts/2015-02-24-reading-rocksdb-options-from-a-file.markdown | 2
 docs/_posts/2015-02-27-write-batch-with-index.markdown | 14
 docs/_posts/2015-04-22-integrating-rocksdb-with-mongodb-2.markdown | 2
 docs/_posts/2015-06-12-rocksdb-in-osquery.markdown | 2
 docs/_posts/2015-07-15-rocksdb-2015-h2-roadmap.markdown | 2
 docs/_posts/2015-07-17-spatial-indexing-in-rocksdb.markdown | 2
 docs/_posts/2015-07-22-rocksdb-is-now-available-in-windows-platform.markdown | 2
 docs/_posts/2015-07-23-dynamic-level.markdown | 12
 docs/_posts/2015-10-27-getthreadlist.markdown | 2
 docs/_posts/2015-11-10-use-checkpoints-for-efficient-snapshots.markdown | 2
 docs/_posts/2015-11-16-analysis-file-read-latency-by-level.markdown | 2
 docs/_posts/2016-01-29-compaction_pri.markdown | 2
 docs/_posts/2016-02-24-rocksdb-4-2-release.markdown | 2
 docs/_posts/2016-02-25-rocksdb-ama.markdown | 2
 docs/_posts/2016-03-07-rocksdb-options-file.markdown | 2
 docs/_posts/2016-04-26-rocksdb-4-5-1-released.markdown | 2
 docs/_posts/2016-07-26-rocksdb-4-8-released.markdown | 2

docs/Gemfile
@@ -1,2 +1,3 @@
 source 'https://rubygems.org'
 gem 'github-pages', group: :jekyll_plugins
+gem 'jekyll-redirect-from'

docs/Gemfile.lock
@@ -21,7 +21,7 @@ GEM
 ffi (1.9.14)
 forwardable-extended (2.6.0)
 gemoji (2.1.0)
-github-pages (93)
+github-pages (94)
 activesupport (= 4.2.7)
 github-pages-health-check (= 1.2.0)
 jekyll (= 3.2.1)
@@ -29,7 +29,7 @@ GEM
 jekyll-feed (= 0.5.1)
 jekyll-gist (= 1.4.0)
 jekyll-github-metadata (= 2.0.2)
-jekyll-mentions (= 1.1.3)
+jekyll-mentions (= 1.2.0)
 jekyll-paginate (= 1.1.0)
 jekyll-redirect-from (= 0.11.0)
 jekyll-sass-converter (= 1.3.0)
@@ -71,7 +71,8 @@ GEM
 jekyll-github-metadata (2.0.2)
 jekyll (~> 3.1)
 octokit (~> 4.0)
-jekyll-mentions (1.1.3)
+jekyll-mentions (1.2.0)
+activesupport (~> 4.0)
 html-pipeline (~> 2.3)
 jekyll (~> 3.0)
 jekyll-paginate (1.1.0)
@@ -119,18 +120,21 @@ GEM
 sawyer (0.7.0)
 addressable (>= 2.3.5, < 2.5)
 faraday (~> 0.8, < 0.10)
-terminal-table (1.6.0)
+terminal-table (1.7.0)
+unicode-display_width (~> 1.1)
 thread_safe (0.3.5)
 typhoeus (0.8.0)
 ethon (>= 0.8.0)
 tzinfo (1.2.2)
 thread_safe (~> 0.1)
+unicode-display_width (1.1.0)
 PLATFORMS
 ruby
 DEPENDENCIES
 github-pages
+jekyll-redirect-from
 BUNDLED WITH
 1.12.5

docs/_config.yml
@@ -57,3 +57,6 @@ sass:
 redcarpet:
   extensions: [with_toc_data]
+gems:
+  - jekyll-redirect-from

docs/_posts/2014-03-27-how-to-backup-rocksdb.markdown
@@ -3,6 +3,8 @@ title: How to backup RocksDB?
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/191/how-to-backup-rocksdb/
 ---
 In RocksDB, we have implemented an easy way to backup your DB. Here is a simple example:
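(The post goes on to show the backup API in action. A minimal sketch of that flow, assuming the `BackupEngine` interface from rocksdb/utilities/backupable_db.h and a hypothetical backup directory:)

    #include <cassert>

    #include "rocksdb/db.h"
    #include "rocksdb/utilities/backupable_db.h"

    // Back up an already-open DB into a dedicated backup directory.
    // "/tmp/rocksdb_backup" is a hypothetical path.
    void BackupSketch(rocksdb::DB* db) {
      rocksdb::BackupEngine* backup_engine;
      rocksdb::Status s = rocksdb::BackupEngine::Open(
          rocksdb::Env::Default(),
          rocksdb::BackupableDBOptions("/tmp/rocksdb_backup"),
          &backup_engine);
      assert(s.ok());
      s = backup_engine->CreateNewBackup(db);  // copies only data not yet backed up
      assert(s.ok());
      delete backup_engine;
    }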

docs/_posts/2014-03-27-how-to-persist-in-memory-rocksdb-database.markdown
@@ -3,6 +3,8 @@ title: How to persist in-memory RocksDB database?
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/245/how-to-persist-in-memory-rocksdb-database/
 ---
 In recent months, we have focused on optimizing RocksDB for in-memory workloads. With growing RAM sizes and strict low-latency requirements, lots of applications decide to keep their entire data in memory. Running in-memory database with RocksDB is easy -- just mount your RocksDB directory on tmpfs or ramfs [1]. Even if the process crashes, RocksDB can recover all of your data from in-memory filesystem. However, what happens if the machine reboots?

docs/_posts/2014-04-02-the-1st-rocksdb-local-meetup-held-on-march-27-2014.markdown
@@ -3,6 +3,8 @@ title: The 1st RocksDB Local Meetup Held on March 27, 2014
 layout: post
 author: xjin
 category: blog
+redirect_from:
+  - /blog/323/the-1st-rocksdb-local-meetup-held-on-march-27-2014/
 ---
 On Mar 27, 2014, the RocksDB team @ Facebook held the 1st RocksDB local meetup at FB HQ (Menlo Park, California). We invited around 80 guests from 20+ local companies, including LinkedIn, Twitter, Dropbox, Square, Pinterest, MapR, Microsoft and IBM. In the end, around 50 guests showed up, a show-up rate of about 60%.

docs/_posts/2014-04-07-rocksdb-2-8-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 2.8 release
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/371/rocksdb-2-8-release/
 ---
 Check out the new RocksDB 2.8 release on [Github](https://github.com/facebook/rocksdb/releases/tag/2.8.fb).

docs/_posts/2014-04-21-indexing-sst-files-for-better-lookup-performance.markdown
@@ -3,6 +3,8 @@ title: Indexing SST Files for Better Lookup Performance
 layout: post
 author: leijin
 category: blog
+redirect_from:
+  - /blog/431/indexing-sst-files-for-better-lookup-performance/
 ---
 For a `Get()` request, RocksDB goes through mutable memtable, list of immutable memtables, and SST files to look up the target key. SST files are organized in levels.

docs/_posts/2014-05-14-lock.markdown
@@ -3,6 +3,8 @@ title: Reducing Lock Contention in RocksDB
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/521/lock/
 ---
 In this post, we briefly introduce the recent improvements we made to RocksDB to reduce lock contention costs.

docs/_posts/2014-05-19-rocksdb-3-0-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 3.0 release
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/557/rocksdb-3-0-release/
 ---
 Check out new RocksDB release on [Github](https://github.com/facebook/rocksdb/releases/tag/3.0.fb)!

docs/_posts/2014-05-22-rocksdb-3-1-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 3.1 release
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/575/rocksdb-3-1-release/
 ---
 Check out the new release on [Github](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.1)!

docs/_posts/2014-06-23-plaintable-a-new-file-format.markdown
@@ -3,6 +3,8 @@ title: PlainTable — A New File Format
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/599/plaintable-a-new-file-format/
 ---
 In this post, we are introducing "PlainTable" -- a file format we designed for RocksDB, initially to satisfy a production use case at Facebook.

docs/_posts/2014-06-27-avoid-expensive-locks-in-get.markdown
@@ -3,6 +3,8 @@ title: Avoid Expensive Locks in Get()
 layout: post
 author: leijin
 category: blog
+redirect_from:
+  - /blog/677/avoid-expensive-locks-in-get/
 ---
 As promised in the previous [blog post](blog/2014/05/14/lock.html)!

docs/_posts/2014-06-27-rocksdb-3-2-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 3.2 release
 layout: post
 author: leijin
 category: blog
+redirect_from:
+  - /blog/647/rocksdb-3-2-release/
 ---
 Check out new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.2)!

docs/_posts/2014-07-29-rocksdb-3-3-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 3.3 Release
 layout: post
 author: yhciang
 category: blog
+redirect_from:
+  - /blog/1301/rocksdb-3-3-release/
 ---
 Check out new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.3)!

docs/_posts/2014-09-12-cuckoo.markdown
@@ -3,6 +3,8 @@ title: Cuckoo Hashing Table Format
 layout: post
 author: radheshyam
 category: blog
+redirect_from:
+  - /blog/1367/cuckoo/
 ---
 ## Introduction

docs/_posts/2014-09-12-new-bloom-filter-format.markdown
@@ -3,6 +3,8 @@ title: New Bloom Filter Format
 layout: post
 author: zagfox
 category: blog
+redirect_from:
+  - /blog/1427/new-bloom-filter-format/
 ---
 ## Introduction

docs/_posts/2014-09-15-rocksdb-3-5-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 3.5 Release!
 layout: post
 author: leijin
 category: blog
+redirect_from:
+  - /blog/1547/rocksdb-3-5-release/
 ---
 New RocksDB release - 3.5!

docs/_posts/2015-01-16-migrating-from-leveldb-to-rocksdb-2.markdown
@@ -3,6 +3,8 @@ title: Migrating from LevelDB to RocksDB
 layout: post
 author: lgalanis
 category: blog
+redirect_from:
+  - /blog/1811/migrating-from-leveldb-to-rocksdb-2/
 ---
 If you have an existing application that uses LevelDB and would like to migrate to RocksDB, one problem you need to overcome is mapping the options for LevelDB to proper options for RocksDB. As of release 3.9 this can be done automatically using our option conversion utility found in rocksdb/utilities/leveldb_options.h. What is needed is to first replace `leveldb::Options` with `rocksdb::LevelDBOptions`, then use `rocksdb::ConvertOptions()` to convert the `LevelDBOptions` struct into the appropriate RocksDB options. Here is an example:
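(A minimal sketch of the conversion the post describes, using `rocksdb::LevelDBOptions` and `rocksdb::ConvertOptions()` from rocksdb/utilities/leveldb_options.h; the field values are illustrative:)

    #include "rocksdb/options.h"
    #include "rocksdb/utilities/leveldb_options.h"

    rocksdb::Options MigratedOptions() {
      rocksdb::LevelDBOptions leveldb_opts;  // mirrors the leveldb::Options fields
      leveldb_opts.create_if_missing = true;
      leveldb_opts.write_buffer_size = 8 * 1024 * 1024;
      // Map the LevelDB-style settings onto a full rocksdb::Options struct.
      return rocksdb::ConvertOptions(leveldb_opts);
    }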

docs/_posts/2015-02-24-reading-rocksdb-options-from-a-file.markdown
@@ -3,6 +3,8 @@ title: Reading RocksDB options from a file
 layout: post
 author: lgalanis
 category: blog
+redirect_from:
+  - /blog/1883/reading-rocksdb-options-from-a-file/
 ---
 RocksDB options can be provided using a file or any string to RocksDB. The format is straightforward: `write_buffer_size=1024;max_write_buffer_number=2`. Any whitespace around `=` and `;` is OK. Moreover, options can be nested as necessary. For example `BlockBasedTableOptions` can be nested as follows: `write_buffer_size=1024; max_write_buffer_number=2; block_based_table_factory={block_size=4k};`. Similarly, any whitespace around `{` or `}` is OK. Here is what it looks like in code:
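(A minimal sketch of parsing such a string, assuming the `GetColumnFamilyOptionsFromString()` helper; recent releases declare it in rocksdb/convenience.h and return a `Status`, though its header location and return type have varied across versions:)

    #include <cassert>

    #include "rocksdb/convenience.h"
    #include "rocksdb/options.h"

    rocksdb::ColumnFamilyOptions ParsedOptions() {
      rocksdb::ColumnFamilyOptions base, parsed;
      rocksdb::Status s = rocksdb::GetColumnFamilyOptionsFromString(
          base,
          "write_buffer_size=1024; max_write_buffer_number=2; "
          "block_based_table_factory={block_size=4k};",
          &parsed);
      assert(s.ok());  // malformed option strings are reported, not silently dropped
      return parsed;
    }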

docs/_posts/2015-02-27-write-batch-with-index.markdown
@@ -3,14 +3,16 @@ title: 'WriteBatchWithIndex: Utility for Implementing Read-Your-Own-Writes'
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/1901/write-batch-with-index/
 ---
 RocksDB can be used as a storage engine of a higher-level database. In fact, we are currently plugging RocksDB into MySQL and MongoDB as one of their storage engines. RocksDB can help with guaranteeing some of the ACID properties: durability is guaranteed by RocksDB by design; consistency and isolation need to be enforced by concurrency controls on top of RocksDB; atomicity can be implemented by committing a transaction's writes with one write batch to RocksDB in the end.
 However, if we enforce atomicity by only committing all writes at the end of the transaction in one batch, you cannot get the updated value from RocksDB previously written by the same transaction (read-your-own-write). To read the updated value, the databases on top of RocksDB need to maintain an internal buffer for all the written keys, and when a read happens they need to merge the result from RocksDB and from this buffer. This is a problem we faced when building the RocksDB storage engine in MongoDB. We solved it by creating a utility class, WriteBatchWithIndex (a write batch with a searchable index), and made it part of the public API so that the community can also benefit from it.
 Before talking about the index part, let me introduce the write batch first. The write batch class, `WriteBatch`, is a RocksDB data structure for atomic writes of multiple keys. Users can buffer their updates to a `WriteBatch` by calling `write_batch.Put("key1", "value1")` or `write_batch.Delete("key2")`, similar to calling RocksDB's functions of the same names. In the end, they call `db->Write(write_batch)` to atomically apply all those batched operations to the DB. This is how a database can guarantee atomicity, as shown above. Adding a searchable index to `WriteBatch`, we now have `WriteBatchWithIndex`. Users can put updates to `WriteBatchWithIndex` in the same way as to `WriteBatch`. In the end, they can get a `WriteBatch` object from it and issue `db->Write()`. Additionally, users can create an iterator of a `WriteBatchWithIndex`, seek to any key location and iterate from there.
 To implement read-your-own-write using `WriteBatchWithIndex`, every time the user creates a transaction, we create a `WriteBatchWithIndex` attached to it. All the writes of the transaction go to the `WriteBatchWithIndex` first. When we commit the transaction, we atomically write the batch to RocksDB. When the user calls `Get()`, we first check whether the value exists in the `WriteBatchWithIndex` and return it if so, by seeking and reading from an iterator of the write batch, before checking data in RocksDB. For example, here is how we implement it in MongoDB's RocksDB storage engine: [link](https://github.com/mongodb/mongo/blob/a31cc114a89a3645e97645805ba77db32c433dce/src/mongo/db/storage/rocks/rocks_recovery_unit.cpp#L245-L260). If a range query comes, we pass a DB iterator to `WriteBatchWithIndex`, which creates a super iterator that combines the results from the DB iterator with the batch's iterator. Using this super iterator, we can iterate the DB with the transaction's own writes. Here is the iterator creation code in MongoDB's RocksDB storage engine: [link](https://github.com/mongodb/mongo/blob/a31cc114a89a3645e97645805ba77db32c433dce/src/mongo/db/storage/rocks/rocks_recovery_unit.cpp#L266-L269). In this way, the database can solve the read-your-own-write problem by using RocksDB to handle a transaction's uncommitted writes.
 Using `WriteBatchWithIndex`, we successfully implemented read-your-own-writes in the RocksDB storage engine of MongoDB. If you also have a read-your-own-write problem, `WriteBatchWithIndex` can help you implement it quickly and correctly.
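(A minimal sketch of the transaction flow described above, against the public `WriteBatchWithIndex` API; note `GetFromBatchAndDB()` was added somewhat after this post, while the rest matches the description:)

    #include <memory>
    #include <string>

    #include "rocksdb/db.h"
    #include "rocksdb/utilities/write_batch_with_index.h"

    void TransactionSketch(rocksdb::DB* db) {
      rocksdb::WriteBatchWithIndex batch;
      batch.Put("key1", "value1");  // buffered and indexed, not yet in the DB
      batch.Delete("key2");

      // Read-your-own-write: consult the batch first, then fall back to the DB.
      std::string value;
      rocksdb::Status s =
          batch.GetFromBatchAndDB(db, rocksdb::ReadOptions(), "key1", &value);

      // Range queries: wrap a DB iterator into a "super iterator" that merges
      // the DB's data with the batch's uncommitted writes.
      std::unique_ptr<rocksdb::Iterator> iter(
          batch.NewIteratorWithBase(db->NewIterator(rocksdb::ReadOptions())));

      // Commit: apply the whole batch to the DB in one atomic write.
      s = db->Write(rocksdb::WriteOptions(), batch.GetWriteBatch());
    }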

docs/_posts/2015-04-22-integrating-rocksdb-with-mongodb-2.markdown
@@ -3,6 +3,8 @@ title: Integrating RocksDB with MongoDB
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/1967/integrating-rocksdb-with-mongodb-2/
 ---
 Over the last couple of years, we have been busy integrating RocksDB with various services here at Facebook that needed to store key-value pairs locally. We have also seen other companies using RocksDB as local storage components of their distributed systems.

docs/_posts/2015-06-12-rocksdb-in-osquery.markdown
@@ -3,6 +3,8 @@ title: RocksDB in osquery
 layout: post
 author: icanadi
 category: lgalanis
+redirect_from:
+  - /blog/1997/rocksdb-in-osquery/
 ---
 Check out [this](https://code.facebook.com/posts/1411870269134471/how-rocksdb-is-used-in-osquery/) blog post by [Mike Arpaia](https://www.facebook.com/mike.arpaia) and [Ted Reed](https://www.facebook.com/treeded) about how osquery leverages RocksDB to build an embedded pub-sub system. This article is a great read and contains insights on how to properly use RocksDB.

docs/_posts/2015-07-15-rocksdb-2015-h2-roadmap.markdown
@@ -3,6 +3,8 @@ title: RocksDB 2015 H2 roadmap
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/2015/rocksdb-2015-h2-roadmap/
 ---
 Every 6 months, RocksDB team gets together to prioritize the work ahead of us. We just went through this exercise and we wanted to share the results with the community. Here's what RocksDB team will be focusing on for the next 6 months:

docs/_posts/2015-07-17-spatial-indexing-in-rocksdb.markdown
@@ -3,6 +3,8 @@ title: Spatial indexing in RocksDB
 layout: post
 author: icanadi
 category: blog
+redirect_from:
+  - /blog/2039/spatial-indexing-in-rocksdb/
 ---
 About a year ago, there was a need to develop a spatial database at Facebook. We needed to store and index Earth's map data. Before building our own, we looked at the existing spatial databases. They were all very good technology, but also general purpose. We could sacrifice a general-purpose API, so we thought we could build a more performant database, since it would be specifically designed for our use-case. Furthermore, we decided to build the spatial database on top of RocksDB, because we have a lot of operational experience with running and tuning RocksDB at a large scale.

docs/_posts/2015-07-22-rocksdb-is-now-available-in-windows-platform.markdown
@@ -3,6 +3,8 @@ title: RocksDB is now available in Windows Platform
 layout: post
 author: dmitrism
 category: blog
+redirect_from:
+  - /blog/2033/rocksdb-is-now-available-in-windows-platform/
 ---
 Over the past 6 months we have seen a number of use cases where RocksDB is successfully used by the community and various companies to achieve high throughput and volume in a modern server environment.

docs/_posts/2015-07-23-dynamic-level.markdown
@@ -3,19 +3,21 @@ title: Dynamic Level Size for Level-Based Compaction
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/2207/dynamic-level/
 ---
 In this article, we follow up on the first part of an answer to one of the questions in our [AMA](https://www.reddit.com/r/IAmA/comments/3de3cv/we_are_rocksdb_engineering_team_ask_us_anything/ct4a8tb), the dynamic level size in level-based compaction.
 Level-based compaction is the original LevelDB compaction style and one of the two major compaction styles in RocksDB (see [our wiki](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics#multi-threaded-compactions)). In RocksDB we introduced parallelism and more configurable options to it, but the main algorithm stayed the same until we recently introduced the dynamic level size mode.
 In level-based compaction, we organize data into sorted runs, called levels. Each level has a target size, and target sizes usually increase by the same size multiplier. For example, you can set the target size of level 1 to 1GB and the size multiplier to 10, so the target sizes of levels 1, 2, 3 and 4 will be 1GB, 10GB, 100GB and 1000GB. Before level 1, there are staging files flushed from memtables, called level 0 files, which are later merged into level 1. Compactions are triggered as soon as the actual size of a level exceeds its target size: we merge a subset of that level's data into the next level to reduce the level's size, and more compactions are triggered until the sizes of all levels are below their target sizes. In a steady state, each level stays at around its target size.
@@ -30,7 +32,7 @@ How do we estimate space amplification of level-based compaction? We focus speci
 Applying the equation, if we have four non-zero levels with sizes 1GB, 10GB, 100GB and 1000GB, the space amplification will be approximately (1000GB + 100GB + 10GB + 1GB) / 1000GB = 1.111, which is a very good number. However, there is a catch: how do we make sure the last level's size is 1000GB, the same as its target? A user has to fine-tune level sizes to achieve this number and will need to re-tune if the DB size changes, so the theoretical 1.11 is hard to achieve in practice. In a worse case, if the target size of the last level is 1000GB but the user data is only 200GB, the actual space amplification will be (200GB + 100GB + 10GB + 1GB) / 200GB = 1.555, a much worse number.
 To solve this problem, my colleague Igor Kabiljo came up with the dynamic level size target mode. You can enable it by setting options.level_compaction_dynamic_level_bytes=true. In this mode, level size targets are changed dynamically based on the size of the last level. Suppose the level size multiplier is 10 and the DB size is 200GB. The target size of the last level is automatically set to its actual size, 200GB; the second-to-last level's target is automatically set to size_last_level / 10 = 20GB, the third-to-last level's to size_last_level / 100 = 2GB, and the next to size_last_level / 1000 = 200MB. We stop here because 200MB is within the range of the first level. In this way, we can achieve the 1.111 space amplification without fine-tuning the level size targets. More details can be found in the [code comments of the option](https://github.com/facebook/rocksdb/blob/v3.11/include/rocksdb/options.h#L366-L423) in the header file.
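(In code, enabling the mode is a single option; a sketch with the multiplier from the example above:)

    #include "rocksdb/options.h"

    rocksdb::Options DynamicLevelOptions() {
      rocksdb::Options options;
      // Derive per-level targets from the actual size of the last level
      // instead of fixed targets, keeping space amplification near
      // 1 + 1/multiplier without hand tuning.
      options.level_compaction_dynamic_level_bytes = true;
      options.max_bytes_for_level_multiplier = 10;
      return options;
    }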

docs/_posts/2015-10-27-getthreadlist.markdown
@@ -3,6 +3,8 @@ title: GetThreadList
 layout: post
 author: yhciang
 category: blog
+redirect_from:
+  - /blog/2261/getthreadlist/
 ---
 We recently added a new API, called `GetThreadList()`, that exposes the RocksDB background thread activity. With this feature, developers can obtain real-time information about currently running compactions and flushes, such as the input / output size, elapsed time, and the number of bytes written. Below is an example output of `GetThreadList`. To better illustrate the example, we have put a sample output of `GetThreadList` into a table where each column represents a thread status:
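(A minimal sketch of calling the API; thread tracking must be enabled in the options when the DB is opened, and `ThreadStatus` is declared in rocksdb/thread_status.h:)

    #include <cinttypes>
    #include <cstdio>
    #include <vector>

    #include "rocksdb/env.h"
    #include "rocksdb/thread_status.h"

    // Assumes the DB was opened with options.enable_thread_tracking = true.
    void DumpThreadStatus() {
      std::vector<rocksdb::ThreadStatus> thread_list;
      rocksdb::Status s = rocksdb::Env::Default()->GetThreadList(&thread_list);
      if (!s.ok()) return;
      for (const auto& ts : thread_list) {
        // Each entry describes one background thread: which DB and column
        // family it works on and its current operation (flush or compaction).
        std::printf("thread %" PRIu64 ": db=%s cf=%s\n",
                    ts.thread_id, ts.db_name.c_str(), ts.cf_name.c_str());
      }
    }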

docs/_posts/2015-11-10-use-checkpoints-for-efficient-snapshots.markdown
@@ -3,6 +3,8 @@ title: Use Checkpoints for Efficient Snapshots
 layout: post
 author: rven2
 category: blog
+redirect_from:
+  - /blog/2609/use-checkpoints-for-efficient-snapshots/
 ---
 **Checkpoint** is a feature in RocksDB which provides the ability to take a snapshot of a running RocksDB database in a separate directory. Checkpoints can be used as a point-in-time snapshot, which can be opened read-only to query rows as of that point in time, or opened read-write as a writable snapshot. Checkpoints can be used for both full and incremental backups.
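(A minimal sketch of taking a checkpoint with the utility class from rocksdb/utilities/checkpoint.h; the snapshot directory is hypothetical:)

    #include "rocksdb/db.h"
    #include "rocksdb/utilities/checkpoint.h"

    rocksdb::Status TakeCheckpoint(rocksdb::DB* db) {
      rocksdb::Checkpoint* checkpoint = nullptr;
      rocksdb::Status s = rocksdb::Checkpoint::Create(db, &checkpoint);
      if (s.ok()) {
        // SST files are hard-linked rather than copied when the checkpoint
        // lives on the same filesystem, which keeps this operation cheap.
        s = checkpoint->CreateCheckpoint("/backups/db_snapshot");
      }
      delete checkpoint;
      return s;
    }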

docs/_posts/2015-11-16-analysis-file-read-latency-by-level.markdown
@@ -3,6 +3,8 @@ title: Analysis File Read Latency by Level
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/2537/analysis-file-read-latency-by-level/
 ---
 In many use cases of RocksDB, people rely on the OS page cache for caching compressed data. With this approach, verifying the effectiveness of OS page caching is challenging, because the file system is a black box to users.

docs/_posts/2016-01-29-compaction_pri.markdown
@@ -3,6 +3,8 @@ title: Option of Compaction Priority
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/2921/compaction_pri/
 ---
 The most popular compaction style of RocksDB is level-based compaction, which is an improved version of LevelDB's compaction algorithm. Pages 9-16 of these [slides](https://github.com/facebook/rocksdb/blob/gh-pages/talks/2015-09-29-HPTS-Siying-RocksDB.pdf) give an illustrated introduction to this compaction style. The basic idea is that data is organized into multiple levels with exponentially increasing target sizes. Except for a special level 0, every level is key-range partitioned into many files. When the size of a level exceeds its target size, we pick one or more of its files and merge them into the next level.
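(The option itself is a single enum field; a minimal sketch, assuming the `CompactionPri` enum from rocksdb/options.h:)

    #include "rocksdb/options.h"

    rocksdb::Options CompactionPriOptions() {
      rocksdb::Options options;
      // Pick which file within a level is compacted first; alternatives
      // include kByCompensatedSize (the long-standing default) and
      // kOldestLargestSeqFirst.
      options.compaction_pri = rocksdb::kOldestSmallestSeqFirst;
      return options;
    }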

docs/_posts/2016-02-24-rocksdb-4-2-release.markdown
@@ -3,6 +3,8 @@ title: RocksDB 4.2 Release!
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/3017/rocksdb-4-2-release/
 ---
 New RocksDB release - 4.2!

docs/_posts/2016-02-25-rocksdb-ama.markdown
@@ -3,6 +3,8 @@ title: RocksDB AMA
 layout: post
 author: yhchiang
 category: blog
+redirect_from:
+  - /blog/3065/rocksdb-ama/
 ---
 RocksDB developers are doing a Reddit Ask-Me-Anything now at 10AM – 11AM PDT! We welcome you to stop by and ask any RocksDB related questions, including existing / upcoming features, tuning tips, or database design.

docs/_posts/2016-03-07-rocksdb-options-file.markdown
@@ -3,6 +3,8 @@ title: RocksDB Options File
 layout: post
 author: yhciang
 category: blog
+redirect_from:
+  - /blog/3089/rocksdb-options-file/
 ---
 In RocksDB 4.3, we added a new set of features that makes managing RocksDB options easier. Specifically:
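(One piece of that feature set is loading the options RocksDB persists alongside the DB; a minimal sketch, assuming the `LoadLatestOptions()` helper from rocksdb/utilities/options_util.h and a hypothetical DB path:)

    #include <vector>

    #include "rocksdb/utilities/options_util.h"

    rocksdb::Status LoadPersistedOptions() {
      rocksdb::DBOptions db_opts;
      std::vector<rocksdb::ColumnFamilyDescriptor> cf_descs;
      // Reads the most recently written OPTIONS file next to the DB.
      return rocksdb::LoadLatestOptions("/path/to/db", rocksdb::Env::Default(),
                                        &db_opts, &cf_descs);
    }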

docs/_posts/2016-04-26-rocksdb-4-5-1-released.markdown
@@ -3,6 +3,8 @@ title: RocksDB 4.5.1 Released!
 layout: post
 author: sdong
 category: blog
+redirect_from:
+  - /blog/3179/rocksdb-4-5-1-released/
 ---
 ## 4.5.1 (3/25/2016)

docs/_posts/2016-07-26-rocksdb-4-8-released.markdown
@@ -3,6 +3,8 @@ title: RocksDB 4.8 Released!
 layout: post
 author: yiwu
 category: blog
+redirect_from:
+  - /blog/3239/rocksdb-4-8-released/
 ---
 ## 4.8.0 (5/2/2016)
