From 48adce77cc251031ca470212a08570477cd5f00b Mon Sep 17 00:00:00 2001 From: fyrz Date: Mon, 17 Nov 2014 23:29:52 +0100 Subject: [PATCH 1/3] [RocksJava] CompactRange support - manual range compaction support in RocksJava --- java/org/rocksdb/RocksDB.java | 291 +++++++++++++++++++++++++ java/org/rocksdb/test/RocksDBTest.java | 244 +++++++++++++++++++++ java/rocksjni/rocksjni.cc | 114 ++++++++++ 3 files changed, 649 insertions(+) diff --git a/java/org/rocksdb/RocksDB.java b/java/org/rocksdb/RocksDB.java index 3d420adea..021ed80b0 100644 --- a/java/org/rocksdb/RocksDB.java +++ b/java/org/rocksdb/RocksDB.java @@ -1251,6 +1251,287 @@ public class RocksDB extends RocksObject { columnFamilyHandle.nativeHandle_); } + /** + *

Full compaction of the underlying storage using key + * range mode.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + *

+ * + *

See also

+ * + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange() throws RocksDBException { + compactRange0(nativeHandle_, false, -1, 0); + } + + /** + *
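A minimal usage sketch of this parameterless overload, modeled on the fullCompactRange test added below; the database path and key/value contents are illustrative only:

    // sketch: bulk-load a fresh database, then compact the whole key space
    Options options = new Options().setCreateIfMissing(true);
    RocksDB db = RocksDB.open(options, "/tmp/compact-range-example");  // illustrative path
    try {
      byte[] value = "some value".getBytes();
      for (int i = 0; i < 200; i++) {
        db.put(String.valueOf(i).getBytes(), value);
      }
      db.compactRange();  // full compaction of the default column family
    } finally {
      db.close();
      options.dispose();
    }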

Compaction of the underlying storage using key + * range {@code [begin, end]}.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + *

+ * + *

See also

+ * + * + * @param begin start of key range (included in range) + * @param end end of key range (excluded from range) + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(byte[] begin, byte[] end) + throws RocksDBException { + compactRange0(nativeHandle_, begin, begin.length, end, + end.length, false, -1, 0); + } + + /** + *
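A short sketch of compacting only part of the key space, along the lines of the compactRangeWithKeys test below; begin and end are raw byte arrays compared with the column family's comparator (byte-wise by default), so the string keys here are purely illustrative:

    // sketch: assumes db was opened and filled as in the previous example
    db.compactRange("0".getBytes(), "201".getBytes());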

Full compaction of the underlying storage using key + * range mode.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + * In this case, the client can set reduce_level to true, to move + * the files back to the minimum level capable of holding the data + * set, or to a given level (specified by a non-negative target_level). + *

+ *

Compaction outputs should be placed in options.db_paths + * [target_path_id]. Behavior is undefined if target_path_id is + * out of range.

+ * + *

See also

+ * + * + * @param reduce_level reduce level after compaction + * @param target_level target level to compact to + * @param target_path_id the target path id of output path + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(boolean reduce_level, int target_level, + int target_path_id) throws RocksDBException { + compactRange0(nativeHandle_, reduce_level, + target_level, target_path_id); + } + + + /** + *
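A sketch of the reduce_level form, mirroring what the compactRangeToLevel test below does: with reduce_level set to true and target_level 0 the output is moved back to level 0 (a negative target_level instead moves files to the minimum level capable of holding the data), and target_path_id 0 selects the first entry of options.db_paths:

    // sketch: assumes automatic compactions are disabled so the layout stays deterministic
    db.compactRange(true, 0, 0);
    // the test below then expects e.g. db.getProperty("rocksdb.num-files-at-level0") to be "1"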

Compaction of the underlying storage using key + * range {@code [begin, end]}.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + * In this case, the client can set reduce_level to true, to move + * the files back to the minimum level capable of holding the data + * set, or to a given level (specified by a non-negative target_level). + *

+ *

Compaction outputs should be placed in options.db_paths + * [target_path_id]. Behavior is undefined if target_path_id is + * out of range.

+ * + *

See also

+ * + * + * @param begin start of key range (included in range) + * @param end end of key range (excluded from range) + * @param reduce_level reduce level after compaction + * @param target_level target level to compact to + * @param target_path_id the target path id of output path + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(byte[] begin, byte[] end, + boolean reduce_level, int target_level, int target_path_id) + throws RocksDBException { + compactRange0(nativeHandle_, begin, begin.length, end, end.length, + reduce_level, target_level, target_path_id); + } + + /** + *

Full compaction of the underlying storage of a column family + * using key range mode.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files.

+ * + *

See also

+ * + * + * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} + * instance. + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(ColumnFamilyHandle columnFamilyHandle) + throws RocksDBException { + compactRange(nativeHandle_, false, -1, 0, + columnFamilyHandle.nativeHandle_); + } + + /** + *
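A minimal sketch of compacting a single column family, following the fullCompactRangeColumnFamily test below; the column family name and path are illustrative, and the usual java.util and org.rocksdb imports are assumed:

    // sketch: open a database with an extra column family and compact only that family
    List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("new_cf", new ColumnFamilyOptions()));
    List<ColumnFamilyHandle> handles = new ArrayList<>();
    DBOptions dbOptions = new DBOptions()
        .setCreateIfMissing(true)
        .setCreateMissingColumnFamilies(true);
    RocksDB db = RocksDB.open(dbOptions, "/tmp/cf-compact-example", descriptors, handles);
    try {
      db.compactRange(handles.get(1));  // compacts only "new_cf"
    } finally {
      for (ColumnFamilyHandle handle : handles) {
        handle.dispose();
      }
      db.close();
      dbOptions.dispose();
    }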

Compaction of the underlying storage of a column family + * using key range {@code [begin, end]}.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files.

+ * + *

See also

+ * + * + * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} + * instance. + * @param begin start of key range (included in range) + * @param end end of key range (excluded from range) + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(ColumnFamilyHandle columnFamilyHandle, + byte[] begin, byte[] end) throws RocksDBException { + compactRange(nativeHandle_, begin, begin.length, end, end.length, + false, -1, 0, columnFamilyHandle.nativeHandle_); + } + + /** + *

Full compaction of the underlying storage of a column family + * using key range mode.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + * In this case, the client can set reduce_level to true, to move + * the files back to the minimum level capable of holding the data + * set, or to a given level (specified by a non-negative target_level). + *

+ *

Compaction outputs should be placed in options.db_paths + * [target_path_id]. Behavior is undefined if target_path_id is + * out of range.

+ * + *

See also

+ * + * + * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} + * instance. + * @param reduce_level reduce level after compaction + * @param target_level target level to compact to + * @param target_path_id the target path id of output path + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(ColumnFamilyHandle columnFamilyHandle, + boolean reduce_level, int target_level, int target_path_id) + throws RocksDBException { + compactRange(nativeHandle_, reduce_level, target_level, + target_path_id, columnFamilyHandle.nativeHandle_); + } + + /** + *
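And the column family counterpart of the reduce_level form, as exercised by the compactRangeToLevelColumnFamily test below:

    // sketch: assumes handles was populated by RocksDB.open(...) as in the earlier sketch
    db.compactRange(handles.get(1), true, 0, 0);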

Compaction of the underlying storage of a column family + * using key range {@code [begin, end]}.

+ *

Note: After the entire database is compacted, + * all data are pushed down to the last level containing any data. + * If the total data size after compaction is reduced, that level + * might not be appropriate for hosting all the files. + * In this case, the client can set reduce_level to true, to move + * the files back to the minimum level capable of holding the data + * set, or to a given level (specified by a non-negative target_level). + *

+ *

Compaction outputs should be placed in options.db_paths + * [target_path_id]. Behavior is undefined if target_path_id is + * out of range.

+ * + *

See also

+ * + * + * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} + * instance. + * @param begin start of key range (included in range) + * @param end end of key range (excluded from range) + * @param reduce_level reduce level after compaction + * @param target_level target level to compact to + * @param target_path_id the target path id of output path + * + * @throws RocksDBException thrown if an error occurs within the native + * part of the library. + */ + public void compactRange(ColumnFamilyHandle columnFamilyHandle, + byte[] begin, byte[] end, boolean reduce_level, int target_level, + int target_path_id) throws RocksDBException { + compactRange(nativeHandle_, begin, begin.length, end, end.length, + reduce_level, target_level, target_path_id, + columnFamilyHandle.nativeHandle_); + } + /** * Private constructor. */ @@ -1376,6 +1657,16 @@ public class RocksDB extends RocksObject { throws RocksDBException; private native void flush(long handle, long flushOptHandle, long cfHandle) throws RocksDBException; + private native void compactRange0(long handle, boolean reduce_level, int target_level, + int target_path_id) throws RocksDBException; + private native void compactRange0(long handle, byte[] begin, int beginLen, byte[] end, + int endLen, boolean reduce_level, int target_level, int target_path_id) + throws RocksDBException; + private native void compactRange(long handle, boolean reduce_level, int target_level, + int target_path_id, long cfHandle) throws RocksDBException; + private native void compactRange(long handle, byte[] begin, int beginLen, byte[] end, + int endLen, boolean reduce_level, int target_level, int target_path_id, + long cfHandle) throws RocksDBException; protected DBOptionsInterface options_; } diff --git a/java/org/rocksdb/test/RocksDBTest.java b/java/org/rocksdb/test/RocksDBTest.java index 5a8613aa1..c5e96c6aa 100644 --- a/java/org/rocksdb/test/RocksDBTest.java +++ b/java/org/rocksdb/test/RocksDBTest.java @@ -13,6 +13,7 @@ import org.rocksdb.*; import java.util.ArrayList; import java.util.List; import java.util.Map; +import java.util.Random; import static org.assertj.core.api.Assertions.assertThat; @@ -25,6 +26,9 @@ public class RocksDBTest { @Rule public TemporaryFolder dbFolder = new TemporaryFolder(); + public static final Random rand = PlatformRandomHelper. + getPlatformSpecificRandomFactory(); + @Test public void open() throws RocksDBException { RocksDB db = null; @@ -312,4 +316,244 @@ public class RocksDBTest { } } } + + @Test + public void fullCompactRange() throws RocksDBException { + RocksDB db = null; + Options opt = null; + try { + opt = new Options(). + setCreateIfMissing(true). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). 
+ setDisableAutoCompactions(false); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath()); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put((String.valueOf(i)).getBytes(), b); + } + db.compactRange(); + } finally { + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } + + @Test + public void fullCompactRangeColumnFamily() throws RocksDBException { + RocksDB db = null; + DBOptions opt = null; + List columnFamilyHandles = + new ArrayList<>(); + try { + opt = new DBOptions(). + setCreateIfMissing(true). + setCreateMissingColumnFamilies(true); + List columnFamilyDescriptors = + new ArrayList<>(); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + RocksDB.DEFAULT_COLUMN_FAMILY)); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + "new_cf", + new ColumnFamilyOptions(). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false))); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath(), + columnFamilyDescriptors, + columnFamilyHandles); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put(columnFamilyHandles.get(1), + String.valueOf(i).getBytes(), b); + } + db.compactRange(columnFamilyHandles.get(1)); + } finally { + for (ColumnFamilyHandle handle : columnFamilyHandles) { + handle.dispose(); + } + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } + + @Test + public void compactRangeWithKeys() { + + } + + @Test + public void compactRangeWithKeysColumnFamily() { + + } + + @Test + public void compactRangeToLevel() throws RocksDBException, InterruptedException { + RocksDB db = null; + Options opt = null; + try { + opt = new Options(). + setCreateIfMissing(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath()); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put((String.valueOf(i)).getBytes(), b); + } + db.flush(new FlushOptions().setWaitForFlush(true)); + db.close(); + opt.setTargetFileSizeBase(Long.MAX_VALUE). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(Long.MAX_VALUE). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(true); + + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath()); + + db.compactRange(true, 0, 0); + for (int i = 0; i < 4; i++) { + if (i == 0) { + assertThat(db.getProperty("rocksdb.num-files-at-level" + i)). + isEqualTo("1"); + } else { + assertThat(db.getProperty("rocksdb.num-files-at-level" + i)). 
+ isEqualTo("0"); + } + } + } finally { + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } + + @Test + public void compactRangeToLevelColumnFamily() throws RocksDBException { + RocksDB db = null; + DBOptions opt = null; + List columnFamilyHandles = + new ArrayList<>(); + try { + opt = new DBOptions(). + setCreateIfMissing(true). + setCreateMissingColumnFamilies(true); + List columnFamilyDescriptors = + new ArrayList<>(); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + RocksDB.DEFAULT_COLUMN_FAMILY)); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + "new_cf", + new ColumnFamilyOptions(). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false))); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath(), + columnFamilyDescriptors, + columnFamilyHandles); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put(columnFamilyHandles.get(1), + String.valueOf(i).getBytes(), b); + } + db.flush(new FlushOptions().setWaitForFlush(true), + columnFamilyHandles.get(1)); + // free column families + for (ColumnFamilyHandle handle : columnFamilyHandles) { + handle.dispose(); + } + // clear column family handles for reopen + columnFamilyHandles.clear(); + db.close(); + columnFamilyDescriptors.get(1). + columnFamilyOptions(). + setTargetFileSizeBase(Long.MAX_VALUE). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(Long.MAX_VALUE). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(true); + // reopen database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath(), + columnFamilyDescriptors, + columnFamilyHandles); + // compact new column family + db.compactRange(columnFamilyHandles.get(1), true, 0, 0); + // check if new column family is compacted to level zero + for (int i = 0; i < 4; i++) { + if (i == 0) { + assertThat(db.getProperty(columnFamilyHandles.get(1), + "rocksdb.num-files-at-level" + i)). + isEqualTo("1"); + } else { + assertThat(db.getProperty(columnFamilyHandles.get(1), + "rocksdb.num-files-at-level" + i)). 
+ isEqualTo("0"); + } + } + } finally { + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } } diff --git a/java/rocksjni/rocksjni.cc b/java/rocksjni/rocksjni.cc index 5af3c6b68..efcaf95ae 100644 --- a/java/rocksjni/rocksjni.cc +++ b/java/rocksjni/rocksjni.cc @@ -1379,3 +1379,117 @@ void Java_org_rocksdb_RocksDB_flush__JJJ( auto cf_handle = reinterpret_cast(jcf_handle); rocksdb_flush_helper(env, db, *flush_options, cf_handle); } + +////////////////////////////////////////////////////////////////////////////// +// rocksdb::DB::CompactRange - Full + +void rocksdb_compactrange_helper(JNIEnv* env, rocksdb::DB* db, + rocksdb::ColumnFamilyHandle* cf_handle, jboolean jreduce_level, + jint jtarget_level, jint jtarget_path_id) { + + rocksdb::Status s; + if (cf_handle != nullptr) { + s = db->CompactRange(cf_handle, nullptr, nullptr, jreduce_level, + jtarget_level, static_cast(jtarget_path_id)); + } else { + // backwards compatibility + s = db->CompactRange(nullptr, nullptr, jreduce_level, + jtarget_level, static_cast(jtarget_path_id)); + } + + if (s.ok()) { + return; + } + rocksdb::RocksDBExceptionJni::ThrowNew(env, s); +} + +/* + * Class: org_rocksdb_RocksDB + * Method: compactRange0 + * Signature: (JZII)V + */ +void Java_org_rocksdb_RocksDB_compactRange0__JZII(JNIEnv* env, + jobject jdb, jlong jdb_handle, jboolean jreduce_level, + jint jtarget_level, jint jtarget_path_id) { + auto db = reinterpret_cast(jdb_handle); + rocksdb_compactrange_helper(env, db, nullptr, jreduce_level, + jtarget_level, jtarget_path_id); +} + +/* + * Class: org_rocksdb_RocksDB + * Method: compactRange + * Signature: (JZIIJ)V + */ +void Java_org_rocksdb_RocksDB_compactRange__JZIIJ( + JNIEnv* env, jobject jdb, jlong jdb_handle, + jboolean jreduce_level, jint jtarget_level, + jint jtarget_path_id, jlong jcf_handle) { + auto db = reinterpret_cast(jdb_handle); + auto cf_handle = reinterpret_cast(jcf_handle); + rocksdb_compactrange_helper(env, db, cf_handle, jreduce_level, + jtarget_level, jtarget_path_id); +} + +////////////////////////////////////////////////////////////////////////////// +// rocksdb::DB::CompactRange - Range + +void rocksdb_compactrange_helper(JNIEnv* env, rocksdb::DB* db, + rocksdb::ColumnFamilyHandle* cf_handle, jbyteArray jbegin, jint jbegin_len, + jbyteArray jend, jint jend_len, jboolean jreduce_level, jint jtarget_level, + jint jtarget_path_id) { + + jbyte* begin = env->GetByteArrayElements(jbegin, 0); + jbyte* end = env->GetByteArrayElements(jend, 0); + const rocksdb::Slice begin_slice(reinterpret_cast(begin), jbegin_len); + const rocksdb::Slice end_slice(reinterpret_cast(end), jend_len); + + rocksdb::Status s; + if (cf_handle != nullptr) { + s = db->CompactRange(cf_handle, &begin_slice, &end_slice, jreduce_level, + jtarget_level, static_cast(jtarget_path_id)); + } else { + // backwards compatibility + s = db->CompactRange(&begin_slice, &end_slice, jreduce_level, + jtarget_level, static_cast(jtarget_path_id)); + } + + env->ReleaseByteArrayElements(jbegin, begin, JNI_ABORT); + env->ReleaseByteArrayElements(jend, end, JNI_ABORT); + + if (s.ok()) { + return; + } + rocksdb::RocksDBExceptionJni::ThrowNew(env, s); +} + +/* + * Class: org_rocksdb_RocksDB + * Method: compactRange0 + * Signature: (J[BI[BIZII)V + */ +void Java_org_rocksdb_RocksDB_compactRange0__J_3BI_3BIZII(JNIEnv* env, + jobject jdb, jlong jdb_handle, jbyteArray jbegin, jint jbegin_len, + jbyteArray jend, jint jend_len, jboolean jreduce_level, + jint jtarget_level, jint jtarget_path_id) { + auto db = 
reinterpret_cast(jdb_handle); + rocksdb_compactrange_helper(env, db, nullptr, jbegin, jbegin_len, + jend, jend_len, jreduce_level, jtarget_level, jtarget_path_id); +} + +/* + * Class: org_rocksdb_RocksDB + * Method: compactRange + * Signature: (JJ[BI[BIZII)V + */ +void Java_org_rocksdb_RocksDB_compactRange__J_3BI_3BIZIIJ( + JNIEnv* env, jobject jdb, jlong jdb_handle, jbyteArray jbegin, + jint jbegin_len, jbyteArray jend, jint jend_len, + jboolean jreduce_level, jint jtarget_level, + jint jtarget_path_id, jlong jcf_handle) { + auto db = reinterpret_cast(jdb_handle); + auto cf_handle = reinterpret_cast(jcf_handle); + rocksdb_compactrange_helper(env, db, cf_handle, jbegin, jbegin_len, + jend, jend_len, jreduce_level, jtarget_level, jtarget_path_id); +} + From 69188ff449826505bff95d3f4f6dd0e89cfdc1a7 Mon Sep 17 00:00:00 2001 From: fyrz Date: Thu, 20 Nov 2014 23:55:15 +0100 Subject: [PATCH 2/3] [RocksJava] CompactRange support Summary: Manual range compaction support in RocksJava. Test Plan: make rocksdbjava make jtest mvn -f rocksjni.pom package Reviewers: adamretter, yhchiang, ankgup87 Subscribers: dhruba Differential Revision: https://reviews.facebook.net/D29283 --- java/org/rocksdb/test/RocksDBTest.java | 200 +++++++++++++++++++++++-- java/rocksjni/rocksjni.cc | 1 - 2 files changed, 191 insertions(+), 10 deletions(-) diff --git a/java/org/rocksdb/test/RocksDBTest.java b/java/org/rocksdb/test/RocksDBTest.java index c5e96c6aa..df0c04787 100644 --- a/java/org/rocksdb/test/RocksDBTest.java +++ b/java/org/rocksdb/test/RocksDBTest.java @@ -355,7 +355,8 @@ public class RocksDBTest { } @Test - public void fullCompactRangeColumnFamily() throws RocksDBException { + public void fullCompactRangeColumnFamily() + throws RocksDBException { RocksDB db = null; DBOptions opt = null; List columnFamilyHandles = @@ -374,7 +375,7 @@ public class RocksDBTest { setDisableAutoCompactions(true). setCompactionStyle(CompactionStyle.LEVEL). setNumLevels(4). - setWriteBufferSize(100<<10). + setWriteBufferSize(100 << 10). setLevelZeroFileNumCompactionTrigger(3). setTargetFileSizeBase(200 << 10). setTargetFileSizeMultiplier(1). @@ -408,17 +409,195 @@ public class RocksDBTest { } @Test - public void compactRangeWithKeys() { + public void compactRangeWithKeys() + throws RocksDBException { + RocksDB db = null; + Options opt = null; + try { + opt = new Options(). + setCreateIfMissing(true). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath()); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put((String.valueOf(i)).getBytes(), b); + } + db.compactRange("0".getBytes(), "201".getBytes()); + } finally { + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } + @Test + public void compactRangeWithKeysReduce() + throws RocksDBException { + RocksDB db = null; + Options opt = null; + try { + opt = new Options(). + setCreateIfMissing(true). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). 
+ setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath()); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put((String.valueOf(i)).getBytes(), b); + } + db.compactRange("0".getBytes(), "201".getBytes(), + true, 0, 0); + } finally { + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } } @Test - public void compactRangeWithKeysColumnFamily() { + public void compactRangeWithKeysColumnFamily() + throws RocksDBException { + RocksDB db = null; + DBOptions opt = null; + List columnFamilyHandles = + new ArrayList<>(); + try { + opt = new DBOptions(). + setCreateIfMissing(true). + setCreateMissingColumnFamilies(true); + List columnFamilyDescriptors = + new ArrayList<>(); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + RocksDB.DEFAULT_COLUMN_FAMILY)); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + "new_cf", + new ColumnFamilyOptions(). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). + setDisableAutoCompactions(false))); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath(), + columnFamilyDescriptors, + columnFamilyHandles); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put(columnFamilyHandles.get(1), + String.valueOf(i).getBytes(), b); + } + db.compactRange(columnFamilyHandles.get(1), + "0".getBytes(), "201".getBytes()); + } finally { + for (ColumnFamilyHandle handle : columnFamilyHandles) { + handle.dispose(); + } + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } + } + @Test + public void compactRangeWithKeysReduceColumnFamily() + throws RocksDBException { + RocksDB db = null; + DBOptions opt = null; + List columnFamilyHandles = + new ArrayList<>(); + try { + opt = new DBOptions(). + setCreateIfMissing(true). + setCreateMissingColumnFamilies(true); + List columnFamilyDescriptors = + new ArrayList<>(); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + RocksDB.DEFAULT_COLUMN_FAMILY)); + columnFamilyDescriptors.add(new ColumnFamilyDescriptor( + "new_cf", + new ColumnFamilyOptions(). + setDisableAutoCompactions(true). + setCompactionStyle(CompactionStyle.LEVEL). + setNumLevels(4). + setWriteBufferSize(100<<10). + setLevelZeroFileNumCompactionTrigger(3). + setTargetFileSizeBase(200 << 10). + setTargetFileSizeMultiplier(1). + setMaxBytesForLevelBase(500 << 10). + setMaxBytesForLevelMultiplier(1). 
+ setDisableAutoCompactions(false))); + // open database + db = RocksDB.open(opt, + dbFolder.getRoot().getAbsolutePath(), + columnFamilyDescriptors, + columnFamilyHandles); + // fill database with key/value pairs + byte[] b = new byte[10000]; + for (int i = 0; i < 200; i++) { + rand.nextBytes(b); + db.put(columnFamilyHandles.get(1), + String.valueOf(i).getBytes(), b); + } + db.compactRange(columnFamilyHandles.get(1), "0".getBytes(), + "201".getBytes(), true, 0, 0); + } finally { + for (ColumnFamilyHandle handle : columnFamilyHandles) { + handle.dispose(); + } + if (db != null) { + db.close(); + } + if (opt != null) { + opt.dispose(); + } + } } @Test - public void compactRangeToLevel() throws RocksDBException, InterruptedException { + public void compactRangeToLevel() + throws RocksDBException, InterruptedException { RocksDB db = null; Options opt = null; try { @@ -456,10 +635,12 @@ public class RocksDBTest { db.compactRange(true, 0, 0); for (int i = 0; i < 4; i++) { if (i == 0) { - assertThat(db.getProperty("rocksdb.num-files-at-level" + i)). + assertThat( + db.getProperty("rocksdb.num-files-at-level" + i)). isEqualTo("1"); } else { - assertThat(db.getProperty("rocksdb.num-files-at-level" + i)). + assertThat( + db.getProperty("rocksdb.num-files-at-level" + i)). isEqualTo("0"); } } @@ -474,7 +655,8 @@ public class RocksDBTest { } @Test - public void compactRangeToLevelColumnFamily() throws RocksDBException { + public void compactRangeToLevelColumnFamily() + throws RocksDBException { RocksDB db = null; DBOptions opt = null; List columnFamilyHandles = @@ -493,7 +675,7 @@ public class RocksDBTest { setDisableAutoCompactions(true). setCompactionStyle(CompactionStyle.LEVEL). setNumLevels(4). - setWriteBufferSize(100<<10). + setWriteBufferSize(100 << 10). setLevelZeroFileNumCompactionTrigger(3). setTargetFileSizeBase(200 << 10). setTargetFileSizeMultiplier(1). diff --git a/java/rocksjni/rocksjni.cc b/java/rocksjni/rocksjni.cc index efcaf95ae..57a20e487 100644 --- a/java/rocksjni/rocksjni.cc +++ b/java/rocksjni/rocksjni.cc @@ -1492,4 +1492,3 @@ void Java_org_rocksdb_RocksDB_compactRange__J_3BI_3BIZIIJ( rocksdb_compactrange_helper(env, db, cf_handle, jbegin, jbegin_len, jend, jend_len, jreduce_level, jtarget_level, jtarget_path_id); } - From efc94ceb27d5e12b25734679730302ff6cfc55be Mon Sep 17 00:00:00 2001 From: fyrz Date: Sun, 7 Dec 2014 22:19:46 +0100 Subject: [PATCH 3/3] [RocksJava] Incorporated changes for D29283 --- java/org/rocksdb/RocksDB.java | 102 ++++++++++++---------------------- 1 file changed, 36 insertions(+), 66 deletions(-) diff --git a/java/org/rocksdb/RocksDB.java b/java/org/rocksdb/RocksDB.java index 021ed80b0..04a93eacd 100644 --- a/java/org/rocksdb/RocksDB.java +++ b/java/org/rocksdb/RocksDB.java @@ -1252,13 +1252,10 @@ public class RocksDB extends RocksObject { } /** - *

Full compaction of the underlying storage using key - * range mode.

- *

Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - *

+ *

Range compaction of the database.

+ *

Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

* *

See also

*
    @@ -1275,13 +1272,10 @@ public class RocksDB extends RocksObject { } /** - *

    Compaction of the underlying storage using key - * using key range {@code [begin, end]}.

    - *

    Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - *

    + *

    Range compaction of the database.

    + *

    Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

    * *

    See also

    *
      @@ -1303,16 +1297,11 @@ public class RocksDB extends RocksObject { } /** - *

      Full compaction of the underlying storage using key - * range mode.

      - *

      Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - * In this case, client could set reduce_level to true, to move - * the files back to the minimum level capable of holding the data - * set or a given level (specified by non-negative target_level). - *

      + *

      Range compaction of the database.

      + *

      Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

      + * *

      Compaction outputs should be placed in options.db_paths * [target_path_id]. Behavior is undefined if target_path_id is * out of range.

      @@ -1339,16 +1328,11 @@ public class RocksDB extends RocksObject { /** - *

      Compaction of the underlying storage using key - * using key range {@code [begin, end]}.

      - *

      Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - * In this case, client could set reduce_level to true, to move - * the files back to the minimum level capable of holding the data - * set or a given level (specified by non-negative target_level). - *

      + *

      Range compaction of the database.

      + *

      Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

      + * *

      Compaction outputs should be placed in options.db_paths * [target_path_id]. Behavior is undefined if target_path_id is * out of range.

      @@ -1377,12 +1361,10 @@ public class RocksDB extends RocksObject { } /** - *

      Full compaction of the underlying storage of a column family - * using key range mode.

      - *

      Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files.

      + *

       Range compaction of the column family.

      + *

      Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

      * *

      See also

      *
        @@ -1411,12 +1393,10 @@ public class RocksDB extends RocksObject { } /** - *

        Compaction of the underlying storage of a column family - * using key range {@code [begin, end]}.

        - *

        Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files.

        + *

        Range compaction of the column family.

        + *

        Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

        * *

        See also

        *
          @@ -1445,16 +1425,11 @@ public class RocksDB extends RocksObject { } /** - *

          Full compaction of the underlying storage of a column family - * using key range mode.

          - *

          Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - * In this case, client could set reduce_level to true, to move - * the files back to the minimum level capable of holding the data - * set or a given level (specified by non-negative target_level). - *

          + *

          Range compaction of the column family.

          + *

          Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

          + * *

          Compaction outputs should be placed in options.db_paths * [target_path_id]. Behavior is undefined if target_path_id is * out of range.

          @@ -1488,16 +1463,11 @@ public class RocksDB extends RocksObject { } /** - *

          Compaction of the underlying storage of a column family - * using key range {@code [begin, end]}.

          - *

          Note: After the entire database is compacted, - * all data are pushed down to the last level containing any data. - * If the total data size after compaction is reduced, that level - * might not be appropriate for hosting all the files. - * In this case, client could set reduce_level to true, to move - * the files back to the minimum level capable of holding the data - * set or a given level (specified by non-negative target_level). - *

          + *

          Range compaction of the column family.

          + *

          Note: After the database has been compacted, + * all data will have been pushed down to the last level containing + * any data.

          + * *

          Compaction outputs should be placed in options.db_paths * [target_path_id]. Behavior is undefined if target_path_id is * out of range.