New-style blob option bindings, Java option getter and improve/fix option parsing (#8999)

Summary:
Implementation of https://github.com/facebook/rocksdb/issues/8221, including an extension of the Java options API to allow the get() of options from RocksDB. The extension allows more comprehensive testing of options on the Java side, by validating that the options are actually set on the C++ side.

The new method variants:
MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder getOptions()
MutableDBOptions.MutableDBOptionsBuilder getDBOptions()

retrieve the options via the RocksDB C++ interfaces and parse the resulting string into one of the Java-style option objects.

This necessitated generalising the parsing of option strings in Java, which now handles the full range of option strings returned by the C++ interface, rather than just a useful subset. As a consequence, the list separator changes from , (comma) to : (colon).
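For illustration, the option strings exchanged with the C++ side are semicolon-separated key=value pairs, with list-valued options now using colons between elements. A minimal, hypothetical sketch of splitting such a string (the option names and the `split` helper here are illustrative, not part of the RocksDB API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OptionStringSplit {
  // Splits an option string of the shape described above into key -> elements.
  public static Map<String, String[]> split(final String opts) {
    final Map<String, String[]> result = new LinkedHashMap<>();
    for (final String pair : opts.split(";")) {
      final String[] kv = pair.split("=", 2);
      // List-valued options use ':' (previously ',') between elements.
      result.put(kv[0], kv[1].split(":"));
    }
    return result;
  }

  public static void main(String[] args) {
    final Map<String, String[]> parsed =
        split("enable_blob_files=true;max_bytes_for_level_multiplier_additional=2:3:5");
    System.out.println(parsed.get("max_bytes_for_level_multiplier_additional").length); // 3
  }
}
```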

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8999

Reviewed By: jay-zhuang

Differential Revision: D31655487

Pulled By: ltamasi

fbshipit-source-id: c38e98145c81c61dc38238b0df580db176ce4efd
Branch: main
Authored by Alan Paxton 3 years ago; committed by Facebook GitHub Bot
parent ad5325a736
commit 8d615a2b1d
Files changed (lines changed in parentheses):
  1. HISTORY.md (1)
  2. java/CMakeLists.txt (1)
  3. java/Makefile (3)
  4. java/rocksjni/options.cc (295)
  5. java/rocksjni/rocksjni.cc (52)
  6. java/src/main/java/org/rocksdb/AbstractMutableOptions.java (148)
  7. java/src/main/java/org/rocksdb/AdvancedMutableColumnFamilyOptionsInterface.java (181)
  8. java/src/main/java/org/rocksdb/ColumnFamilyOptions.java (259)
  9. java/src/main/java/org/rocksdb/CompressionType.java (44)
  10. java/src/main/java/org/rocksdb/MutableColumnFamilyOptions.java (129)
  11. java/src/main/java/org/rocksdb/MutableDBOptions.java (34)
  12. java/src/main/java/org/rocksdb/MutableOptionValue.java (2)
  13. java/src/main/java/org/rocksdb/OptionString.java (256)
  14. java/src/main/java/org/rocksdb/Options.java (97)
  15. java/src/main/java/org/rocksdb/RocksDB.java (46)
  16. java/src/test/java/org/rocksdb/BlobOptionsTest.java (301)
  17. java/src/test/java/org/rocksdb/MutableColumnFamilyOptionsTest.java (93)
  18. java/src/test/java/org/rocksdb/MutableOptionsGetSetTest.java (385)
  19. java/understanding_options.md (79)

@@ -11,6 +11,7 @@
* Make `DB::close()` thread-safe.
### New Features
* Add Java API bindings for new integrated BlobDB options
* Print information about blob files when using "ldb list_live_files_metadata"
* Provided support for SingleDelete with user defined timestamp.
* Experimental new function DB::GetLiveFilesStorageInfo offers essentially a unified version of other functions like GetLiveFiles, GetLiveFilesChecksumInfo, and GetSortedWalFiles. Checkpoints and backups could show small behavioral changes and/or improved performance as they now use this new API.

@@ -190,6 +190,7 @@ set(JAVA_MAIN_CLASSES
src/main/java/org/rocksdb/OptimisticTransactionDB.java
src/main/java/org/rocksdb/OptimisticTransactionOptions.java
src/main/java/org/rocksdb/Options.java
src/main/java/org/rocksdb/OptionString.java
src/main/java/org/rocksdb/OptionsUtil.java
src/main/java/org/rocksdb/PersistentCache.java
src/main/java/org/rocksdb/PlainTableConfig.java

@@ -50,6 +50,7 @@ NATIVE_JAVA_CLASSES = \
org.rocksdb.OptimisticTransactionDB\
org.rocksdb.OptimisticTransactionOptions\
org.rocksdb.Options\
org.rocksdb.OptionString\
org.rocksdb.OptionsUtil\
org.rocksdb.PersistentCache\
org.rocksdb.PlainTableConfig\
@@ -108,6 +109,7 @@ SHA256_CMD ?= sha256sum
JAVA_TESTS = \
org.rocksdb.BackupableDBOptionsTest\
org.rocksdb.BackupEngineTest\
org.rocksdb.BlobOptionsTest\
org.rocksdb.BlockBasedTableConfigTest\
org.rocksdb.BuiltinComparatorTest\
org.rocksdb.util.BytewiseComparatorTest\
@@ -151,6 +153,7 @@ JAVA_TESTS = \
org.rocksdb.MixedOptionsTest\
org.rocksdb.MutableColumnFamilyOptionsTest\
org.rocksdb.MutableDBOptionsTest\
org.rocksdb.MutableOptionsGetSetTest \
org.rocksdb.NativeComparatorWrapperTest\
org.rocksdb.NativeLibraryLoaderTest\
org.rocksdb.OptimisticTransactionTest\

@@ -3697,6 +3697,146 @@ jboolean Java_org_rocksdb_Options_forceConsistencyChecks(
return static_cast<bool>(opts->force_consistency_checks);
}
/// BLOB options
/*
* Class: org_rocksdb_Options
* Method: setEnableBlobFiles
* Signature: (JZ)V
*/
void Java_org_rocksdb_Options_setEnableBlobFiles(JNIEnv*, jobject,
jlong jhandle,
jboolean jenable_blob_files) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->enable_blob_files = static_cast<bool>(jenable_blob_files);
}
/*
* Class: org_rocksdb_Options
* Method: enableBlobFiles
* Signature: (J)Z
*/
jboolean Java_org_rocksdb_Options_enableBlobFiles(JNIEnv*, jobject,
jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return static_cast<jboolean>(opts->enable_blob_files);
}
/*
* Class: org_rocksdb_Options
* Method: setMinBlobSize
* Signature: (JJ)V
*/
void Java_org_rocksdb_Options_setMinBlobSize(JNIEnv*, jobject, jlong jhandle,
jlong jmin_blob_size) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->min_blob_size = static_cast<uint64_t>(jmin_blob_size);
}
/*
* Class: org_rocksdb_Options
* Method: minBlobSize
* Signature: (J)J
*/
jlong Java_org_rocksdb_Options_minBlobSize(JNIEnv*, jobject, jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return static_cast<jlong>(opts->min_blob_size);
}
/*
* Class: org_rocksdb_Options
* Method: setBlobFileSize
* Signature: (JJ)V
*/
void Java_org_rocksdb_Options_setBlobFileSize(JNIEnv*, jobject, jlong jhandle,
jlong jblob_file_size) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->blob_file_size = static_cast<uint64_t>(jblob_file_size);
}
/*
* Class: org_rocksdb_Options
* Method: blobFileSize
* Signature: (J)J
*/
jlong Java_org_rocksdb_Options_blobFileSize(JNIEnv*, jobject, jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return static_cast<jlong>(opts->blob_file_size);
}
/*
* Class: org_rocksdb_Options
* Method: setBlobCompressionType
* Signature: (JB)V
*/
void Java_org_rocksdb_Options_setBlobCompressionType(
JNIEnv*, jobject, jlong jhandle, jbyte jblob_compression_type_value) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->blob_compression_type =
ROCKSDB_NAMESPACE::CompressionTypeJni::toCppCompressionType(
jblob_compression_type_value);
}
/*
* Class: org_rocksdb_Options
* Method: blobCompressionType
* Signature: (J)B
*/
jbyte Java_org_rocksdb_Options_blobCompressionType(JNIEnv*, jobject,
jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return ROCKSDB_NAMESPACE::CompressionTypeJni::toJavaCompressionType(
opts->blob_compression_type);
}
/*
* Class: org_rocksdb_Options
* Method: setEnableBlobGarbageCollection
* Signature: (JZ)V
*/
void Java_org_rocksdb_Options_setEnableBlobGarbageCollection(
JNIEnv*, jobject, jlong jhandle, jboolean jenable_blob_garbage_collection) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->enable_blob_garbage_collection =
static_cast<bool>(jenable_blob_garbage_collection);
}
/*
* Class: org_rocksdb_Options
* Method: enableBlobGarbageCollection
* Signature: (J)Z
*/
jboolean Java_org_rocksdb_Options_enableBlobGarbageCollection(JNIEnv*, jobject,
jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return static_cast<jboolean>(opts->enable_blob_garbage_collection);
}
/*
* Class: org_rocksdb_Options
* Method: setBlobGarbageCollectionAgeCutoff
* Signature: (JD)V
*/
void Java_org_rocksdb_Options_setBlobGarbageCollectionAgeCutoff(
JNIEnv*, jobject, jlong jhandle,
jdouble jblob_garbage_collection_age_cutoff) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
opts->blob_garbage_collection_age_cutoff =
static_cast<double>(jblob_garbage_collection_age_cutoff);
}
/*
* Class: org_rocksdb_Options
* Method: blobGarbageCollectionAgeCutoff
* Signature: (J)D
*/
jdouble Java_org_rocksdb_Options_blobGarbageCollectionAgeCutoff(JNIEnv*,
jobject,
jlong jhandle) {
auto* opts = reinterpret_cast<ROCKSDB_NAMESPACE::Options*>(jhandle);
return static_cast<jdouble>(opts->blob_garbage_collection_age_cutoff);
}
//////////////////////////////////////////////////////////////////////////////
// ROCKSDB_NAMESPACE::ColumnFamilyOptions
@@ -5249,7 +5389,160 @@ jboolean Java_org_rocksdb_ColumnFamilyOptions_forceConsistencyChecks(
JNIEnv*, jobject, jlong jhandle) {
auto* cf_opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jboolean>(cf_opts->force_consistency_checks);
}
/// BLOB options
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setEnableBlobFiles
* Signature: (JZ)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setEnableBlobFiles(
JNIEnv*, jobject, jlong jhandle, jboolean jenable_blob_files) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->enable_blob_files = static_cast<bool>(jenable_blob_files);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: enableBlobFiles
* Signature: (J)Z
*/
jboolean Java_org_rocksdb_ColumnFamilyOptions_enableBlobFiles(JNIEnv*, jobject,
jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jboolean>(opts->enable_blob_files);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setMinBlobSize
* Signature: (JJ)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setMinBlobSize(JNIEnv*, jobject,
jlong jhandle,
jlong jmin_blob_size) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->min_blob_size = static_cast<uint64_t>(jmin_blob_size);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: minBlobSize
* Signature: (J)J
*/
jlong Java_org_rocksdb_ColumnFamilyOptions_minBlobSize(JNIEnv*, jobject,
jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jlong>(opts->min_blob_size);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setBlobFileSize
* Signature: (JJ)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setBlobFileSize(
JNIEnv*, jobject, jlong jhandle, jlong jblob_file_size) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->blob_file_size = static_cast<uint64_t>(jblob_file_size);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: blobFileSize
* Signature: (J)J
*/
jlong Java_org_rocksdb_ColumnFamilyOptions_blobFileSize(JNIEnv*, jobject,
jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jlong>(opts->blob_file_size);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setBlobCompressionType
* Signature: (JB)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setBlobCompressionType(
JNIEnv*, jobject, jlong jhandle, jbyte jblob_compression_type_value) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->blob_compression_type =
ROCKSDB_NAMESPACE::CompressionTypeJni::toCppCompressionType(
jblob_compression_type_value);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: blobCompressionType
* Signature: (J)B
*/
jbyte Java_org_rocksdb_ColumnFamilyOptions_blobCompressionType(JNIEnv*, jobject,
jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return ROCKSDB_NAMESPACE::CompressionTypeJni::toJavaCompressionType(
opts->blob_compression_type);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setEnableBlobGarbageCollection
* Signature: (JZ)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setEnableBlobGarbageCollection(
JNIEnv*, jobject, jlong jhandle, jboolean jenable_blob_garbage_collection) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->enable_blob_garbage_collection =
static_cast<bool>(jenable_blob_garbage_collection);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: enableBlobGarbageCollection
* Signature: (J)Z
*/
jboolean Java_org_rocksdb_ColumnFamilyOptions_enableBlobGarbageCollection(
JNIEnv*, jobject, jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jboolean>(opts->enable_blob_garbage_collection);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: setBlobGarbageCollectionAgeCutoff
* Signature: (JD)V
*/
void Java_org_rocksdb_ColumnFamilyOptions_setBlobGarbageCollectionAgeCutoff(
JNIEnv*, jobject, jlong jhandle,
jdouble jblob_garbage_collection_age_cutoff) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
opts->blob_garbage_collection_age_cutoff =
static_cast<double>(jblob_garbage_collection_age_cutoff);
}
/*
* Class: org_rocksdb_ColumnFamilyOptions
* Method: blobGarbageCollectionAgeCutoff
* Signature: (J)D
*/
jdouble Java_org_rocksdb_ColumnFamilyOptions_blobGarbageCollectionAgeCutoff(
JNIEnv*, jobject, jlong jhandle) {
auto* opts =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyOptions*>(jhandle);
return static_cast<jdouble>(opts->blob_garbage_collection_age_cutoff);
}
/////////////////////////////////////////////////////////////////////

@@ -2709,6 +2709,9 @@ void Java_org_rocksdb_RocksDB_setOptions(
auto* db = reinterpret_cast<ROCKSDB_NAMESPACE::DB*>(jdb_handle);
auto* cf_handle =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyHandle*>(jcf_handle);
if (cf_handle == nullptr) {
cf_handle = db->DefaultColumnFamily();
}
auto s = db->SetOptions(cf_handle, options_map);
if (!s.ok()) {
ROCKSDB_NAMESPACE::RocksDBExceptionJni::ThrowNew(env, s);
@@ -2773,6 +2776,55 @@ void Java_org_rocksdb_RocksDB_setDBOptions(
}
}
/*
* Class: org_rocksdb_RocksDB
* Method: getOptions
* Signature: (JJ)Ljava/lang/String;
*/
jstring Java_org_rocksdb_RocksDB_getOptions(JNIEnv* env, jobject,
jlong jdb_handle,
jlong jcf_handle) {
auto* db = reinterpret_cast<ROCKSDB_NAMESPACE::DB*>(jdb_handle);
ROCKSDB_NAMESPACE::ColumnFamilyHandle* cf_handle;
if (jcf_handle == 0) {
cf_handle = db->DefaultColumnFamily();
} else {
cf_handle =
reinterpret_cast<ROCKSDB_NAMESPACE::ColumnFamilyHandle*>(jcf_handle);
}
auto options = db->GetOptions(cf_handle);
std::string options_as_string;
ROCKSDB_NAMESPACE::Status s =
GetStringFromColumnFamilyOptions(&options_as_string, options);
if (!s.ok()) {
ROCKSDB_NAMESPACE::RocksDBExceptionJni::ThrowNew(env, s);
return nullptr;
}
return env->NewStringUTF(options_as_string.c_str());
}
/*
* Class: org_rocksdb_RocksDB
* Method: getDBOptions
* Signature: (J)Ljava/lang/String;
*/
jstring Java_org_rocksdb_RocksDB_getDBOptions(JNIEnv* env, jobject,
jlong jdb_handle) {
auto* db = reinterpret_cast<ROCKSDB_NAMESPACE::DB*>(jdb_handle);
auto options = db->GetDBOptions();
std::string options_as_string;
ROCKSDB_NAMESPACE::Status s =
GetStringFromDBOptions(&options_as_string, options);
if (!s.ok()) {
ROCKSDB_NAMESPACE::RocksDBExceptionJni::ThrowNew(env, s);
return nullptr;
}
return env->NewStringUTF(options_as_string.c_str());
}
/*
* Class: org_rocksdb_RocksDB
* Method: compactFiles

@@ -7,7 +7,7 @@ public abstract class AbstractMutableOptions {
protected static final String KEY_VALUE_PAIR_SEPARATOR = ";";
protected static final char KEY_VALUE_SEPARATOR = '=';
static final String INT_ARRAY_INT_SEPARATOR = ":";
protected final String[] keys;
private final String[] values;
@@ -59,6 +59,7 @@ public abstract class AbstractMutableOptions {
K extends MutableOptionKey> {
private final Map<K, MutableOptionValue<?>> options = new LinkedHashMap<>();
private final List<OptionString.Entry> unknown = new ArrayList<>();
protected abstract U self();
@@ -213,44 +214,149 @@
return ((MutableOptionValue.MutableOptionEnumValue<N>) value).asObject();
}
/**
 * Parse a string into a long value, accepting values expressed as a double (such as 9.00) which
 * are meant to be a long, not a double
 *
 * @param value the string containing a value which represents a long
 * @return the long value of the parsed string
 */
private long parseAsLong(final String value) {
try {
return Long.parseLong(value);
} catch (NumberFormatException nfe) {
final double doubleValue = Double.parseDouble(value);
if (doubleValue != Math.round(doubleValue))
throw new IllegalArgumentException("Unable to parse or round " + value + " to long");
return Math.round(doubleValue);
}
}
/**
 * Parse a string into an int value, accepting values expressed as a double (such as 9.00) which
 * are meant to be an int, not a double
 *
 * @param value the string containing a value which represents an int
 * @return the int value of the parsed string
 */
private int parseAsInt(final String value) {
try {
return Integer.parseInt(value);
} catch (NumberFormatException nfe) {
final double doubleValue = Double.parseDouble(value);
if (doubleValue != Math.round(doubleValue))
throw new IllegalArgumentException("Unable to parse or round " + value + " to int");
return (int) Math.round(doubleValue);
}
}
/**
 * Constructs a builder for mutable column family options from a hierarchical parsed options
 * string representation. The {@link OptionString.Parser} class output has been used to create a
 * (name,value)-list; each value may be either a simple string or a (name, value)-list in turn.
 *
 * @param options a list of parsed option string objects
 * @param ignoreUnknown what to do if the key is not one of the keys we expect
 *
 * @return a builder with the values from the parsed input set
 *
 * @throws IllegalArgumentException if an option value is of the wrong type, or a key is empty
 */
protected U fromParsed(final List<OptionString.Entry> options, final boolean ignoreUnknown) {
Objects.requireNonNull(options);
for (final OptionString.Entry option : options) {
try {
if (option.key.isEmpty()) {
throw new IllegalArgumentException("options string is invalid: " + option);
}
fromOptionString(option, ignoreUnknown);
} catch (NumberFormatException nfe) {
throw new IllegalArgumentException(
"" + option.key + "=" + option.value + " - not a valid value for its type", nfe);
}
}
return self();
}
/**
 * Set a value in the builder from the supplied option string
 *
 * @param option the option key/value to add to this builder
 * @param ignoreUnknown if this is not set, throw an exception when a key is not in the known
 *     set
 * @return the same object, after adding options
 * @throws IllegalArgumentException if the key is unknown, or a value has the wrong type/form
 */
private U fromOptionString(final OptionString.Entry option, final boolean ignoreUnknown)
throws IllegalArgumentException {
Objects.requireNonNull(option.key);
Objects.requireNonNull(option.value);
final K key = allKeys().get(option.key);
if (key == null && ignoreUnknown) {
unknown.add(option);
return self();
} else if (key == null) {
throw new IllegalArgumentException("Key: " + option.key + " is not a known option key");
}
if (!option.value.isList()) {
throw new IllegalArgumentException(
"Option: " + key + " is not a simple value or list, don't know how to parse it");
}
// Check that simple values are the single item in the array
if (key.getValueType() != MutableOptionKey.ValueType.INT_ARRAY) {
if (option.value.list.size() != 1) {
throw new IllegalArgumentException(
"Simple value does not have exactly 1 item: " + option.value.list);
}
}
final List<String> valueStrs = option.value.list;
final String valueStr = valueStrs.get(0);
switch (key.getValueType()) {
case DOUBLE:
return setDouble(key, Double.parseDouble(valueStr));
case LONG:
return setLong(key, parseAsLong(valueStr));
case INT:
return setInt(key, parseAsInt(valueStr));
case BOOLEAN:
return setBoolean(key, Boolean.parseBoolean(valueStr));
case INT_ARRAY:
final int[] value = new int[valueStrs.size()];
for (int i = 0; i < valueStrs.size(); i++) {
value[i] = Integer.parseInt(valueStrs.get(i));
}
return setIntArray(key, value);
case ENUM:
final Optional<CompressionType> compressionType =
CompressionType.getFromInternal(valueStr);
if (compressionType.isPresent()) {
return setEnum(key, compressionType.get());
}
}
throw new IllegalStateException(key + " has unknown value type: " + key.getValueType());
}
/**
*
* @return the list of keys encountered which were not known to the type being generated
*/
public List<OptionString.Entry> getUnknown() {
return new ArrayList<>(unknown);
}
}
}
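As a standalone illustration of the lenient numeric parsing described above, here is a hypothetical free-standing copy of the builder's private `parseAsLong` helper: whole numbers written as doubles (such as "9.00") are accepted, while genuinely fractional values are rejected.

```java
public class NumericOptionParse {
  // Free-standing copy of the parseAsLong behaviour described above:
  // accept "9" or "9.00" as the long 9, reject "9.5".
  public static long parseAsLong(final String value) {
    try {
      return Long.parseLong(value);
    } catch (NumberFormatException nfe) {
      final double doubleValue = Double.parseDouble(value);
      if (doubleValue != Math.round(doubleValue)) {
        throw new IllegalArgumentException("Unable to parse or round " + value + " to long");
      }
      return Math.round(doubleValue);
    }
  }

  public static void main(String[] args) {
    System.out.println(parseAsLong("9"));    // 9
    System.out.println(parseAsLong("9.00")); // 9
  }
}
```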

@@ -509,4 +509,185 @@ public interface AdvancedMutableColumnFamilyOptionsInterface<
* @return the periodic compaction in seconds.
*/
long periodicCompactionSeconds();
//
// BEGIN options for blobs (integrated BlobDB)
//
/**
* When set, large values (blobs) are written to separate blob files, and only
* pointers to them are stored in SST files. This can reduce write amplification
* for large-value use cases at the cost of introducing a level of indirection
* for reads. See also the options min_blob_size, blob_file_size,
* blob_compression_type, enable_blob_garbage_collection, and
* blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param enableBlobFiles true iff blob files should be enabled
*
* @return the reference to the current options.
*/
T setEnableBlobFiles(final boolean enableBlobFiles);
/**
* When set, large values (blobs) are written to separate blob files, and only
* pointers to them are stored in SST files. This can reduce write amplification
* for large-value use cases at the cost of introducing a level of indirection
* for reads. See also the options min_blob_size, blob_file_size,
* blob_compression_type, enable_blob_garbage_collection, and
* blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return true iff blob files are enabled
*/
boolean enableBlobFiles();
/**
* Set the size of the smallest value to be stored separately in a blob file. Values
* which have an uncompressed size smaller than this threshold are stored
* alongside the keys in SST files in the usual fashion. A value of zero for
* this option means that all values are stored in blob files. Note that
* enable_blob_files has to be set in order for this option to have any effect.
*
* Default: 0
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param minBlobSize the size of the smallest value to be stored separately in a blob file
* @return the reference to the current options.
*/
T setMinBlobSize(final long minBlobSize);
/**
* Get the size of the smallest value to be stored separately in a blob file. Values
* which have an uncompressed size smaller than this threshold are stored
* alongside the keys in SST files in the usual fashion. A value of zero for
* this option means that all values are stored in blob files. Note that
* enable_blob_files has to be set in order for this option to have any effect.
*
* Default: 0
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return the current minimum size of value which is stored separately in a blob
*/
long minBlobSize();
/**
* Set the size limit for blob files. When writing blob files, a new file is opened
* once this limit is reached. Note that enable_blob_files has to be set in
* order for this option to have any effect.
*
* Default: 256 MB
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param blobFileSize the size limit for blob files
*
* @return the reference to the current options.
*/
T setBlobFileSize(final long blobFileSize);
/**
* The size limit for blob files. When writing blob files, a new file is opened
* once this limit is reached.
*
* @return the current size limit for blob files
*/
long blobFileSize();
/**
* Set the compression algorithm to use for large values stored in blob files. Note
* that enable_blob_files has to be set in order for this option to have any
* effect.
*
* Default: no compression
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param compressionType the compression algorithm to use.
*
* @return the reference to the current options.
*/
T setBlobCompressionType(CompressionType compressionType);
/**
* Get the compression algorithm in use for large values stored in blob files.
* Note that enable_blob_files has to be set in order for this option to have any
* effect.
*
* @return the current compression algorithm
*/
CompressionType blobCompressionType();
/**
* Enable/disable garbage collection of blobs. Blob GC is performed as part of
* compaction. Valid blobs residing in blob files older than a cutoff get
* relocated to new files as they are encountered during compaction, which makes
* it possible to clean up blob files once they contain nothing but
* obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* @param enableBlobGarbageCollection the new enabled/disabled state of blob garbage collection
*
* @return the reference to the current options.
*/
T setEnableBlobGarbageCollection(final boolean enableBlobGarbageCollection);
/**
* Query whether garbage collection of blobs is enabled. Blob GC is performed as part of
* compaction. Valid blobs residing in blob files older than a cutoff get
* relocated to new files as they are encountered during compaction, which makes
* it possible to clean up blob files once they contain nothing but
* obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* @return true iff blob garbage collection is currently enabled.
*/
boolean enableBlobGarbageCollection();
/**
* Set cutoff in terms of blob file age for garbage collection. Blobs in the
* oldest N blob files will be relocated when encountered during compaction,
* where N = garbage_collection_cutoff * number_of_blob_files. Note that
* enable_blob_garbage_collection has to be set in order for this option to have
* any effect.
*
* Default: 0.25
*
* @param blobGarbageCollectionAgeCutoff the new age cutoff
*
* @return the reference to the current options.
*/
T setBlobGarbageCollectionAgeCutoff(double blobGarbageCollectionAgeCutoff);
/**
* Get cutoff in terms of blob file age for garbage collection. Blobs in the
* oldest N blob files will be relocated when encountered during compaction,
* where N = garbage_collection_cutoff * number_of_blob_files. Note that
* enable_blob_garbage_collection has to be set in order for this option to have
* any effect.
*
* Default: 0.25
*
* @return the current age cutoff for garbage collection
*/
double blobGarbageCollectionAgeCutoff();
//
// END options for blobs (integrated BlobDB)
//
}
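To make the age-cutoff arithmetic in the javadoc above concrete, here is a small sketch of the documented rule N = garbage_collection_cutoff * number_of_blob_files; the helper name and the file counts are illustrative, not part of the API:

```java
public class BlobGcCutoff {
  // Number of oldest blob files eligible for GC under the rule quoted above:
  // N = ageCutoff * numBlobFiles, truncated to a whole number of files.
  public static int eligibleFiles(final double ageCutoff, final int numBlobFiles) {
    return (int) (ageCutoff * numBlobFiles);
  }

  public static void main(String[] args) {
    // With the default cutoff of 0.25 and 16 blob files, the oldest 4 are eligible.
    System.out.println(eligibleFiles(0.25, 16)); // 4
  }
}
```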

@@ -612,8 +612,8 @@ public class ColumnFamilyOptions extends RocksObject
assert (isOwningHandle());
final int len = cfPaths.size();
final String[] paths = new String[len];
final long[] targetSizes = new long[len];
int i = 0;
for (final DbPath dbPath : cfPaths) {
@@ -633,8 +633,8 @@ public class ColumnFamilyOptions extends RocksObject
return Collections.emptyList();
}
final String[] paths = new String[len];
final long[] targetSizes = new long[len];
cfPaths(nativeHandle_, paths, targetSizes);
@@ -932,6 +932,242 @@ public class ColumnFamilyOptions extends RocksObject
return sstPartitionerFactory_;
}
//
// BEGIN options for blobs (integrated BlobDB)
//
/**
* When set, large values (blobs) are written to separate blob files, and only
* pointers to them are stored in SST files. This can reduce write amplification
* for large-value use cases at the cost of introducing a level of indirection
* for reads. See also the options min_blob_size, blob_file_size,
* blob_compression_type, enable_blob_garbage_collection, and
* blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param enableBlobFiles true iff blob files should be enabled
*
* @return the reference to the current options.
*/
@Override
public ColumnFamilyOptions setEnableBlobFiles(final boolean enableBlobFiles) {
setEnableBlobFiles(nativeHandle_, enableBlobFiles);
return this;
}
/**
* When set, large values (blobs) are written to separate blob files, and only
* pointers to them are stored in SST files. This can reduce write amplification
* for large-value use cases at the cost of introducing a level of indirection
* for reads. See also the options min_blob_size, blob_file_size,
* blob_compression_type, enable_blob_garbage_collection, and
* blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return true iff blob files are currently enabled
*/
public boolean enableBlobFiles() {
return enableBlobFiles(nativeHandle_);
}
/**
* Set the size of the smallest value to be stored separately in a blob file. Values
* which have an uncompressed size smaller than this threshold are stored
* alongside the keys in SST files in the usual fashion. A value of zero for
* this option means that all values are stored in blob files. Note that
* enable_blob_files has to be set in order for this option to have any effect.
*
* Default: 0
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param minBlobSize the size of the smallest value to be stored separately in a blob file
* @return these options, updated with the supplied minimum blob size value
*/
@Override
public ColumnFamilyOptions setMinBlobSize(final long minBlobSize) {
setMinBlobSize(nativeHandle_, minBlobSize);
return this;
}
/**
* Get the size of the smallest value to be stored separately in a blob file. Values
* which have an uncompressed size smaller than this threshold are stored
* alongside the keys in SST files in the usual fashion. A value of zero for
* this option means that all values are stored in blob files. Note that
* enable_blob_files has to be set in order for this option to have any effect.
*
* Default: 0
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return the current minimum blob size
*/
@Override
public long minBlobSize() {
return minBlobSize(nativeHandle_);
}
/**
* Set the size limit for blob files. When writing blob files, a new file is opened
* once this limit is reached. Note that enable_blob_files has to be set in
* order for this option to have any effect.
*
* Default: 256 MB
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param blobFileSize the new size limit for blob files
*
* @return the reference to the current options.
*/
@Override
public ColumnFamilyOptions setBlobFileSize(final long blobFileSize) {
setBlobFileSize(nativeHandle_, blobFileSize);
return this;
}
/**
* Get the size limit for blob files. When writing blob files, a new file is opened
* once this limit is reached. Note that enable_blob_files has to be set in
* order for this option to have any effect.
*
* Default: 256 MB
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return the size limit for blob files
*/
@Override
public long blobFileSize() {
return blobFileSize(nativeHandle_);
}
/**
* Set the compression algorithm to use for large values stored in blob files. Note
* that enable_blob_files has to be set in order for this option to have any
* effect.
*
* Default: no compression
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @param compressionType the compression algorithm to use
*
* @return the reference to the current options.
*/
@Override
public ColumnFamilyOptions setBlobCompressionType(final CompressionType compressionType) {
setBlobCompressionType(nativeHandle_, compressionType.getValue());
return this;
}
/**
* Get the compression algorithm to use for large values stored in blob files. Note
* that enable_blob_files has to be set in order for this option to have any
* effect.
*
* Default: no compression
*
* Dynamically changeable through
* {@link RocksDB#setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions)}.
*
* @return the compression algorithm currently in use for blobs
*/
@Override
public CompressionType blobCompressionType() {
return CompressionType.values()[blobCompressionType(nativeHandle_)];
}
/**
* Enable/disable garbage collection of blobs. Blob GC is performed as part of
* compaction. Valid blobs residing in blob files older than a cutoff get
* relocated to new files as they are encountered during compaction, which makes
* it possible to clean up blob files once they contain nothing but
* obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* @param enableBlobGarbageCollection true iff blob garbage collection is to be enabled
*
* @return the reference to the current options.
*/
@Override
public ColumnFamilyOptions setEnableBlobGarbageCollection(
final boolean enableBlobGarbageCollection) {
setEnableBlobGarbageCollection(nativeHandle_, enableBlobGarbageCollection);
return this;
}
/**
* Get the enabled/disabled state of garbage collection of blobs. Blob GC is performed as part of
* compaction. Valid blobs residing in blob files older than a cutoff get
* relocated to new files as they are encountered during compaction, which makes
* it possible to clean up blob files once they contain nothing but
* obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff below.
*
* Default: false
*
* @return true iff blob garbage collection is currently enabled
*/
@Override
public boolean enableBlobGarbageCollection() {
return enableBlobGarbageCollection(nativeHandle_);
}
/**
* Set the cutoff in terms of blob file age for garbage collection. Blobs in the
* oldest N blob files will be relocated when encountered during compaction,
* where N = garbage_collection_cutoff * number_of_blob_files. Note that
* enable_blob_garbage_collection has to be set in order for this option to have
* any effect.
*
* Default: 0.25
*
* @param blobGarbageCollectionAgeCutoff the new blob garbage collection age cutoff
*
* @return the reference to the current options.
*/
@Override
public ColumnFamilyOptions setBlobGarbageCollectionAgeCutoff(
final double blobGarbageCollectionAgeCutoff) {
setBlobGarbageCollectionAgeCutoff(nativeHandle_, blobGarbageCollectionAgeCutoff);
return this;
}
/**
* Get the cutoff in terms of blob file age for garbage collection. Blobs in the
* oldest N blob files will be relocated when encountered during compaction,
* where N = garbage_collection_cutoff * number_of_blob_files. Note that
* enable_blob_garbage_collection has to be set in order for this option to have
* any effect.
*
* Default: 0.25
*
* @return the current blob garbage collection age cutoff
*/
@Override
public double blobGarbageCollectionAgeCutoff() {
return blobGarbageCollectionAgeCutoff(nativeHandle_);
}
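As a worked example of the cutoff arithmetic described above, N = garbage_collection_cutoff * number_of_blob_files; a stand-alone sketch, independent of RocksDB (the truncation of fractional results here is an assumption for illustration, not a statement about RocksDB's exact rounding):

```java
public class BlobGcCutoffSketch {
  // Number of oldest blob files eligible for GC relocation, per the formula
  // N = cutoff * number_of_blob_files. Fractional results truncate here;
  // the exact rounding inside RocksDB is not specified by this Javadoc.
  public static int eligibleOldestFiles(final double cutoff, final int numBlobFiles) {
    return (int) (cutoff * numBlobFiles);
  }

  public static void main(final String[] args) {
    System.out.println(eligibleOldestFiles(0.25, 8));  // 2: the default cutoff, 8 files
    System.out.println(eligibleOldestFiles(0.25, 3));  // 0: too few files to reach the cutoff
  }
}
```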
//
// END options for blobs (integrated BlobDB)
//
private static native long getColumnFamilyOptionsFromProps(
final long cfgHandle, String optString);
private static native long getColumnFamilyOptionsFromProps(final String optString);
@@ -1108,6 +1344,21 @@ public class ColumnFamilyOptions extends RocksObject
private static native void setCompactionThreadLimiter(
final long nativeHandle_, final long compactionThreadLimiterHandle);
private native void setEnableBlobFiles(final long nativeHandle_, final boolean enableBlobFiles);
private native boolean enableBlobFiles(final long nativeHandle_);
private native void setMinBlobSize(final long nativeHandle_, final long minBlobSize);
private native long minBlobSize(final long nativeHandle_);
private native void setBlobFileSize(final long nativeHandle_, final long blobFileSize);
private native long blobFileSize(final long nativeHandle_);
private native void setBlobCompressionType(final long nativeHandle_, final byte compressionType);
private native byte blobCompressionType(final long nativeHandle_);
private native void setEnableBlobGarbageCollection(
final long nativeHandle_, final boolean enableBlobGarbageCollection);
private native boolean enableBlobGarbageCollection(final long nativeHandle_);
private native void setBlobGarbageCollectionAgeCutoff(
final long nativeHandle_, final double blobGarbageCollectionAgeCutoff);
private native double blobGarbageCollectionAgeCutoff(final long nativeHandle_);
// instance variables
// NOTE: If you add new member variables, please update the copy constructor above!
private MemTableConfig memTableConfig_;

@@ -5,6 +5,8 @@
package org.rocksdb;
import java.util.Optional;
/**
* Enum CompressionType
*
@@ -14,16 +16,15 @@ package org.rocksdb;
* compression method (if any) is used to compress a block.</p>
*/
public enum CompressionType {
NO_COMPRESSION((byte) 0x0, null),
SNAPPY_COMPRESSION((byte) 0x1, "snappy"),
ZLIB_COMPRESSION((byte) 0x2, "z"),
BZLIB2_COMPRESSION((byte) 0x3, "bzip2"),
LZ4_COMPRESSION((byte) 0x4, "lz4"),
LZ4HC_COMPRESSION((byte) 0x5, "lz4hc"),
XPRESS_COMPRESSION((byte) 0x6, "xpress"),
ZSTD_COMPRESSION((byte)0x7, "zstd"),
DISABLE_COMPRESSION_OPTION((byte)0x7F, null);
NO_COMPRESSION((byte) 0x0, null, "kNoCompression"),
SNAPPY_COMPRESSION((byte) 0x1, "snappy", "kSnappyCompression"),
ZLIB_COMPRESSION((byte) 0x2, "z", "kZlibCompression"),
BZLIB2_COMPRESSION((byte) 0x3, "bzip2", "kBZip2Compression"),
LZ4_COMPRESSION((byte) 0x4, "lz4", "kLZ4Compression"),
LZ4HC_COMPRESSION((byte) 0x5, "lz4hc", "kLZ4HCCompression"),
XPRESS_COMPRESSION((byte) 0x6, "xpress", "kXpressCompression"),
ZSTD_COMPRESSION((byte) 0x7, "zstd", "kZSTD"),
DISABLE_COMPRESSION_OPTION((byte) 0x7F, null, "kDisableCompressionOption");
/**
* <p>Get the CompressionType enumeration value by
@@ -70,6 +71,25 @@ public enum CompressionType {
"Illegal value provided for CompressionType.");
}
/**
* <p>Get a CompressionType value based on the string key in the C++ options output.
* This gets used in support of getting options into Java from an options string,
* which is generated at the C++ level.
* </p>
*
* @param internalName the internal (C++) name by which the option is known.
*
* @return CompressionType instance (optional)
*/
public static Optional<CompressionType> getFromInternal(final String internalName) {
for (final CompressionType compressionType : CompressionType.values()) {
if (compressionType.internalName_.equals(internalName)) {
return Optional.of(compressionType);
}
}
return Optional.empty();
}
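getFromInternal() above follows a common enum-lookup pattern: scan values() for a matching internal name and return an Optional. A self-contained sketch of the same idea; the two-entry Codec enum here is illustrative, not the real CompressionType:

```java
import java.util.Optional;

public class InternalNameLookupSketch {
  enum Codec {
    NONE("kNoCompression"),
    ZSTD("kZSTD");

    private final String internalName;
    Codec(final String internalName) { this.internalName = internalName; }

    // Linear scan over values(), returning Optional.empty() for unknown names,
    // as CompressionType.getFromInternal does above.
    static Optional<Codec> fromInternal(final String internalName) {
      for (final Codec c : values()) {
        if (c.internalName.equals(internalName)) {
          return Optional.of(c);
        }
      }
      return Optional.empty();
    }
  }

  public static void main(final String[] args) {
    System.out.println(Codec.fromInternal("kZSTD"));     // Optional[ZSTD]
    System.out.println(Codec.fromInternal("kUnknown"));  // Optional.empty
  }
}
```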
/**
* <p>Returns the byte value of the enumerations value.</p>
*
@@ -89,11 +109,13 @@ public enum CompressionType {
return libraryName_;
}
CompressionType(final byte value, final String libraryName) {
CompressionType(final byte value, final String libraryName, final String internalName) {
value_ = value;
libraryName_ = libraryName;
internalName_ = internalName;
}
private final byte value_;
private final String libraryName_;
private final String internalName_;
}

@@ -39,42 +39,24 @@ public class MutableColumnFamilyOptions
*
* The format is: key1=value1;key2=value2;key3=value3 etc
*
* For int[] values, each int should be separated by a comma, e.g.
* For int[] values, each int should be separated by a colon, e.g.
*
* key1=value1;intArrayKey1=1,2,3
* key1=value1;intArrayKey1=1:2:3
*
* @param str The string representation of the mutable column family options
*
* @return A builder for the mutable column family options
*/
public static MutableColumnFamilyOptionsBuilder parse(final String str) {
public static MutableColumnFamilyOptionsBuilder parse(
final String str, final boolean ignoreUnknown) {
Objects.requireNonNull(str);
final MutableColumnFamilyOptionsBuilder builder =
new MutableColumnFamilyOptionsBuilder();
final String[] options = str.trim().split(KEY_VALUE_PAIR_SEPARATOR);
for(final String option : options) {
final int equalsOffset = option.indexOf(KEY_VALUE_SEPARATOR);
if(equalsOffset <= 0) {
throw new IllegalArgumentException(
"options string has an invalid key=value pair");
}
final String key = option.substring(0, equalsOffset);
if(key.isEmpty()) {
throw new IllegalArgumentException("options string is invalid");
}
final String value = option.substring(equalsOffset + 1);
if(value.isEmpty()) {
throw new IllegalArgumentException("options string is invalid");
}
builder.fromString(key, value);
}
final List<OptionString.Entry> parsedOptions = OptionString.Parser.parse(str);
return new MutableColumnFamilyOptionsBuilder().fromParsed(parsedOptions, ignoreUnknown);
}
return builder;
public static MutableColumnFamilyOptionsBuilder parse(final String str) {
return parse(str, false);
}
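The flat form of the format documented above (key=value pairs separated by ';', list values separated by ':') can be illustrated with a minimal stdlib-only splitter. This is a simplification for illustration; the real parsing is done by OptionString.Parser, which also handles nested {...} values:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleOptionStringSketch {
  // Splits "k1=v1;k2=a:b:c" into an ordered map; list values keep the
  // raw ':'-separated string, which callers can split further.
  public static Map<String, String> parseFlat(final String str) {
    final Map<String, String> result = new LinkedHashMap<>();
    for (final String pair : str.trim().split(";")) {
      final int eq = pair.indexOf('=');
      if (eq <= 0 || eq == pair.length() - 1) {
        throw new IllegalArgumentException("invalid key=value pair: " + pair);
      }
      result.put(pair.substring(0, eq), pair.substring(eq + 1));
    }
    return result;
  }

  public static void main(final String[] args) {
    final Map<String, String> opts =
        parseFlat("min_blob_size=65536;max_bytes_for_level_multiplier_additional=1:2:3");
    System.out.println(opts.get("min_blob_size"));  // 65536
    System.out.println(Arrays.toString(
        opts.get("max_bytes_for_level_multiplier_additional").split(":")));  // [1, 2, 3]
  }
}
```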
private interface MutableColumnFamilyOptionKey extends MutableOptionKey {}
@@ -131,11 +113,30 @@ public class MutableColumnFamilyOptions
}
}
public enum BlobOption implements MutableColumnFamilyOptionKey {
enable_blob_files(ValueType.BOOLEAN),
min_blob_size(ValueType.LONG),
blob_file_size(ValueType.LONG),
blob_compression_type(ValueType.ENUM),
enable_blob_garbage_collection(ValueType.BOOLEAN),
blob_garbage_collection_age_cutoff(ValueType.DOUBLE);
private final ValueType valueType;
BlobOption(final ValueType valueType) {
this.valueType = valueType;
}
@Override
public ValueType getValueType() {
return valueType;
}
}
public enum MiscOption implements MutableColumnFamilyOptionKey {
max_sequential_skip_in_iterations(ValueType.LONG),
paranoid_file_checks(ValueType.BOOLEAN),
report_bg_io_stats(ValueType.BOOLEAN),
compression_type(ValueType.ENUM);
compression(ValueType.ENUM);
private final ValueType valueType;
MiscOption(final ValueType valueType) {
@@ -165,6 +166,10 @@ public class MutableColumnFamilyOptions
for(final MutableColumnFamilyOptionKey key : MiscOption.values()) {
ALL_KEYS_LOOKUP.put(key.name(), key);
}
for (final MutableColumnFamilyOptionKey key : BlobOption.values()) {
ALL_KEYS_LOOKUP.put(key.name(), key);
}
}
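The static-initializer registration above merges several key enums (compaction, misc, and now blob options) into the single name-indexed ALL_KEYS_LOOKUP map. A stand-alone sketch of that pattern, with illustrative enum names:

```java
import java.util.HashMap;
import java.util.Map;

public class OptionKeyLookupSketch {
  interface OptionKey { String name(); }  // enums satisfy name() automatically

  enum BlobKey implements OptionKey { enable_blob_files, min_blob_size }
  enum MiscKey implements OptionKey { paranoid_file_checks }

  // One lookup table spanning all key enums, keyed by the option-string name,
  // mirroring ALL_KEYS_LOOKUP in MutableColumnFamilyOptionsBuilder.
  static final Map<String, OptionKey> ALL_KEYS = new HashMap<>();
  static {
    for (final OptionKey k : BlobKey.values()) ALL_KEYS.put(k.name(), k);
    for (final OptionKey k : MiscKey.values()) ALL_KEYS.put(k.name(), k);
  }

  public static void main(final String[] args) {
    System.out.println(ALL_KEYS.get("min_blob_size"));   // min_blob_size
    System.out.println(ALL_KEYS.get("no_such_option"));  // null
  }
}
```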
private MutableColumnFamilyOptionsBuilder() {
@@ -438,12 +443,12 @@ public class MutableColumnFamilyOptions
@Override
public MutableColumnFamilyOptionsBuilder setCompressionType(
final CompressionType compressionType) {
return setEnum(MiscOption.compression_type, compressionType);
return setEnum(MiscOption.compression, compressionType);
}
@Override
public CompressionType compressionType() {
return (CompressionType)getEnum(MiscOption.compression_type);
return (CompressionType) getEnum(MiscOption.compression);
}
@Override
@@ -477,5 +482,69 @@ public class MutableColumnFamilyOptions
public long periodicCompactionSeconds() {
return getLong(CompactionOption.periodic_compaction_seconds);
}
@Override
public MutableColumnFamilyOptionsBuilder setEnableBlobFiles(final boolean enableBlobFiles) {
return setBoolean(BlobOption.enable_blob_files, enableBlobFiles);
}
@Override
public boolean enableBlobFiles() {
return getBoolean(BlobOption.enable_blob_files);
}
@Override
public MutableColumnFamilyOptionsBuilder setMinBlobSize(final long minBlobSize) {
return setLong(BlobOption.min_blob_size, minBlobSize);
}
@Override
public long minBlobSize() {
return getLong(BlobOption.min_blob_size);
}
@Override
public MutableColumnFamilyOptionsBuilder setBlobFileSize(final long blobFileSize) {
return setLong(BlobOption.blob_file_size, blobFileSize);
}
@Override
public long blobFileSize() {
return getLong(BlobOption.blob_file_size);
}
@Override
public MutableColumnFamilyOptionsBuilder setBlobCompressionType(
final CompressionType compressionType) {
return setEnum(BlobOption.blob_compression_type, compressionType);
}
@Override
public CompressionType blobCompressionType() {
return (CompressionType) getEnum(BlobOption.blob_compression_type);
}
@Override
public MutableColumnFamilyOptionsBuilder setEnableBlobGarbageCollection(
final boolean enableBlobGarbageCollection) {
return setBoolean(BlobOption.enable_blob_garbage_collection, enableBlobGarbageCollection);
}
@Override
public boolean enableBlobGarbageCollection() {
return getBoolean(BlobOption.enable_blob_garbage_collection);
}
@Override
public MutableColumnFamilyOptionsBuilder setBlobGarbageCollectionAgeCutoff(
final double blobGarbageCollectionAgeCutoff) {
return setDouble(
BlobOption.blob_garbage_collection_age_cutoff, blobGarbageCollectionAgeCutoff);
}
@Override
public double blobGarbageCollectionAgeCutoff() {
return getDouble(BlobOption.blob_garbage_collection_age_cutoff);
}
}
}

@@ -6,6 +6,7 @@
package org.rocksdb;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
@@ -41,40 +42,21 @@ public class MutableDBOptions extends AbstractMutableOptions {
*
* For int[] values, each int should be separated by a colon, e.g.
*
* key1=value1;intArrayKey1=1,2,3
* key1=value1;intArrayKey1=1:2:3
*
* @param str The string representation of the mutable db options
*
* @return A builder for the mutable db options
*/
public static MutableDBOptionsBuilder parse(final String str) {
public static MutableDBOptionsBuilder parse(final String str, boolean ignoreUnknown) {
Objects.requireNonNull(str);
final MutableDBOptionsBuilder builder =
new MutableDBOptionsBuilder();
final String[] options = str.trim().split(KEY_VALUE_PAIR_SEPARATOR);
for(final String option : options) {
final int equalsOffset = option.indexOf(KEY_VALUE_SEPARATOR);
if(equalsOffset <= 0) {
throw new IllegalArgumentException(
"options string has an invalid key=value pair");
}
final String key = option.substring(0, equalsOffset);
if(key.isEmpty()) {
throw new IllegalArgumentException("options string is invalid");
}
final String value = option.substring(equalsOffset + 1);
if(value.isEmpty()) {
throw new IllegalArgumentException("options string is invalid");
}
builder.fromString(key, value);
}
final List<OptionString.Entry> parsedOptions = OptionString.Parser.parse(str);
return new MutableDBOptions.MutableDBOptionsBuilder().fromParsed(parsedOptions, ignoreUnknown);
}
return builder;
public static MutableDBOptionsBuilder parse(final String str) {
return parse(str, false);
}
private interface MutableDBOptionKey extends MutableOptionKey {}

@@ -326,7 +326,7 @@ public abstract class MutableOptionValue&lt;T&gt; {
String asString() {
final StringBuilder builder = new StringBuilder();
for(int i = 0; i < value.length; i++) {
builder.append(i);
builder.append(value[i]);
if(i + 1 < value.length) {
builder.append(INT_ARRAY_INT_SEPARATOR);
}
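The hunk above fixes asString() to append the element value[i] rather than the loop index i. The corrected join behaviour, sketched stand-alone:

```java
public class IntArrayJoinSketch {
  // Joins array elements with a separator. Appending the index i instead of
  // value[i] (the bug fixed above) would render {10, 20, 30} as "0:1:2".
  public static String join(final int[] value, final char separator) {
    final StringBuilder builder = new StringBuilder();
    for (int i = 0; i < value.length; i++) {
      builder.append(value[i]);
      if (i + 1 < value.length) {
        builder.append(separator);
      }
    }
    return builder.toString();
  }

  public static void main(final String[] args) {
    System.out.println(join(new int[] {10, 20, 30}, ':'));  // 10:20:30
  }
}
```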

@@ -0,0 +1,256 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
package org.rocksdb;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
public class OptionString {
private final static char kvPairSeparator = ';';
private final static char kvSeparator = '=';
private final static char complexValueBegin = '{';
private final static char complexValueEnd = '}';
private final static char wrappedValueBegin = '{';
private final static char wrappedValueEnd = '}';
private final static char arrayValueSeparator = ':';
static class Value {
final List<String> list;
final List<Entry> complex;
public Value(final List<String> list, final List<Entry> complex) {
this.list = list;
this.complex = complex;
}
public boolean isList() {
return (this.list != null && this.complex == null);
}
public static Value fromList(final List<String> list) {
return new Value(list, null);
}
public static Value fromComplex(final List<Entry> complex) {
return new Value(null, complex);
}
public String toString() {
final StringBuilder sb = new StringBuilder();
if (isList()) {
for (final String item : list) {
sb.append(item).append(arrayValueSeparator);
}
// remove the final separator
if (sb.length() > 0)
sb.delete(sb.length() - 1, sb.length());
} else {
sb.append('[');
for (final Entry entry : complex) {
sb.append(entry.toString()).append(';');
}
sb.append(']');
}
return sb.toString();
}
}
static class Entry {
public final String key;
public final Value value;
private Entry(final String key, final Value value) {
this.key = key;
this.value = value;
}
public String toString() {
return key + "=" + value;
}
}
static class Parser {
static class Exception extends RuntimeException {
public Exception(final String s) {
super(s);
}
}
final String str;
final StringBuilder sb;
private Parser(final String str) {
this.str = str;
this.sb = new StringBuilder(str);
}
private void exception(final String message) {
final int pos = str.length() - sb.length();
final int before = Math.min(pos, 64);
final int after = Math.min(64, str.length() - pos);
final String here =
str.substring(pos - before, pos) + "__*HERE*__" + str.substring(pos, pos + after);
throw new Parser.Exception(message + " at [" + here + "]");
}
private void skipWhite() {
while (sb.length() > 0 && Character.isWhitespace(sb.charAt(0))) {
sb.delete(0, 1);
}
}
private char first() {
if (sb.length() == 0)
exception("Unexpected end of input");
return sb.charAt(0);
}
private char next() {
if (sb.length() == 0)
exception("Unexpected end of input");
final char c = sb.charAt(0);
sb.delete(0, 1);
return c;
}
private boolean hasNext() {
return (sb.length() > 0);
}
private boolean is(final char c) {
return (sb.length() > 0 && sb.charAt(0) == c);
}
private boolean isKeyChar() {
if (!hasNext())
return false;
final char c = first();
return (Character.isAlphabetic(c) || Character.isDigit(c) || "_".indexOf(c) != -1);
}
private boolean isValueChar() {
if (!hasNext())
return false;
final char c = first();
return (Character.isAlphabetic(c) || Character.isDigit(c) || "_-+.[]".indexOf(c) != -1);
}
private String parseKey() {
final StringBuilder sbKey = new StringBuilder();
sbKey.append(next());
while (isKeyChar()) {
sbKey.append(next());
}
return sbKey.toString();
}
private String parseSimpleValue() {
if (is(wrappedValueBegin)) {
next();
final String result = parseSimpleValue();
if (!is(wrappedValueEnd)) {
exception("Expected to end a wrapped value with " + wrappedValueEnd);
}
next();
return result;
} else {
final StringBuilder sbValue = new StringBuilder();
while (isValueChar()) sbValue.append(next());
return sbValue.toString();
}
}
private List<String> parseList() {
final List<String> list = new ArrayList<>(1);
while (true) {
list.add(parseSimpleValue());
if (!is(arrayValueSeparator))
break;
next();
}
return list;
}
private Entry parseOption() {
skipWhite();
if (!isKeyChar()) {
exception("No valid key character(s) for key in key=value ");
}
final String key = parseKey();
skipWhite();
if (is(kvSeparator)) {
next();
} else {
exception("Expected = separating key and value");
}
skipWhite();
final Value value = parseValue();
return new Entry(key, value);
}
private Value parseValue() {
skipWhite();
if (is(complexValueBegin)) {
next();
skipWhite();
final Value value = Value.fromComplex(parseComplex());
skipWhite();
if (is(complexValueEnd)) {
next();
skipWhite();
} else {
exception("Expected } ending complex value");
}
return value;
} else if (isValueChar()) {
return Value.fromList(parseList());
}
exception("No valid value character(s) for value in key=value");
return null;
}
private List<Entry> parseComplex() {
final List<Entry> entries = new ArrayList<>();
skipWhite();
if (hasNext()) {
entries.add(parseOption());
skipWhite();
while (is(kvPairSeparator)) {
next();
skipWhite();
if (!isKeyChar()) {
// the separator was a terminator
break;
}
entries.add(parseOption());
skipWhite();
}
}
return entries;
}
public static List<Entry> parse(final String str) {
Objects.requireNonNull(str);
final Parser parser = new Parser(str);
final List<Entry> result = parser.parseComplex();
if (parser.hasNext()) {
parser.exception("Unexpected trailing input after parsing");
}
return result;
}
}
}
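OptionString.Parser above handles nesting that a plain String.split cannot: a complex value in braces may itself contain ';'. This stand-alone sketch shows the one structural trick involved, splitting on ';' only at brace depth zero. It is a simplification for illustration of what the recursive parseValue()/parseComplex() pair effectively achieves:

```java
import java.util.ArrayList;
import java.util.List;

public class BraceAwareSplitSketch {
  // Splits "a=1;b={x=2;y=3};c=4" into top-level entries, leaving the
  // {...} complex value intact for a recursive pass.
  public static List<String> splitTopLevel(final String str) {
    final List<String> entries = new ArrayList<>();
    int depth = 0;
    int start = 0;
    for (int i = 0; i < str.length(); i++) {
      final char c = str.charAt(i);
      if (c == '{') {
        depth++;
      } else if (c == '}') {
        depth--;
      } else if (c == ';' && depth == 0) {
        entries.add(str.substring(start, i));
        start = i + 1;
      }
    }
    if (start < str.length()) {
      entries.add(str.substring(start));
    }
    return entries;
  }

  public static void main(final String[] args) {
    System.out.println(splitTopLevel("a=1;b={x=2;y=3};c=4"));
    // [a=1, b={x=2;y=3}, c=4]
  }
}
```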

@@ -1338,8 +1338,8 @@ public class Options extends RocksObject
assert (isOwningHandle());
final int len = cfPaths.size();
final String paths[] = new String[len];
final long targetSizes[] = new long[len];
final String[] paths = new String[len];
final long[] targetSizes = new long[len];
int i = 0;
for (final DbPath dbPath : cfPaths) {
@@ -1359,8 +1359,8 @@ public class Options extends RocksObject
return Collections.emptyList();
}
final String paths[] = new String[len];
final long targetSizes[] = new long[len];
final String[] paths = new String[len];
final long[] targetSizes = new long[len];
cfPaths(nativeHandle_, paths, targetSizes);
@@ -2017,6 +2017,80 @@ public class Options extends RocksObject
return this.compactionThreadLimiter_;
}
//
// BEGIN options for blobs (integrated BlobDB)
//
@Override
public Options setEnableBlobFiles(final boolean enableBlobFiles) {
setEnableBlobFiles(nativeHandle_, enableBlobFiles);
return this;
}
@Override
public boolean enableBlobFiles() {
return enableBlobFiles(nativeHandle_);
}
@Override
public Options setMinBlobSize(final long minBlobSize) {
setMinBlobSize(nativeHandle_, minBlobSize);
return this;
}
@Override
public long minBlobSize() {
return minBlobSize(nativeHandle_);
}
@Override
public Options setBlobFileSize(final long blobFileSize) {
setBlobFileSize(nativeHandle_, blobFileSize);
return this;
}
@Override
public long blobFileSize() {
return blobFileSize(nativeHandle_);
}
@Override
public Options setBlobCompressionType(CompressionType compressionType) {
setBlobCompressionType(nativeHandle_, compressionType.getValue());
return this;
}
@Override
public CompressionType blobCompressionType() {
return CompressionType.values()[blobCompressionType(nativeHandle_)];
}
@Override
public Options setEnableBlobGarbageCollection(final boolean enableBlobGarbageCollection) {
setEnableBlobGarbageCollection(nativeHandle_, enableBlobGarbageCollection);
return this;
}
@Override
public boolean enableBlobGarbageCollection() {
return enableBlobGarbageCollection(nativeHandle_);
}
@Override
public Options setBlobGarbageCollectionAgeCutoff(double blobGarbageCollectionAgeCutoff) {
setBlobGarbageCollectionAgeCutoff(nativeHandle_, blobGarbageCollectionAgeCutoff);
return this;
}
@Override
public double blobGarbageCollectionAgeCutoff() {
return blobGarbageCollectionAgeCutoff(nativeHandle_);
}
//
// END options for blobs (integrated BlobDB)
//
private native static long newOptions();
private native static long newOptions(long dbOptHandle,
long cfOptHandle);
@@ -2431,6 +2505,21 @@ public class Options extends RocksObject
final long handle, final long bgerrorResumeRetryInterval);
private static native long bgerrorResumeRetryInterval(final long handle);
private native void setEnableBlobFiles(final long nativeHandle_, final boolean enableBlobFiles);
private native boolean enableBlobFiles(final long nativeHandle_);
private native void setMinBlobSize(final long nativeHandle_, final long minBlobSize);
private native long minBlobSize(final long nativeHandle_);
private native void setBlobFileSize(final long nativeHandle_, final long blobFileSize);
private native long blobFileSize(final long nativeHandle_);
private native void setBlobCompressionType(final long nativeHandle_, final byte compressionType);
private native byte blobCompressionType(final long nativeHandle_);
private native void setEnableBlobGarbageCollection(
final long nativeHandle_, final boolean enableBlobGarbageCollection);
private native boolean enableBlobGarbageCollection(final long nativeHandle_);
private native void setBlobGarbageCollectionAgeCutoff(
final long nativeHandle_, double blobGarbageCollectionAgeCutoff);
private native double blobGarbageCollectionAgeCutoff(final long nativeHandle_);
// instance variables
// NOTE: If you add new member variables, please update the copy constructor above!
private Env env_;

@@ -3746,6 +3746,50 @@ public class RocksDB extends RocksObject {
mutableColumnFamilyOptions.getKeys(), mutableColumnFamilyOptions.getValues());
}
/**
* Get the options for the column family handle
*
* @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle}
* instance, or null for the default column family.
*
* @return the options parsed from the options string returned by RocksDB
*
* @throws RocksDBException if an error occurs while getting the options string, or parsing the
* resulting options string into options
*/
public MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder getOptions(
/* @Nullable */ final ColumnFamilyHandle columnFamilyHandle) throws RocksDBException {
String optionsString = getOptions(
nativeHandle_, columnFamilyHandle == null ? 0 : columnFamilyHandle.nativeHandle_);
return MutableColumnFamilyOptions.parse(optionsString, true);
}
/**
* Get the options for the default column family
*
* @return the options parsed from the options string returned by RocksDB
*
* @throws RocksDBException if an error occurs while getting the options string, or parsing the
* resulting options string into options
*/
public MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder getOptions()
throws RocksDBException {
return getOptions(null);
}
/**
* Get the database options
*
* @return the DB options parsed from the options string returned by RocksDB
*
* @throws RocksDBException if an error occurs while getting the options string, or parsing the
* resulting options string into options
*/
public MutableDBOptions.MutableDBOptionsBuilder getDBOptions() throws RocksDBException {
String optionsString = getDBOptions(nativeHandle_);
return MutableDBOptions.parse(optionsString, true);
}
/**
* Change the options for the default column family handle.
*
@@ -4853,8 +4897,10 @@ public class RocksDB extends RocksObject {
throws RocksDBException;
private native void setOptions(final long handle, final long cfHandle,
final String[] keys, final String[] values) throws RocksDBException;
private native String getOptions(final long handle, final long cfHandle);
private native void setDBOptions(final long handle,
final String[] keys, final String[] values) throws RocksDBException;
private native String getDBOptions(final long handle);
private native String[] compactFiles(final long handle,
final long compactionOptionsHandle,
final long columnFamilyHandle,

@@ -0,0 +1,301 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
package org.rocksdb;
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.assertj.core.api.Assertions.assertThat;
import java.io.File;
import java.io.FilenameFilter;
import java.util.*;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
public class BlobOptionsTest {
@ClassRule
public static final RocksNativeLibraryResource ROCKS_NATIVE_LIBRARY_RESOURCE =
new RocksNativeLibraryResource();
@Rule public TemporaryFolder dbFolder = new TemporaryFolder();
final int minBlobSize = 65536;
final int largeBlobSize = 65536 * 2;
/**
* Count the files in the temporary folder which end with a particular suffix.
* Used to query the state of a test database, to check that it is as the test expects.
*
* @param endsWith the suffix to match
* @return the number of files with a matching suffix
*/
@SuppressWarnings("CallToStringConcatCanBeReplacedByOperator")
private int countDBFiles(final String endsWith) {
return Objects
.requireNonNull(dbFolder.getRoot().list(new FilenameFilter() {
@Override
public boolean accept(File dir, String name) {
return name.endsWith(endsWith);
}
}))
.length;
}
@SuppressWarnings("SameParameterValue")
private byte[] small_key(String suffix) {
return ("small_key_" + suffix).getBytes(UTF_8);
}
@SuppressWarnings("SameParameterValue")
private byte[] small_value(String suffix) {
return ("small_value_" + suffix).getBytes(UTF_8);
}
private byte[] large_key(String suffix) {
return ("large_key_" + suffix).getBytes(UTF_8);
}
private byte[] large_value(String repeat) {
final byte[] large_value = ("" + repeat + "_" + largeBlobSize + "b").getBytes(UTF_8);
final byte[] large_buffer = new byte[largeBlobSize];
for (int pos = 0; pos < largeBlobSize; pos += large_value.length) {
int numBytes = Math.min(large_value.length, large_buffer.length - pos);
System.arraycopy(large_value, 0, large_buffer, pos, numBytes);
}
return large_buffer;
}
@Test
public void blobOptions() {
try (final Options options = new Options()) {
assertThat(options.enableBlobFiles()).isEqualTo(false);
assertThat(options.minBlobSize()).isEqualTo(0);
assertThat(options.blobCompressionType()).isEqualTo(CompressionType.NO_COMPRESSION);
assertThat(options.enableBlobGarbageCollection()).isEqualTo(false);
assertThat(options.blobFileSize()).isEqualTo(268435456L);
assertThat(options.blobGarbageCollectionAgeCutoff()).isEqualTo(0.25);
assertThat(options.setEnableBlobFiles(true)).isEqualTo(options);
assertThat(options.setMinBlobSize(132768L)).isEqualTo(options);
assertThat(options.setBlobCompressionType(CompressionType.BZLIB2_COMPRESSION))
.isEqualTo(options);
assertThat(options.setEnableBlobGarbageCollection(true)).isEqualTo(options);
assertThat(options.setBlobFileSize(132768L)).isEqualTo(options);
assertThat(options.setBlobGarbageCollectionAgeCutoff(0.89)).isEqualTo(options);
assertThat(options.enableBlobFiles()).isEqualTo(true);
assertThat(options.minBlobSize()).isEqualTo(132768L);
assertThat(options.blobCompressionType()).isEqualTo(CompressionType.BZLIB2_COMPRESSION);
assertThat(options.enableBlobGarbageCollection()).isEqualTo(true);
assertThat(options.blobFileSize()).isEqualTo(132768L);
assertThat(options.blobGarbageCollectionAgeCutoff()).isEqualTo(0.89);
}
}
@Test
public void blobColumnFamilyOptions() {
try (final ColumnFamilyOptions columnFamilyOptions = new ColumnFamilyOptions()) {
assertThat(columnFamilyOptions.enableBlobFiles()).isEqualTo(false);
assertThat(columnFamilyOptions.minBlobSize()).isEqualTo(0);
assertThat(columnFamilyOptions.blobCompressionType())
.isEqualTo(CompressionType.NO_COMPRESSION);
assertThat(columnFamilyOptions.enableBlobGarbageCollection()).isEqualTo(false);
assertThat(columnFamilyOptions.blobFileSize()).isEqualTo(268435456L);
assertThat(columnFamilyOptions.blobGarbageCollectionAgeCutoff()).isEqualTo(0.25);
assertThat(columnFamilyOptions.setEnableBlobFiles(true)).isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.setMinBlobSize(132768L)).isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.setBlobCompressionType(CompressionType.BZLIB2_COMPRESSION))
.isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.setEnableBlobGarbageCollection(true))
.isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.setBlobFileSize(132768L)).isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.setBlobGarbageCollectionAgeCutoff(0.89))
.isEqualTo(columnFamilyOptions);
assertThat(columnFamilyOptions.enableBlobFiles()).isEqualTo(true);
assertThat(columnFamilyOptions.minBlobSize()).isEqualTo(132768L);
assertThat(columnFamilyOptions.blobCompressionType())
.isEqualTo(CompressionType.BZLIB2_COMPRESSION);
assertThat(columnFamilyOptions.enableBlobGarbageCollection()).isEqualTo(true);
assertThat(columnFamilyOptions.blobFileSize()).isEqualTo(132768L);
assertThat(columnFamilyOptions.blobGarbageCollectionAgeCutoff()).isEqualTo(0.89);
}
}
@Test
public void blobMutableColumnFamilyOptionsBuilder() {
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder =
MutableColumnFamilyOptions.builder();
builder.setEnableBlobFiles(true)
.setMinBlobSize(1024)
.setBlobCompressionType(CompressionType.BZLIB2_COMPRESSION)
.setEnableBlobGarbageCollection(true)
.setBlobGarbageCollectionAgeCutoff(0.89)
.setBlobFileSize(132768);
assertThat(builder.enableBlobFiles()).isEqualTo(true);
assertThat(builder.minBlobSize()).isEqualTo(1024);
assertThat(builder.blobCompressionType()).isEqualTo(CompressionType.BZLIB2_COMPRESSION);
assertThat(builder.enableBlobGarbageCollection()).isEqualTo(true);
assertThat(builder.blobGarbageCollectionAgeCutoff()).isEqualTo(0.89);
assertThat(builder.blobFileSize()).isEqualTo(132768);
builder.setEnableBlobFiles(false)
.setMinBlobSize(4096)
.setBlobCompressionType(CompressionType.LZ4_COMPRESSION)
.setEnableBlobGarbageCollection(false)
.setBlobGarbageCollectionAgeCutoff(0.91)
.setBlobFileSize(2048);
assertThat(builder.enableBlobFiles()).isEqualTo(false);
assertThat(builder.minBlobSize()).isEqualTo(4096);
assertThat(builder.blobCompressionType()).isEqualTo(CompressionType.LZ4_COMPRESSION);
assertThat(builder.enableBlobGarbageCollection()).isEqualTo(false);
assertThat(builder.blobGarbageCollectionAgeCutoff()).isEqualTo(0.91);
assertThat(builder.blobFileSize()).isEqualTo(2048);
final MutableColumnFamilyOptions options = builder.build();
assertThat(options.getKeys())
.isEqualTo(new String[] {"enable_blob_files", "min_blob_size", "blob_compression_type",
"enable_blob_garbage_collection", "blob_garbage_collection_age_cutoff",
"blob_file_size"});
assertThat(options.getValues())
.isEqualTo(new String[] {"false", "4096", "LZ4_COMPRESSION", "false", "0.91", "2048"});
}
/**
* Configure the default column family with BLOBs.
* Confirm that BLOBs are generated when appropriately-sized writes are flushed.
*
* @throws RocksDBException if a db access throws an exception
*/
@Test
public void testBlobWriteAboveThreshold() throws RocksDBException {
try (final Options options = new Options()
.setCreateIfMissing(true)
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(true);
final RocksDB db = RocksDB.open(options, dbFolder.getRoot().getAbsolutePath())) {
db.put(small_key("default"), small_value("default"));
db.flush(new FlushOptions().setWaitForFlush(true));
// check there are no blobs in the database
assertThat(countDBFiles(".sst")).isEqualTo(1);
assertThat(countDBFiles(".blob")).isEqualTo(0);
db.put(large_key("default"), large_value("default"));
db.flush(new FlushOptions().setWaitForFlush(true));
// wrote and flushed a value larger than the blobbing threshold
// check there is a single blob in the database
assertThat(countDBFiles(".sst")).isEqualTo(2);
assertThat(countDBFiles(".blob")).isEqualTo(1);
assertThat(db.get(small_key("default"))).isEqualTo(small_value("default"));
assertThat(db.get(large_key("default"))).isEqualTo(large_value("default"));
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder fetchOptions =
db.getOptions(null);
assertThat(fetchOptions.minBlobSize()).isEqualTo(minBlobSize);
assertThat(fetchOptions.enableBlobFiles()).isEqualTo(true);
assertThat(fetchOptions.writeBufferSize()).isEqualTo(64 << 20);
}
}
/**
* Configure 2 column families respectively with and without BLOBs.
* Confirm that BLOB files are generated (once the DB is flushed) only for the appropriate column
* family.
*
* @throws RocksDBException if a db access throws an exception
*/
@Test
public void testBlobWriteAboveThresholdCF() throws RocksDBException {
final ColumnFamilyOptions columnFamilyOptions0 = new ColumnFamilyOptions();
final ColumnFamilyDescriptor columnFamilyDescriptor0 =
new ColumnFamilyDescriptor("default".getBytes(UTF_8), columnFamilyOptions0);
List<ColumnFamilyDescriptor> columnFamilyDescriptors =
Collections.singletonList(columnFamilyDescriptor0);
List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
try (final DBOptions dbOptions = new DBOptions().setCreateIfMissing(true);
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
db.put(columnFamilyHandles.get(0), small_key("default"), small_value("default"));
db.flush(new FlushOptions().setWaitForFlush(true));
assertThat(countDBFiles(".blob")).isEqualTo(0);
try (final ColumnFamilyOptions columnFamilyOptions1 =
new ColumnFamilyOptions().setMinBlobSize(minBlobSize).setEnableBlobFiles(true);
final ColumnFamilyOptions columnFamilyOptions2 =
new ColumnFamilyOptions().setMinBlobSize(minBlobSize).setEnableBlobFiles(false)) {
final ColumnFamilyDescriptor columnFamilyDescriptor1 =
new ColumnFamilyDescriptor("column_family_1".getBytes(UTF_8), columnFamilyOptions1);
final ColumnFamilyDescriptor columnFamilyDescriptor2 =
new ColumnFamilyDescriptor("column_family_2".getBytes(UTF_8), columnFamilyOptions2);
// Create the first column family with blob options
db.createColumnFamily(columnFamilyDescriptor1);
// Create the second column family with non-blob options
db.createColumnFamily(columnFamilyDescriptor2);
}
}
// Now re-open after auto-close - at this point the CF options we use are recognized.
try (final ColumnFamilyOptions columnFamilyOptions1 =
new ColumnFamilyOptions().setMinBlobSize(minBlobSize).setEnableBlobFiles(true);
final ColumnFamilyOptions columnFamilyOptions2 =
new ColumnFamilyOptions().setMinBlobSize(minBlobSize).setEnableBlobFiles(false)) {
assertThat(columnFamilyOptions1.enableBlobFiles()).isEqualTo(true);
assertThat(columnFamilyOptions1.minBlobSize()).isEqualTo(minBlobSize);
assertThat(columnFamilyOptions2.enableBlobFiles()).isEqualTo(false);
assertThat(columnFamilyOptions1.minBlobSize()).isEqualTo(minBlobSize);
final ColumnFamilyDescriptor columnFamilyDescriptor1 =
new ColumnFamilyDescriptor("column_family_1".getBytes(UTF_8), columnFamilyOptions1);
final ColumnFamilyDescriptor columnFamilyDescriptor2 =
new ColumnFamilyDescriptor("column_family_2".getBytes(UTF_8), columnFamilyOptions2);
columnFamilyDescriptors = new ArrayList<>();
columnFamilyDescriptors.add(columnFamilyDescriptor0);
columnFamilyDescriptors.add(columnFamilyDescriptor1);
columnFamilyDescriptors.add(columnFamilyDescriptor2);
columnFamilyHandles = new ArrayList<>();
assertThat(columnFamilyDescriptor1.getOptions().enableBlobFiles()).isEqualTo(true);
assertThat(columnFamilyDescriptor2.getOptions().enableBlobFiles()).isEqualTo(false);
try (final DBOptions dbOptions = new DBOptions();
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder1 =
db.getOptions(columnFamilyHandles.get(1));
assertThat(builder1.enableBlobFiles()).isEqualTo(true);
assertThat(builder1.minBlobSize()).isEqualTo(minBlobSize);
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder2 =
db.getOptions(columnFamilyHandles.get(2));
assertThat(builder2.enableBlobFiles()).isEqualTo(false);
assertThat(builder2.minBlobSize()).isEqualTo(minBlobSize);
db.put(columnFamilyHandles.get(1), large_key("column_family_1_k2"),
large_value("column_family_1_k2"));
db.flush(new FlushOptions().setWaitForFlush(true), columnFamilyHandles.get(1));
assertThat(countDBFiles(".blob")).isEqualTo(1);
db.put(columnFamilyHandles.get(2), large_key("column_family_2_k2"),
large_value("column_family_2_k2"));
db.flush(new FlushOptions().setWaitForFlush(true), columnFamilyHandles.get(2));
assertThat(countDBFiles(".blob")).isEqualTo(1);
}
}
}
}

@@ -59,23 +59,23 @@ public class MutableColumnFamilyOptionsTest {
@Test
public void mutableColumnFamilyOptions_toString() {
final String str = MutableColumnFamilyOptions.builder()
.setWriteBufferSize(10)
.setInplaceUpdateNumLocks(5)
.setDisableAutoCompactions(true)
.setParanoidFileChecks(true)
.setMaxBytesForLevelMultiplierAdditional(new int[] {2, 3, 5, 7, 11, 13})
.build()
.toString();
assertThat(str).isEqualTo("write_buffer_size=10;inplace_update_num_locks=5;"
+ "disable_auto_compactions=true;paranoid_file_checks=true;"
+ "max_bytes_for_level_multiplier_additional=2:3:5:7:11:13");
}
@Test
public void mutableColumnFamilyOptions_parse() {
final String str = "write_buffer_size=10;inplace_update_num_locks=5;"
+ "disable_auto_compactions=true;paranoid_file_checks=true;"
+ "max_bytes_for_level_multiplier_additional=2:{3}:{5}:{7}:{11}:{13}";
final MutableColumnFamilyOptionsBuilder builder =
MutableColumnFamilyOptions.parse(str);
@@ -84,5 +84,78 @@ public class MutableColumnFamilyOptionsTest {
assertThat(builder.inplaceUpdateNumLocks()).isEqualTo(5);
assertThat(builder.disableAutoCompactions()).isEqualTo(true);
assertThat(builder.paranoidFileChecks()).isEqualTo(true);
assertThat(builder.maxBytesForLevelMultiplierAdditional())
.isEqualTo(new int[] {2, 3, 5, 7, 11, 13});
}
/**
* Extended parsing test covering the full range of option strings which C++ may return.
* We have canned a set of options returned by {@link RocksDB#getOptions}.
*/
@Test
public void mutableColumnFamilyOptions_parse_getOptions_output() {
final String optionsString =
"bottommost_compression=kDisableCompressionOption; sample_for_compression=0; "
+ "blob_garbage_collection_age_cutoff=0.250000; arena_block_size=1048576; enable_blob_garbage_collection=false; "
+ "level0_stop_writes_trigger=36; min_blob_size=65536; "
+ "compaction_options_universal={allow_trivial_move=false;stop_style=kCompactionStopStyleTotalSize;min_merge_width=2;"
+ "compression_size_percent=-1;max_size_amplification_percent=200;max_merge_width=4294967295;size_ratio=1;}; "
+ "target_file_size_base=67108864; max_bytes_for_level_base=268435456; memtable_whole_key_filtering=false; "
+ "soft_pending_compaction_bytes_limit=68719476736; blob_compression_type=kNoCompression; max_write_buffer_number=2; "
+ "ttl=2592000; compaction_options_fifo={allow_compaction=false;age_for_warm=0;max_table_files_size=1073741824;}; "
+ "check_flush_compaction_key_order=true; max_successive_merges=0; inplace_update_num_locks=10000; "
+ "bottommost_compression_opts={enabled=false;parallel_threads=1;zstd_max_train_bytes=0;max_dict_bytes=0;"
+ "strategy=0;max_dict_buffer_bytes=0;level=32767;window_bits=-14;}; "
+ "target_file_size_multiplier=1; max_bytes_for_level_multiplier_additional=5:{7}:{9}:{11}:{13}:{15}:{17}; "
+ "enable_blob_files=true; level0_slowdown_writes_trigger=20; compression=kLZ4HCCompression; level0_file_num_compaction_trigger=4; "
+ "blob_file_size=268435456; prefix_extractor=nullptr; max_bytes_for_level_multiplier=10.000000; write_buffer_size=67108864; "
+ "disable_auto_compactions=false; max_compaction_bytes=1677721600; memtable_huge_page_size=0; "
+ "compression_opts={enabled=false;parallel_threads=1;zstd_max_train_bytes=0;max_dict_bytes=0;strategy=0;max_dict_buffer_bytes=0;"
+ "level=32767;window_bits=-14;}; "
+ "hard_pending_compaction_bytes_limit=274877906944; periodic_compaction_seconds=0; paranoid_file_checks=true; "
+ "memtable_prefix_bloom_size_ratio=7.500000; max_sequential_skip_in_iterations=8; report_bg_io_stats=true; "
+ "compaction_pri=kMinOverlappingRatio; compaction_style=kCompactionStyleLevel; memtable_factory=SkipListFactory; "
+ "comparator=leveldb.BytewiseComparator; bloom_locality=0; compaction_filter_factory=nullptr; "
+ "min_write_buffer_number_to_merge=1; max_write_buffer_number_to_maintain=0; compaction_filter=nullptr; merge_operator=nullptr; "
+ "num_levels=7; optimize_filters_for_hits=false; force_consistency_checks=true; table_factory=BlockBasedTable; "
+ "max_write_buffer_size_to_maintain=0; memtable_insert_with_hint_prefix_extractor=nullptr; level_compaction_dynamic_level_bytes=false; "
+ "inplace_update_support=false;";
MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder cf =
MutableColumnFamilyOptions.parse(optionsString, true);
// Check the values from the parsed string which are column family options
assertThat(cf.blobGarbageCollectionAgeCutoff()).isEqualTo(0.25);
assertThat(cf.arenaBlockSize()).isEqualTo(1048576);
assertThat(cf.enableBlobGarbageCollection()).isEqualTo(false);
assertThat(cf.level0StopWritesTrigger()).isEqualTo(36);
assertThat(cf.minBlobSize()).isEqualTo(65536);
assertThat(cf.targetFileSizeBase()).isEqualTo(67108864);
assertThat(cf.maxBytesForLevelBase()).isEqualTo(268435456);
assertThat(cf.softPendingCompactionBytesLimit()).isEqualTo(68719476736L);
assertThat(cf.blobCompressionType()).isEqualTo(CompressionType.NO_COMPRESSION);
assertThat(cf.maxWriteBufferNumber()).isEqualTo(2);
assertThat(cf.ttl()).isEqualTo(2592000);
assertThat(cf.maxSuccessiveMerges()).isEqualTo(0);
assertThat(cf.inplaceUpdateNumLocks()).isEqualTo(10000);
assertThat(cf.targetFileSizeMultiplier()).isEqualTo(1);
assertThat(cf.maxBytesForLevelMultiplierAdditional())
.isEqualTo(new int[] {5, 7, 9, 11, 13, 15, 17});
assertThat(cf.enableBlobFiles()).isEqualTo(true);
assertThat(cf.level0SlowdownWritesTrigger()).isEqualTo(20);
assertThat(cf.compressionType()).isEqualTo(CompressionType.LZ4HC_COMPRESSION);
assertThat(cf.level0FileNumCompactionTrigger()).isEqualTo(4);
assertThat(cf.blobFileSize()).isEqualTo(268435456);
assertThat(cf.maxBytesForLevelMultiplier()).isEqualTo(10.0);
assertThat(cf.writeBufferSize()).isEqualTo(67108864);
assertThat(cf.disableAutoCompactions()).isEqualTo(false);
assertThat(cf.maxCompactionBytes()).isEqualTo(1677721600);
assertThat(cf.memtableHugePageSize()).isEqualTo(0);
assertThat(cf.hardPendingCompactionBytesLimit()).isEqualTo(274877906944L);
assertThat(cf.periodicCompactionSeconds()).isEqualTo(0);
assertThat(cf.paranoidFileChecks()).isEqualTo(true);
assertThat(cf.memtablePrefixBloomSizeRatio()).isEqualTo(7.5);
assertThat(cf.maxSequentialSkipInIterations()).isEqualTo(8);
assertThat(cf.reportBgIoStats()).isEqualTo(true);
}
}

@@ -0,0 +1,385 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
package org.rocksdb;
import static java.nio.charset.StandardCharsets.UTF_8;
import static org.assertj.core.api.Assertions.assertThat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
public class MutableOptionsGetSetTest {
final int minBlobSize = 65536;
@Rule public TemporaryFolder dbFolder = new TemporaryFolder();
/**
* Validate the round-trip of blob options into and out of the C++ core of RocksDB,
* from CF options at CF creation time to {@link RocksDB#getOptions}.
* Uses two column families with different values for their options.
* NOTE that some constraints are applied to the options in the C++ core,
* e.g. on {@link ColumnFamilyOptions#setMemtablePrefixBloomSizeRatio}
*
* @throws RocksDBException if the database throws an exception
*/
@Test
public void testGetMutableBlobOptionsAfterCreate() throws RocksDBException {
final ColumnFamilyOptions columnFamilyOptions0 = new ColumnFamilyOptions();
final ColumnFamilyDescriptor columnFamilyDescriptor0 =
new ColumnFamilyDescriptor("default".getBytes(UTF_8), columnFamilyOptions0);
final List<ColumnFamilyDescriptor> columnFamilyDescriptors =
Collections.singletonList(columnFamilyDescriptor0);
final List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
try (final DBOptions dbOptions = new DBOptions().setCreateIfMissing(true);
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
try (final ColumnFamilyOptions columnFamilyOptions1 =
new ColumnFamilyOptions()
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(true)
.setArenaBlockSize(42)
.setMemtablePrefixBloomSizeRatio(0.17)
.setMemtableHugePageSize(3)
.setMaxSuccessiveMerges(4)
.setMaxWriteBufferNumber(12)
.setInplaceUpdateNumLocks(16)
.setDisableAutoCompactions(false)
.setSoftPendingCompactionBytesLimit(112)
.setHardPendingCompactionBytesLimit(280)
.setLevel0FileNumCompactionTrigger(200)
.setLevel0SlowdownWritesTrigger(312)
.setLevel0StopWritesTrigger(584)
.setMaxCompactionBytes(12)
.setTargetFileSizeBase(99)
.setTargetFileSizeMultiplier(112)
.setMaxSequentialSkipInIterations(50)
.setReportBgIoStats(true);
final ColumnFamilyOptions columnFamilyOptions2 =
new ColumnFamilyOptions()
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(false)
.setArenaBlockSize(42)
.setMemtablePrefixBloomSizeRatio(0.236)
.setMemtableHugePageSize(8)
.setMaxSuccessiveMerges(12)
.setMaxWriteBufferNumber(22)
.setInplaceUpdateNumLocks(160)
.setDisableAutoCompactions(true)
.setSoftPendingCompactionBytesLimit(1124)
.setHardPendingCompactionBytesLimit(2800)
.setLevel0FileNumCompactionTrigger(2000)
.setLevel0SlowdownWritesTrigger(5840)
.setLevel0StopWritesTrigger(31200)
.setMaxCompactionBytes(112)
.setTargetFileSizeBase(999)
.setTargetFileSizeMultiplier(1120)
.setMaxSequentialSkipInIterations(24)
.setReportBgIoStats(true)) {
final ColumnFamilyDescriptor columnFamilyDescriptor1 =
new ColumnFamilyDescriptor("column_family_1".getBytes(UTF_8), columnFamilyOptions1);
final ColumnFamilyDescriptor columnFamilyDescriptor2 =
new ColumnFamilyDescriptor("column_family_2".getBytes(UTF_8), columnFamilyOptions2);
// Create the two column families with their respective options
final ColumnFamilyHandle columnFamilyHandle1 =
db.createColumnFamily(columnFamilyDescriptor1);
final ColumnFamilyHandle columnFamilyHandle2 =
db.createColumnFamily(columnFamilyDescriptor2);
// Check that getOptions() brings back the creation options for CF1
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder1 =
db.getOptions(columnFamilyHandle1);
assertThat(builder1.enableBlobFiles()).isEqualTo(true);
assertThat(builder1.minBlobSize()).isEqualTo(minBlobSize);
assertThat(builder1.arenaBlockSize()).isEqualTo(42);
assertThat(builder1.memtableHugePageSize()).isEqualTo(3);
assertThat(builder1.memtablePrefixBloomSizeRatio()).isEqualTo(0.17);
assertThat(builder1.maxSuccessiveMerges()).isEqualTo(4);
assertThat(builder1.maxWriteBufferNumber()).isEqualTo(12);
assertThat(builder1.inplaceUpdateNumLocks()).isEqualTo(16);
assertThat(builder1.disableAutoCompactions()).isEqualTo(false);
assertThat(builder1.softPendingCompactionBytesLimit()).isEqualTo(112);
assertThat(builder1.hardPendingCompactionBytesLimit()).isEqualTo(280);
assertThat(builder1.level0FileNumCompactionTrigger()).isEqualTo(200);
assertThat(builder1.level0SlowdownWritesTrigger()).isEqualTo(312);
assertThat(builder1.level0StopWritesTrigger()).isEqualTo(584);
assertThat(builder1.maxCompactionBytes()).isEqualTo(12);
assertThat(builder1.targetFileSizeBase()).isEqualTo(99);
assertThat(builder1.targetFileSizeMultiplier()).isEqualTo(112);
assertThat(builder1.maxSequentialSkipInIterations()).isEqualTo(50);
assertThat(builder1.reportBgIoStats()).isEqualTo(true);
// Check that getOptions() brings back the creation options for CF2
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder2 =
db.getOptions(columnFamilyHandle2);
assertThat(builder2.enableBlobFiles()).isEqualTo(false);
assertThat(builder2.minBlobSize()).isEqualTo(minBlobSize);
assertThat(builder2.arenaBlockSize()).isEqualTo(42);
assertThat(builder2.memtableHugePageSize()).isEqualTo(8);
assertThat(builder2.memtablePrefixBloomSizeRatio()).isEqualTo(0.236);
assertThat(builder2.maxSuccessiveMerges()).isEqualTo(12);
assertThat(builder2.maxWriteBufferNumber()).isEqualTo(22);
assertThat(builder2.inplaceUpdateNumLocks()).isEqualTo(160);
assertThat(builder2.disableAutoCompactions()).isEqualTo(true);
assertThat(builder2.softPendingCompactionBytesLimit()).isEqualTo(1124);
assertThat(builder2.hardPendingCompactionBytesLimit()).isEqualTo(2800);
assertThat(builder2.level0FileNumCompactionTrigger()).isEqualTo(2000);
assertThat(builder2.level0SlowdownWritesTrigger()).isEqualTo(5840);
assertThat(builder2.level0StopWritesTrigger()).isEqualTo(31200);
assertThat(builder2.maxCompactionBytes()).isEqualTo(112);
assertThat(builder2.targetFileSizeBase()).isEqualTo(999);
assertThat(builder2.targetFileSizeMultiplier()).isEqualTo(1120);
assertThat(builder2.maxSequentialSkipInIterations()).isEqualTo(24);
assertThat(builder2.reportBgIoStats()).isEqualTo(true);
}
}
}
/**
* Validate the round-trip of blob options into and out of the C++ core of RocksDB,
* from {@link RocksDB#setOptions} to {@link RocksDB#getOptions}.
* Uses two column families with different values for their options.
* NOTE that some constraints are applied to the options in the C++ core,
* e.g. on {@link ColumnFamilyOptions#setMemtablePrefixBloomSizeRatio}
*
* @throws RocksDBException if a database access has an error
*/
@Test
public void testGetMutableBlobOptionsAfterSetCF() throws RocksDBException {
final ColumnFamilyOptions columnFamilyOptions0 = new ColumnFamilyOptions();
final ColumnFamilyDescriptor columnFamilyDescriptor0 =
new ColumnFamilyDescriptor("default".getBytes(UTF_8), columnFamilyOptions0);
final List<ColumnFamilyDescriptor> columnFamilyDescriptors =
Collections.singletonList(columnFamilyDescriptor0);
final List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
try (final DBOptions dbOptions = new DBOptions().setCreateIfMissing(true);
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
try (final ColumnFamilyOptions columnFamilyOptions1 = new ColumnFamilyOptions();
final ColumnFamilyOptions columnFamilyOptions2 = new ColumnFamilyOptions()) {
final ColumnFamilyDescriptor columnFamilyDescriptor1 =
new ColumnFamilyDescriptor("column_family_1".getBytes(UTF_8), columnFamilyOptions1);
final ColumnFamilyDescriptor columnFamilyDescriptor2 =
new ColumnFamilyDescriptor("column_family_2".getBytes(UTF_8), columnFamilyOptions2);
// Create the two column families
final ColumnFamilyHandle columnFamilyHandle1 =
db.createColumnFamily(columnFamilyDescriptor1);
final ColumnFamilyHandle columnFamilyHandle2 =
db.createColumnFamily(columnFamilyDescriptor2);
db.flush(new FlushOptions().setWaitForFlush(true));
final MutableColumnFamilyOptions
.MutableColumnFamilyOptionsBuilder mutableColumnFamilyOptions1 =
MutableColumnFamilyOptions.builder()
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(true)
.setArenaBlockSize(42)
.setMemtablePrefixBloomSizeRatio(0.17)
.setMemtableHugePageSize(3)
.setMaxSuccessiveMerges(4)
.setMaxWriteBufferNumber(12)
.setInplaceUpdateNumLocks(16)
.setDisableAutoCompactions(false)
.setSoftPendingCompactionBytesLimit(112)
.setHardPendingCompactionBytesLimit(280)
.setLevel0FileNumCompactionTrigger(200)
.setLevel0SlowdownWritesTrigger(312)
.setLevel0StopWritesTrigger(584)
.setMaxCompactionBytes(12)
.setTargetFileSizeBase(99)
.setTargetFileSizeMultiplier(112);
db.setOptions(columnFamilyHandle1, mutableColumnFamilyOptions1.build());
// Check that getOptions() brings back the options we set for CF1
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder1 =
db.getOptions(columnFamilyHandle1);
assertThat(builder1.enableBlobFiles()).isEqualTo(true);
assertThat(builder1.minBlobSize()).isEqualTo(minBlobSize);
assertThat(builder1.arenaBlockSize()).isEqualTo(42);
assertThat(builder1.memtableHugePageSize()).isEqualTo(3);
assertThat(builder1.memtablePrefixBloomSizeRatio()).isEqualTo(0.17);
assertThat(builder1.maxSuccessiveMerges()).isEqualTo(4);
assertThat(builder1.maxWriteBufferNumber()).isEqualTo(12);
assertThat(builder1.inplaceUpdateNumLocks()).isEqualTo(16);
assertThat(builder1.disableAutoCompactions()).isEqualTo(false);
assertThat(builder1.softPendingCompactionBytesLimit()).isEqualTo(112);
assertThat(builder1.hardPendingCompactionBytesLimit()).isEqualTo(280);
assertThat(builder1.level0FileNumCompactionTrigger()).isEqualTo(200);
assertThat(builder1.level0SlowdownWritesTrigger()).isEqualTo(312);
assertThat(builder1.level0StopWritesTrigger()).isEqualTo(584);
assertThat(builder1.maxCompactionBytes()).isEqualTo(12);
assertThat(builder1.targetFileSizeBase()).isEqualTo(99);
assertThat(builder1.targetFileSizeMultiplier()).isEqualTo(112);
final MutableColumnFamilyOptions
.MutableColumnFamilyOptionsBuilder mutableColumnFamilyOptions2 =
MutableColumnFamilyOptions.builder()
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(false)
.setArenaBlockSize(42)
.setMemtablePrefixBloomSizeRatio(0.236)
.setMemtableHugePageSize(8)
.setMaxSuccessiveMerges(12)
.setMaxWriteBufferNumber(22)
.setInplaceUpdateNumLocks(160)
.setDisableAutoCompactions(true)
.setSoftPendingCompactionBytesLimit(1124)
.setHardPendingCompactionBytesLimit(2800)
.setLevel0FileNumCompactionTrigger(2000)
.setLevel0SlowdownWritesTrigger(5840)
.setLevel0StopWritesTrigger(31200)
.setMaxCompactionBytes(112)
.setTargetFileSizeBase(999)
.setTargetFileSizeMultiplier(1120);
db.setOptions(columnFamilyHandle2, mutableColumnFamilyOptions2.build());
// Check that getOptions() brings back the options we set for CF2
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder2 =
db.getOptions(columnFamilyHandle2);
assertThat(builder2.enableBlobFiles()).isEqualTo(false);
assertThat(builder2.minBlobSize()).isEqualTo(minBlobSize);
assertThat(builder2.arenaBlockSize()).isEqualTo(42);
assertThat(builder2.memtableHugePageSize()).isEqualTo(8);
assertThat(builder2.memtablePrefixBloomSizeRatio()).isEqualTo(0.236);
assertThat(builder2.maxSuccessiveMerges()).isEqualTo(12);
assertThat(builder2.maxWriteBufferNumber()).isEqualTo(22);
assertThat(builder2.inplaceUpdateNumLocks()).isEqualTo(160);
assertThat(builder2.disableAutoCompactions()).isEqualTo(true);
assertThat(builder2.softPendingCompactionBytesLimit()).isEqualTo(1124);
assertThat(builder2.hardPendingCompactionBytesLimit()).isEqualTo(2800);
assertThat(builder2.level0FileNumCompactionTrigger()).isEqualTo(2000);
assertThat(builder2.level0SlowdownWritesTrigger()).isEqualTo(5840);
assertThat(builder2.level0StopWritesTrigger()).isEqualTo(31200);
assertThat(builder2.maxCompactionBytes()).isEqualTo(112);
assertThat(builder2.targetFileSizeBase()).isEqualTo(999);
assertThat(builder2.targetFileSizeMultiplier()).isEqualTo(1120);
}
}
}
/**
* Validate the round-trip of blob options into and out of the C++ core of RocksDB,
* from {@link RocksDB#setOptions} to {@link RocksDB#getOptions},
* using the default column family.
* NOTE that some constraints are applied to the options in the C++ core,
* e.g. on {@link ColumnFamilyOptions#setMemtablePrefixBloomSizeRatio}
*
* @throws RocksDBException if a database access has an error
*/
@Test
public void testGetMutableBlobOptionsAfterSet() throws RocksDBException {
final ColumnFamilyOptions columnFamilyOptions0 = new ColumnFamilyOptions();
final ColumnFamilyDescriptor columnFamilyDescriptor0 =
new ColumnFamilyDescriptor("default".getBytes(UTF_8), columnFamilyOptions0);
final List<ColumnFamilyDescriptor> columnFamilyDescriptors =
Collections.singletonList(columnFamilyDescriptor0);
final List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
try (final DBOptions dbOptions = new DBOptions().setCreateIfMissing(true);
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
final MutableColumnFamilyOptions
.MutableColumnFamilyOptionsBuilder mutableColumnFamilyOptions =
MutableColumnFamilyOptions.builder()
.setMinBlobSize(minBlobSize)
.setEnableBlobFiles(true)
.setArenaBlockSize(42)
.setMemtablePrefixBloomSizeRatio(0.17)
.setMemtableHugePageSize(3)
.setMaxSuccessiveMerges(4)
.setMaxWriteBufferNumber(12)
.setInplaceUpdateNumLocks(16)
.setDisableAutoCompactions(false)
.setSoftPendingCompactionBytesLimit(112)
.setHardPendingCompactionBytesLimit(280)
.setLevel0FileNumCompactionTrigger(200)
.setLevel0SlowdownWritesTrigger(312)
.setLevel0StopWritesTrigger(584)
.setMaxCompactionBytes(12)
.setTargetFileSizeBase(99)
.setTargetFileSizeMultiplier(112);
db.setOptions(mutableColumnFamilyOptions.build());
// Check that getOptions() brings back the options we set on the default column family
final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder1 = db.getOptions();
assertThat(builder1.enableBlobFiles()).isEqualTo(true);
assertThat(builder1.minBlobSize()).isEqualTo(minBlobSize);
assertThat(builder1.arenaBlockSize()).isEqualTo(42);
assertThat(builder1.memtableHugePageSize()).isEqualTo(3);
assertThat(builder1.memtablePrefixBloomSizeRatio()).isEqualTo(0.17);
assertThat(builder1.maxSuccessiveMerges()).isEqualTo(4);
assertThat(builder1.maxWriteBufferNumber()).isEqualTo(12);
assertThat(builder1.inplaceUpdateNumLocks()).isEqualTo(16);
assertThat(builder1.disableAutoCompactions()).isEqualTo(false);
assertThat(builder1.softPendingCompactionBytesLimit()).isEqualTo(112);
assertThat(builder1.hardPendingCompactionBytesLimit()).isEqualTo(280);
assertThat(builder1.level0FileNumCompactionTrigger()).isEqualTo(200);
assertThat(builder1.level0SlowdownWritesTrigger()).isEqualTo(312);
assertThat(builder1.level0StopWritesTrigger()).isEqualTo(584);
assertThat(builder1.maxCompactionBytes()).isEqualTo(12);
assertThat(builder1.targetFileSizeBase()).isEqualTo(99);
assertThat(builder1.targetFileSizeMultiplier()).isEqualTo(112);
}
}
@Test
public void testGetMutableDBOptionsAfterSet() throws RocksDBException {
final ColumnFamilyOptions columnFamilyOptions0 = new ColumnFamilyOptions();
final ColumnFamilyDescriptor columnFamilyDescriptor0 =
new ColumnFamilyDescriptor("default".getBytes(UTF_8), columnFamilyOptions0);
final List<ColumnFamilyDescriptor> columnFamilyDescriptors =
Collections.singletonList(columnFamilyDescriptor0);
final List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
try (final DBOptions dbOptions = new DBOptions().setCreateIfMissing(true);
final RocksDB db = RocksDB.open(dbOptions, dbFolder.getRoot().getAbsolutePath(),
columnFamilyDescriptors, columnFamilyHandles)) {
final MutableDBOptions.MutableDBOptionsBuilder mutableDBOptions =
MutableDBOptions.builder()
.setMaxBackgroundJobs(16)
.setAvoidFlushDuringShutdown(true)
.setWritableFileMaxBufferSize(2097152)
.setDelayedWriteRate(67108864)
.setMaxTotalWalSize(16777216)
.setDeleteObsoleteFilesPeriodMicros(86400000000L)
.setStatsDumpPeriodSec(1200)
.setStatsPersistPeriodSec(7200)
.setStatsHistoryBufferSize(6291456)
.setMaxOpenFiles(8)
.setBytesPerSync(4194304)
.setWalBytesPerSync(1048576)
.setStrictBytesPerSync(true)
.setCompactionReadaheadSize(1024);
db.setDBOptions(mutableDBOptions.build());
final MutableDBOptions.MutableDBOptionsBuilder getBuilder = db.getDBOptions();
assertThat(getBuilder.maxBackgroundJobs()).isEqualTo(16); // 4
assertThat(getBuilder.avoidFlushDuringShutdown()).isEqualTo(true); // false
assertThat(getBuilder.writableFileMaxBufferSize()).isEqualTo(2097152); // 1048576
assertThat(getBuilder.delayedWriteRate()).isEqualTo(67108864); // 16777216
assertThat(getBuilder.maxTotalWalSize()).isEqualTo(16777216);
assertThat(getBuilder.deleteObsoleteFilesPeriodMicros())
.isEqualTo(86400000000L); // 21600000000
assertThat(getBuilder.statsDumpPeriodSec()).isEqualTo(1200); // 600
assertThat(getBuilder.statsPersistPeriodSec()).isEqualTo(7200); // 600
assertThat(getBuilder.statsHistoryBufferSize()).isEqualTo(6291456); // 1048576
assertThat(getBuilder.maxOpenFiles()).isEqualTo(8); //-1
assertThat(getBuilder.bytesPerSync()).isEqualTo(4194304); // 1048576
assertThat(getBuilder.walBytesPerSync()).isEqualTo(1048576); // 0
assertThat(getBuilder.strictBytesPerSync()).isEqualTo(true); // false
assertThat(getBuilder.compactionReadaheadSize()).isEqualTo(1024); // 0
}
}
}

@@ -0,0 +1,79 @@
# How RocksDB Options and their Java Wrappers Work
Options in RocksDB come in many different flavours. This is an attempt at a taxonomy and explanation.
## RocksDB Options
Initially, I believe, RocksDB had only database options. I don't know if any of these were mutable. Column families came later. Read on to understand the terminology.
So to begin, one sets up a collection of options and starts/creates a database with these options. That's a useful way to think about it because, from a Java point of view (I didn't realise this initially and got very confused), despite making native calls to C++, the Java APIs are just manipulating a native C++ configuration object. This object is just a record of configuration; it must later be passed to the database (at create or open time) in order to apply the options.
### Database versus Column Family
The concept of the *column family* or `CF` is widespread within RocksDB. I think of it as a data namespace, but conveniently transactions can operate across these namespaces. The concept of a default column family exists, and when operations do not refer to a particular `CF`, it refers to the default.
We raise this w.r.t. options because many options, perhaps most that users encounter, are *column family options*. That is to say they apply individually to a particular column family, or to the default column family. Crucially also, many/most/all of these same options are exposed as *database options* and then apply as the default for column families which do not have the option set explicitly. Obviously some database options are naturally database-wide; they apply to the operation of the database and don't make any sense applied to a column family.
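As a concrete illustration of per-column-family namespaces and configuration, the sketch below opens a database with an extra column family and writes to both namespaces. This is a minimal sketch using the RocksDB Java API; the path and the `"users"` family name are invented for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.rocksdb.*;

public class CfSketch {
  // Open (or create) a database with a "users" column family, write the same
  // key into both namespaces, and return what the "users" namespace holds.
  static String readBack(final String path) throws RocksDBException {
    final List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("users".getBytes(), new ColumnFamilyOptions()));
    final List<ColumnFamilyHandle> handles = new ArrayList<>();
    try (final DBOptions options =
             new DBOptions().setCreateIfMissing(true).setCreateMissingColumnFamilies(true);
         final RocksDB db = RocksDB.open(options, path, descriptors, handles)) {
      db.put(handles.get(0), "k".getBytes(), "v".getBytes()); // default column family
      db.put(handles.get(1), "k".getBytes(), "v2".getBytes()); // "users" column family
      return new String(db.get(handles.get(1), "k".getBytes()));
    }
  }
}
```

Note that the `ColumnFamilyOptions` passed in each descriptor apply only to that column family, which is the per-`CF` option mechanism discussed above.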
### Mutability
There are 2 kinds of options:
- Mutable options
- Immutable options. We name these in contrast to the mutable ones, but they are usually referred to unqualified.

Mutable options are those which can be changed on a running `RocksDB` instance. Immutable options can only be configured prior to the start of a database. Of course, we can configure the immutable options at this time too; the entirety of options is a strict superset of the mutable options.
Mutable options (whether column-family specific or database-wide) are manipulated at runtime with builders, so we have `MutableDBOptions.MutableDBOptionsBuilder` and `MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder` which share tooling classes/hierarchy and maintain and manipulate the relevant options as a `(key,value)` map.
Mutable options are then passed using `setOptions()` and `setDBOptions()` methods on the live RocksDB, and then take effect immediately (depending on the semantics of the option) on the database.
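Concretely, a runtime change to mutable column-family options might look like the sketch below. The path and option values are invented; the `setOptions()` overload without a handle applies to the default column family:

```java
import org.rocksdb.*;

public class MutableOptionsSketch {
  // Build a (key,value) map of mutable column-family options and apply it to
  // the default column family of a running database; returns true on success.
  static boolean applyMutableOptions(final String path) throws RocksDBException {
    try (final Options options = new Options().setCreateIfMissing(true);
         final RocksDB db = RocksDB.open(options, path)) {
      final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder builder =
          MutableColumnFamilyOptions.builder()
              .setWriteBufferSize(8 * 1024 * 1024) // 8 MiB memtable
              .setDisableAutoCompactions(true);
      db.setOptions(builder.build()); // takes effect immediately on the live DB
      return true;
    }
  }
}
```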
### Advanced
There are 2 classes of options:
- Advanced options
- Non-advanced options
It's not clear to me what the conceptual distinction is between advanced and not. However, the Java code takes care to reflect it from the underlying C++.
This leads to 2 separate type hierarchies within column family options, one for each class of options. The classes are distinguished by where the options appear in their hierarchy.
```java
interface ColumnFamilyOptionsInterface<T extends ColumnFamilyOptionsInterface<T>>
extends AdvancedColumnFamilyOptionsInterface<T>
interface MutableColumnFamilyOptionsInterface<T extends MutableColumnFamilyOptionsInterface<T>>
extends AdvancedMutableColumnFamilyOptionsInterface<T>
```
And then there is ultimately a single concrete implementation class for CF options:
```java
class ColumnFamilyOptions extends RocksObject
implements ColumnFamilyOptionsInterface<ColumnFamilyOptions>,
MutableColumnFamilyOptionsInterface<ColumnFamilyOptions>
```
as there is a single concrete implementation class for DB options:
```java
class DBOptions extends RocksObject
implements DBOptionsInterface<DBOptions>,
MutableDBOptionsInterface<DBOptions>
```
Interestingly `DBOptionsInterface` doesn't extend `MutableDBOptionsInterface`, if only in order to disrupt our belief in consistent basic laws of the Universe.
## Startup/Creation Options
At startup/creation time, the single `Options` class combines all four roles, database-wide and column-family, mutable and immutable:
```java
class Options extends RocksObject
implements DBOptionsInterface<Options>,
MutableDBOptionsInterface<Options>,
ColumnFamilyOptionsInterface<Options>,
MutableColumnFamilyOptionsInterface<Options>
```
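So one `Options` object can carry settings from all four interfaces at creation time. The sketch below mixes one method from each role; the path and values are invented for the example:

```java
import org.rocksdb.*;

public class StartupOptionsSketch {
  // Open a database with a single Options object mixing all four interface
  // roles, then write and read back a value to show the database is live.
  static String openAndEcho(final String path) throws RocksDBException {
    try (final Options options =
             new Options()
                 .setCreateIfMissing(true) // DBOptionsInterface
                 .setMaxBackgroundJobs(4) // MutableDBOptionsInterface
                 .setCompressionType(CompressionType.NO_COMPRESSION) // ColumnFamilyOptionsInterface
                 .setWriteBufferSize(16 * 1024 * 1024); // MutableColumnFamilyOptionsInterface
         final RocksDB db = RocksDB.open(options, path)) {
      db.put("k".getBytes(), "v".getBytes());
      return new String(db.get("k".getBytes()));
    }
  }
}
```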
### Example - Blob Options
The `enable_blob_files` and `min_blob_size` options are per-column-family, and are mutable. The options also appear in the unqualified database options. So by initial configuration, we can set up a RocksDB database where for every `(key,value)` pair with a value of size at least `min_blob_size`, the value is written (indirected) to a blob file. Blobs may share a blob file, subject to the configuration values set. Later, using the `MutableColumnFamilyOptionsInterface` of the `ColumnFamilyOptions`, we can choose to turn this off (`enable_blob_files=false`), or alter the `min_blob_size`, for the default column family or any other column family. It seems to me that we cannot, though, mutate the column family options for all column families at once using the `setOptions()` mechanism, either for all existing column families or for all future column families; but maybe we can do the latter on a re-`open()`/`create()`.
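Putting this together, the blob options can be flipped on a live database and then read back through the `getOptions()` binding added in this change. A sketch, with invented path and threshold; the builder getters are assumed to mirror the setters:

```java
import org.rocksdb.*;

public class BlobOptionsSketch {
  // Mutate the blob options of the default column family at runtime, then read
  // them back via getOptions(); returns true if the round trip matches.
  static boolean blobRoundTrip(final String path) throws RocksDBException {
    try (final Options options = new Options().setCreateIfMissing(true);
         final RocksDB db = RocksDB.open(options, path)) {
      db.setOptions(MutableColumnFamilyOptions.builder()
                        .setEnableBlobFiles(true)
                        .setMinBlobSize(64 * 1024) // values >= 64 KiB go to blob files
                        .build());
      final MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder current =
          db.getOptions();
      return current.enableBlobFiles() && current.minBlobSize() == 64 * 1024;
    }
  }
}
```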