3 Commits (8e7aa628138bf2ba3fcc26155beac0e0af99fd08)
Author | SHA1 | Message | Date
---|---|---|---
Levi Tamasi | 2cbb61eadb | Make clang-analyzer happy (#5821) | 5 years ago
Fosco Marotto | 6c2bf9e916 | Add copyright headers per FB open-source checkup tool. (#5199) | 6 years ago
Abhishek Madan | 8c78348c77 | Use only "local" range tombstones during Get (#4449) | 6 years ago

Make clang-analyzer happy (#5821)

Summary: clang-analyzer has uncovered a bunch of places where the code is relying on pointers being valid, and one case (in VectorIterator) where a moved-from object is being used:

    In file included from db/range_tombstone_fragmenter.cc:17:
    ./util/vector_iterator.h:23:18: warning: Method called on moved-from object 'keys' of type 'std::vector'
            current_(keys.size()) {
                     ^~~~~~~~~~
    1 warning generated.
    utilities/persistent_cache/block_cache_tier_file.cc:39:14: warning: Called C++ object pointer is null
      Status s = env->NewRandomAccessFile(filepath, file, opt);
                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:47:19: warning: Called C++ object pointer is null
      Status status = env_->GetFileSize(Path(), size);
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:290:14: warning: Called C++ object pointer is null
      Status s = env_->FileExists(Path());
                 ^~~~~~~~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:363:35: warning: Called C++ object pointer is null
        CacheWriteBuffer* const buf = alloc_->Allocate();
                                      ^~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:399:41: warning: Called C++ object pointer is null
      const uint64_t file_off = buf_doff_ * alloc_->BufferSize();
                                            ^~~~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:463:33: warning: Called C++ object pointer is null
      size_t start_idx = lba.off_ / alloc_->BufferSize();
                                    ^~~~~~~~~~~~~~~~~~~~
    utilities/persistent_cache/block_cache_tier_file.cc:515:5: warning: Called C++ object pointer is null
        alloc_->Deallocate(bufs_[i]);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
    7 warnings generated.
    ar: creating librocksdb_debug.a
    utilities/memory/memory_test.cc:68:25: warning: Called C++ object pointer is null
          cache_set->insert(db->GetDBOptions().row_cache.get());
                            ^~~~~~~~~~~~~~~~~~
    1 warning generated.

The patch fixes these by adding assertions and by explicitly passing in zero when initializing VectorIterator::current_ (which preserves the existing behavior).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5821

Test Plan: Ran make check and make analyze to make sure the warnings have disappeared.

Differential Revision: D17455949
Pulled By: ltamasi
fbshipit-source-id: 363619618ea649a0674287f9f3b3393e390571ee
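For illustration, here is a minimal sketch of the pattern behind the VectorIterator warning and of the fix the message outlines (initialize current_ to zero instead of calling size() on the moved-from parameter, and assert preconditions instead of assuming them). The class below is a stripped-down stand-in, not the actual util/vector_iterator.h code.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical, simplified stand-in for RocksDB's VectorIterator.
class VectorIterator {
 public:
  VectorIterator(std::vector<std::string> keys,
                 std::vector<std::string> values)
      // Before the fix, the last initializer was current_(keys.size()).
      // Members are initialized in declaration order, so by the time
      // current_ was initialized, `keys` had already been moved from --
      // exactly what clang-analyzer flags. Initializing with 0 preserves
      // the old behavior (a moved-from std::vector is empty in practice)
      // without touching the moved-from object.
      : keys_(std::move(keys)),
        values_(std::move(values)),
        current_(0) {
    // The null-pointer warnings are handled the same way the commit
    // message describes: assert the precondition explicitly, e.g.
    // assert(env_ != nullptr) before calling through env_.
    assert(keys_.size() == values_.size());
  }

 private:
  std::vector<std::string> keys_;
  std::vector<std::string> values_;
  std::size_t current_;
};
```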
Add copyright headers per FB open-source checkup tool. (#5199)

Summary: internal task: T35568575

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5199

Differential Revision: D14962794
Pulled By: gfosco
fbshipit-source-id: 93838ede6d0235eaecff90d200faed9a8515bbbe
Use only "local" range tombstones during Get (#4449)

Summary: Previously, range tombstones were accumulated from every level, which was necessary if a range tombstone in a higher level covered a key in a lower level. However, RangeDelAggregator::AddTombstones's complexity is based on the number of tombstones that are currently stored in it, which is wasteful in the Get case, where we only need to know the highest sequence number of range tombstones that cover the key from higher levels, and compute the highest covering sequence number at the current level. This change introduces this optimization, and removes the use of RangeDelAggregator from the Get path.

In the benchmark results, the following command was used to initialize the database:

```
./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8
```

...and the following command was used to measure read throughput:

```
./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32
```

The filluniquerandom command was only run once, and the resulting database was used to measure read performance before and after the PR. Both binaries were compiled with `DEBUG_LEVEL=0`.

Readrandom results before PR:

```
readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found)
```

Readrandom results after PR:

```
readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found)
```

So it's actually slower right now, but this PR paves the way for future optimizations (see #4493).

----

Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449

Differential Revision: D10370575
Pulled By: abhimadan
fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
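To make the "local" tombstone idea concrete, here is a rough, self-contained sketch of a Get that carries only the highest covering sequence number downward instead of accumulating every tombstone. The RangeTombstone/PointEntry/Level types and this Get signature are hypothetical simplifications for illustration, not RocksDB's actual data structures or API.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical, simplified LSM structures for illustration only.
struct RangeTombstone {
  std::string start, end;  // deletes keys in [start, end)
  uint64_t seq;            // sequence number of the deletion
};

struct PointEntry {
  std::string key;
  uint64_t seq;
  std::string value;
};

struct Level {
  std::vector<PointEntry> entries;         // point writes in this level
  std::vector<RangeTombstone> tombstones;  // range deletions in this level
};

// Searches levels from newest to oldest. Only the largest sequence number
// of a tombstone covering `key` is carried down from higher levels; the
// tombstones themselves are never accumulated across levels.
std::optional<std::string> Get(const std::vector<Level>& levels,
                               const std::string& key) {
  uint64_t max_covering_seq = 0;
  for (const Level& level : levels) {
    // Fold in only this level's ("local") tombstones that cover the key.
    for (const RangeTombstone& t : level.tombstones) {
      if (t.start <= key && key < t.end && t.seq > max_covering_seq) {
        max_covering_seq = t.seq;
      }
    }
    for (const PointEntry& e : level.entries) {
      if (e.key == key) {
        // The entry is visible only if it is newer than every covering
        // range tombstone seen so far; otherwise the key is deleted.
        if (e.seq > max_covering_seq) {
          return e.value;
        }
        return std::nullopt;
      }
    }
  }
  return std::nullopt;
}
```

Nothing in this loop grows with the total number of tombstones seen so far, which is the cost the summary attributes to RangeDelAggregator::AddTombstones on the Get path.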