facebook / rocksdb

A library that provides an embeddable, persistent key-value store for fast storage.

Home Page: http://rocksdb.org


The value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec'

bpan2020 opened this issue · comments

I am running db_bench to do a 'readwhilewriting' benchmark on an SSD drive. The statistics output shows that the value of 'micros/op' is not equal to 1,000,000 divided by the value of 'ops/sec'.

Below is a snippet of the results.

Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 800 bytes each (400 bytes after compression)
Entries: 3300000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 2580642.7 MB (estimated)
FileSize: 1321792.6 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1

DB path: [/output/f2fs/nvme3n1_f2fs/eval]
readwhilewriting : 1023.937 micros/op 31248 ops/sec; 21.1 MB/s (1520593 of 1758999 found)

Expected behavior

micros/op = 1,000,000 / (ops/sec)

Actual behavior

1023.937 != 1,000,000 / 31248 (which is about 32.0)
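A quick back-of-the-envelope check (plain Python, using only the numbers from the report above) quantifies the mismatch: the reported micros/op is larger than the naive 1,000,000 / ops/sec figure by a factor of roughly 32.

```python
# Numbers taken from the readwhilewriting report above.
micros_per_op_reported = 1023.937
ops_per_sec = 31248

# Naive expectation: micros/op = 1,000,000 / ops/sec
naive_micros_per_op = 1_000_000 / ops_per_sec  # about 32.0

# The reported value is larger by a factor of about 32.
ratio = micros_per_op_reported / naive_micros_per_op
print(f"naive: {naive_micros_per_op:.3f} micros/op, ratio: {ratio:.1f}")
```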

Steps to reproduce the behavior

Below is the command used to run the benchmark:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=3300000000

Did you set a value via --threads ?
What version or commit of RocksDB are you using?

I can't reproduce this using a much smaller value for --num and RocksDB 7.8.3; there, 1,000,000 / 128350 = 7.791:
fillrandom : 7.791 micros/op 128350 ops/sec 7.791 seconds 1000000 operations; 100.4 MB/s

Also works fine using latest RocksDB as of ...
commit d8fb849 (HEAD -> main, origin/main, origin/HEAD)
Author: anand76 <anand1976@users.noreply.github.com>
Date: Fri Apr 19 19:13:31 2024 -0700

My command line:
./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=2 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=1000000 --db=/data/m/rx

I suppose you are using 32 threads.
'micros/op' is actually micros per op per thread.
31248 * 1023.937 / 1e6 is about 32, so I think you're using 32 threads. Right?

> I suppose you are using 32 threads. 'micros/op' is actually micros per op per thread. 31248 * 1023.937 / 1e6 is about 32, so I think you're using 32 threads. Right?

Yes, I used 32 threads.
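With that, the numbers reconcile: since db_bench reports micros/op per thread, the expected value is threads × 1,000,000 / ops/sec. A minimal sketch (plain Python, numbers from the report above):

```python
# Numbers taken from the readwhilewriting report above.
threads = 32
ops_per_sec = 31248  # aggregate throughput across all threads

# Wall-clock time per operation, across all threads combined.
wall_micros_per_op = 1_000_000 / ops_per_sec  # about 32.0

# db_bench reports micros/op per thread, so multiply by the thread count.
per_thread_micros_per_op = threads * wall_micros_per_op  # about 1024

# Matches the reported 1023.937 micros/op, up to rounding of ops/sec.
print(f"{per_thread_micros_per_op:.3f} micros/op per thread")
```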

> Did you set a value via --threads ? What version or commit of RocksDB are you using?
>
> I can't reproduce this using a much smaller value for --num and RocksDB 7.8.3 with 1M / 128350 = 7.791
> fillrandom : 7.791 micros/op 128350 ops/sec 7.791 seconds 1000000 operations; 100.4 MB/s
>
> Also works fine using latest RocksDB as of ... commit d8fb849 (HEAD -> main, origin/main, origin/HEAD) Author: anand76 anand1976@users.noreply.github.com Date: Fri Apr 19 19:13:31 2024 -0700
>
> My command line: ./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=2 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=fillrandom,stats --num=1000000 --db=/data/m/rx

Yes, I set it to 32 threads. The RocksDB version I used is v7.2.2.
Oh, sorry, I gave the wrong command earlier. Here is the correct one:

./db_bench --key_size=20 --value_size=800 --target_file_size_base=134217728 --write_buffer_size=2147483648 --max_bytes_for_level_base=4294967296 --max_bytes_for_level_multiplier=4 --max_background_jobs=8 --max_background_compactions=8 --use_direct_io_for_flush_and_compaction --stats_dump_period_sec=15 --delete_obsolete_files_period_micros=30000000 --statistics --benchmarks=readwhilewriting,stats --use_existing_db --histogram --threads=32 --num=3300000000 --duration=1800