cockroachdb / pebble

RocksDB/LevelDB inspired key-value database in Go


panic: slice bounds out of range

Fynnss opened this issue

commented

When reading a very large KV (> 2GB), there is a panic.

pebble version: v0.0.0-20230928194634-aa077af62593

[screenshot of the panic stack trace]
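Not the exact workload, but roughly what seems to trigger it, as a rough sketch (the path, key name, and value size here are assumptions for illustration): write a single value a bit over 2GiB, flush it to an sstable, then read it back.

package main

import (
	"log"

	"github.com/cockroachdb/pebble"
)

func main() {
	// Illustrative repro sketch: one Set of a ~2.1GiB value, a Flush to force
	// an sstable, then a Get that reads the oversized data block back.
	db, err := pebble.Open("/tmp/pebble-large-value", &pebble.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	value := make([]byte, 2<<30+128<<20) // ~2.1GiB value
	if err := db.Set([]byte("big-key"), value, pebble.Sync); err != nil {
		log.Fatal(err)
	}
	if err := db.Flush(); err != nil {
		log.Fatal(err)
	}

	v, closer, err := db.Get([]byte("big-key"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes", len(v))
	closer.Close()
}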
commented

pebble/sstable/block.go

Lines 660 to 668 in 043c379

// Define f(-1) == false and f(n) == true.
// Invariant: f(index-1) == false, f(upper) == true.
upper := i.numRestarts
for index < upper {
	h := int32(uint(index+upper) >> 1) // avoid overflow when computing h
	// index ≤ h < upper
	offset := decodeRestart(i.data[i.restarts+4*h:])
	// For a restart point, there are 0 bytes shared with the previous key.
	// The varint encoding of 0 occupies 1 byte.

commented

@itsbilal @jbowens I added some logs, FYI:

fmt.Printf("h: %d, index: %d, upper:%d, restarts: %d, len(i.data): %d", h, index, upper, i.restarts, len(i.data))

Here is the printed result.

h: 0, index: 0, upper:1, restarts: -2126429174, len(i.data): 2168538130
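If I'm reading this right, it is an int32 overflow: i.restarts is an int32 byte offset (the snippet above adds it to the int32 h), and for a ~2.0GB block the real offset no longer fits. A quick arithmetic check against the logged numbers, assuming the usual block layout where the restart array sits at len(data) - 4 - 4*numRestarts:

package main

import "fmt"

func main() {
	// Recompute the restart-array offset from the values in the log above and
	// show what happens when it is narrowed to int32 (assumed block layout:
	// restart offset = len(data) - 4 - 4*numRestarts).
	blockLen := 2168538130 // len(i.data) from the log
	numRestarts := 1       // upper == 1, i.e. a single restart point
	restarts := blockLen - 4 - 4*numRestarts

	fmt.Println(restarts)        // 2168538122: the true offset
	fmt.Println(int32(restarts)) // -2126429174: exactly the logged i.restarts
	// With a negative i.restarts, i.data[i.restarts+4*h:] panics with
	// "slice bounds out of range".
}

That would also explain why this only shows up with a single huge value: a data block never splits a key-value pair, so one >2GiB value forces one >2GiB block regardless of the configured block size.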

Under what circumstances would a ~2GB value end up in Pebble like this? Below is the properties dump for the faulty SST:

size
  file                  2.0GB
  data                  2.0GB
    blocks              1
  index                 31B
    blocks              1
    top-level           0B
  filter                69B
  raw-key               39B
  raw-value             2.0GB
  pinned-key            0
  pinned-val            0
  point-del-key-size    0
  point-del-value-size  0
records                 2
  set                   2
  delete                0
  delete-sized          0
  range-delete          0
  range-key-set         0
  range-key-unset       0
  range-key-delete      0
  merge                 0
  pinned                0
index
  key value
comparer                leveldb.BytewiseComparator
merger                  pebble.concatenate
filter                  rocksdb.BuiltinBloomFilter
compression             Snappy
options                 window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0;
user properties
  collectors            []
  rocksdb.block.based.table.prefix.filtering     0
  rocksdb.block.based.table.whole.key.filtering  1
  rocksdb.prefix.extractor.name                  nullptr