sled doesn't clean blobs
Dimchikkk opened this issue · comments
Just sharing the findings... feel free to close the issue.
I was inserting/removing data from pkv using the sled backend, but the data folder kept growing in size, even when `pkv.clear` was called. I switched to the rocksdb backend and now the size of the data directory is adequate: it grows as new data is added and shrinks when data is removed from the db. With the sled backend it only ever grows.
Environment: Mac M1
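For context, switching backends like this is done at build time via crate features. A hedged `Cargo.toml` sketch; the feature name and version here are my assumptions from the bevy_pkv README, so double-check against the crate docs for your version:

```toml
# Sketch only: disable the default (sled) backend and enable rocksdb.
[dependencies]
bevy_pkv = { version = "0.8", default-features = false, features = ["rocksdb"] }
```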
Interesting. If you add 10 000 items, then clear, then add 10 000 other items, is the store twice as big as after the first 10 000 items, or the same size?
I guess it would probably make more sense to delete the backing store, then.
I am adding items through the UI, so 10 000 items is too many for quick testing... I'll test with 10 heavy items instead :)
- On init (empty store), the size of the bevy_pkv.sled folder is 8.0K.
- 10 Doc items added: 40M. Schema is the following: `docs: HashMap<id, Doc>`, `names: HashMap<id, String>`, `tags: HashMap<id, Vec<String>>`, `last_saved: id`
- `pkv.clear().unwrap()` called: size stays the same, 40M.
- 10 Doc items added: size is 80M.
- `pkv.clear().unwrap()` called: size stays the same, 80M.
- Re-run the app after clearing: 80M.
I made a similar test for the rocksdb backend:

- On init (empty store), the size of the bevy_rocksdb_pkv folder is 76K.
- 10 Doc items added: 20M.
- `pkv.clear().unwrap()` called: 3.3M.
- 10 Doc items added: size is 20M.
- `pkv.clear().unwrap()` called: 6.2M.
- Re-run the app after clearing: 192K.
Thanks for the detailed testing!
Hmm... maybe we should consider switching the default to rocksdb... though it would be really nice to have a well-maintained Rust-based alternative backend. Not sure if any exist at the moment, though.
Yep, I am also all for a Rust-based backend, as long as it works as expected :)