Bug: Poor performance handling binary payload
SeanEClarke opened this issue
Sean Clarke commented
Describe the bug
I ran a test uploading a binary file in 32 KB blocks and the load performance was very poor (PostgreSQL, Cassandra, and ScyllaDB were all much faster).
~110 MB file uploaded in 32 KB blocks (3260 rows):

```
cargo run  203.47s user 0.81s system 78% cpu 4:21.21 total
```
Steps to reproduce
Simple structure, something like:
```rust
use serde::{Deserialize, Serialize};
use surrealdb::sql::Id;

#[derive(Debug, Serialize, Deserialize)]
struct Data {
    sequence: Id,
    name: String,
    payload: Vec<u8>,
}
```
and then load data into it:
```rust
use tokio::io::AsyncReadExt;

let mut file = tokio::fs::File::open("/tmp/some_data_100MB.bin").await?;
let mut data = [0u8; 0x8000]; // 32 KB block
let mut c = 0;
loop {
    let n = file.read(&mut data).await?;
    if n == 0 {
        break;
    }
    let created: Vec<Data> = db
        .create("data")
        .content(Data {
            sequence: Id::Number(c),
            name: "test".to_owned(),
            // only the bytes actually read; the final block may be short
            payload: data[..n].to_vec(),
        })
        .await?;
    c += 1;
}
```
Expected behaviour
A lot quicker; ScyllaDB completes the same load in a couple of seconds.
SurrealDB version
surreal 1.4.2 AMD64
Contact Details
No response
Is there an existing issue for this?
- [x] I have searched the existing issues
Code of Conduct
- [x] I agree to follow this project's Code of Conduct