Can't clear space with volume.deleteEmpty
dimm0 opened this issue
Describe the bug
The cluster is running out of space, and running the commands that should reclaim space from deleted files is not helping.
System Setup
- List the command line to start "weed master", "weed volume", "weed filer", "weed s3", "weed mount".
master -mdir=/data -ip=seaweed-master -ip.bind=0.0.0.0 -volumeSizeLimitMB=8000 -metrics.address=pushgateway:9091
volume -max=0 -mserver=seaweed-master:9333 -disk=hdd -dataCenter=ucmerced -rack=clu-fiona2.ucmerced.edu
filer -master seaweed-master:9333 -s3 -iam -concurrentUploadLimitMB=512
- OS version
Ubuntu server 20.04 LTS
- output of
weed version
version 30GB 3.64 b74e808 linux amd64
(was the same with 3.59)
- if using filer, show the content of
filer.toml
[filer.options]
[leveldb2]
enabled = false
[postgres2]
enabled = true
createTable = """
CREATE TABLE IF NOT EXISTS "%s" (
dirhash BIGINT,
name VARCHAR(65535),
directory VARCHAR(65535),
meta bytea,
PRIMARY KEY (dirhash, name)
);
"""
hostname = "filer-db"
port = 5432
username = "seaweed"
password = "..."
database = "seaweed"
sslmode = "disable"
schema = ""
connection_max_idle = 50
connection_max_open = 50
connection_max_lifetime_seconds = 0
# if insert/upsert failing, you can disable upsert or update query syntax to match your RDBMS syntax:
enableUpsert = true
upsertQuery = """INSERT INTO "%[1]s" (dirhash,name,directory,meta) VALUES($1,$2,$3,$4) ON CONFLICT (dirhash,name) DO UPDATE SET meta = EXCLUDED.meta WHERE "%[1]s".meta != EXCLUDED.meta"""
Expected behavior
Expecting the cluster space to be reclaimed when files are deleted. Currently users can't write to one device class, and the space does not appear to be freed.
I tried running volume.vacuum -garbageThreshold 0
followed by volume.deleteEmpty -force -quietFor 1m,
which deletes some volumes, but not the overfilled ones. (The nvme device class seems fine, but the hdd one is not.)
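For reference, the reclaim sequence described above can be run inside `weed shell` roughly as follows (a sketch based on this report's flags; recent SeaweedFS versions require taking the shell lock before running cluster-mutating commands):

```shell
# Inside `weed shell`, connected to the master.
lock                                     # needed before cluster-mutating commands
volume.vacuum -garbageThreshold 0        # compact volumes containing any garbage
volume.deleteEmpty -force -quietFor 1m   # remove volumes that are now empty
unlock
```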
The volume.list output shows that many volumes contain only deleted files:
volume id:700917 size:8408711672 file_count:2199 delete_count:2199 deleted_byte_count:8408569832 version:3 modified_at_second:1712115438
volume id:702205 size:8398713728 file_count:2274 delete_count:2274 deleted_byte_count:8398566843 version:3 modified_at_second:1712115447
volume id:700748 size:8415647784 file_count:2174 delete_count:2174 deleted_byte_count:8415507552 version:3 modified_at_second:1712115437
volume id:701896 size:8411818000 file_count:2212 delete_count:2212 deleted_byte_count:8411675247 version:3 modified_at_second:1712115445
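As a side note (not from the original thread), these numbers make it clear why any reasonable `-garbageThreshold` should trigger a vacuum: the garbage ratio of these volumes is effectively 1. A minimal sketch of the comparison, assuming the ratio is simply `deleted_byte_count / size` (SeaweedFS's exact internal formula may differ):

```python
# Sketch: estimate a volume's garbage ratio from the volume.list fields shown
# above. This is only meant to illustrate the comparison against the
# -garbageThreshold flag; it is not SeaweedFS's internal implementation.
def garbage_ratio(size: int, deleted_byte_count: int) -> float:
    if size == 0:
        return 0.0
    return deleted_byte_count / size

# Volume 700917 from the listing above:
ratio = garbage_ratio(8408711672, 8408569832)
print(f"garbage ratio: {ratio:.5f}")  # ~0.99998, far above a 0.01 threshold
```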
try volume.vacuum -garbageThreshold 0.01
I already tried different threshold values here; it doesn't help. And now the command returns immediately.
Could the reason volume.deleteEmpty
is not working be that "size" and "deleted_byte_count" don't match, while "file_count" and "delete_count" do match?
Not sure. Need to debug.
No progress yet?
No data to debug.
What data do you need? I attached the volume.list output in a file.
volume.vacuum -garbageThreshold 0.01
needs to work first.
You can run `weed -v=1 master` to see vacuum-related logs.
size:8408711672 file_count:2199 delete_count:2199 deleted_byte_count:8408569832
Doesn't this indicate that vacuuming worked well and the data is marked as deleted? Or it will be gone after the vacuum?
> Doesn't this indicate that vacuuming worked well and the data is marked as deleted? Or it will be gone after the vacuum?
After the vacuum, the volume's .dat file size should drop to 8 bytes for the volume to be considered empty.
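One way to check this directly on a volume server host (a sketch; `/data` below is a placeholder, substitute the volume server's actual `-dir`): an empty, vacuumed volume's `.dat` file should contain only the 8-byte superblock.

```shell
# List volume data files that have shrunk to the 8-byte superblock,
# i.e. volumes that are empty after vacuum. /data is a placeholder
# for the volume server's data directory.
find /data -name '*.dat' -size -9c -exec ls -l {} \;
```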
Hmm, after a week I no longer see that many deleted files. I'll reopen when I have more info.
Currently the vacuum runs silently when errors occur; error output to the console could be added.