Incorrect disk size
0323pin opened this issue · comments
Hi, I assume you're referring to /dev/wd0a - seems off by a factor of 10?
Can you please post the entries you consider incorrect from /proc/mounts so I can investigate?
Yes, /dev/wd0a. The correct output from df is below in the screenshot.
2022-01-23 21:21 > cat /proc/mounts | textview
READER
1 /dev/wd0a / ffs rw,noatime 0 0
2 tmpfs /tmp tmpfs rw 0 0
3 kernfs /kern kernfs rw 0 0
4 ptyfs /dev/pts ptyfs rw 0 0
5 procfs /proc proc rw 0 0
6 tmpfs /var/shm tmpfs rw 0 0
Can you please retest with the latest main? - I feel the i in the units is now overkill, let me know what you think.
Yes, I can do it. I'll get back to you in an hour or so.
Ok, I've added the --debug command line flag so we can see what dusage computes.
Below are my run results; they look ok to me, but please also check. Can you then please run dusage --debug with the latest main?
~/git/dusage main » df mihai@galos 24/01/22 08:11:07
Filesystem 1K-blocks Used Available Use% Mounted on
udev 5520284 0 5520284 0% /dev
tmpfs 1114368 3104 1111264 1% /run
/dev/mapper/sdb5_crypt 474732584 103217684 347330120 23% /
tmpfs 5571836 27816 5544020 1% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 5571836 0 5571836 0% /sys/fs/cgroup
/dev/sdb1 4855100 248064 4340696 6% /boot
tmpfs 1114364 32 1114332 1% /run/user/1000
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~/git/dusage main » cargo run -- --debug mihai@galos 24/01/22 08:11:17
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/dusage --debug`
sysfs blocks: 4096 size: 0 available: 0
proc blocks: 4096 size: 0 available: 0
udev blocks: 4096 size: 5652770816 available: 5652770816
devpts blocks: 4096 size: 0 available: 0
tmpfs blocks: 4096 size: 1141112832 available: 1137934336
/dev/mapper/sdb5_crypt blocks: 4096 size: 486126166016 available: 355666042880
securityfs blocks: 4096 size: 0 available: 0
tmpfs blocks: 4096 size: 5705560064 available: 5677076480
tmpfs blocks: 4096 size: 5242880 available: 5238784
tmpfs blocks: 4096 size: 5705560064 available: 5705560064
cgroup2 blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
pstore blocks: 4096 size: 0 available: 0
none blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
cgroup blocks: 4096 size: 0 available: 0
systemd-1 blocks: 4096 size: 0 available: 0
hugetlbfs blocks: 2097152 size: 0 available: 0
mqueue blocks: 4096 size: 0 available: 0
debugfs blocks: 4096 size: 0 available: 0
tracefs blocks: 4096 size: 0 available: 0
fusectl blocks: 4096 size: 0 available: 0
configfs blocks: 4096 size: 0 available: 0
/dev/sdb1 blocks: 4096 size: 4971622400 available: 4444872704
binfmt_misc blocks: 4096 size: 0 available: 0
tmpfs blocks: 4096 size: 1141108736 available: 1141075968
gvfsd-fuse blocks: 4096 size: 0 available: 0
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/sdb1 4.6G 502.3M 4.1G 11% ■■■■■■■■■■■■■■■■■■■■ /boot
/dev/mapper/sdb5_crypt 452.7G 121.5G 331.2G 27% ■■■■■■■■■■■■■■■■■■■■ /
udev 5.3G 0 5.3G 0% ■■■■■■■■■■■■■■■■■■■■ /dev
tmpfs 1.1G 3.0M 1.1G 0% ■■■■■■■■■■■■■■■■■■■■ /run
tmpfs 5.3G 27.2M 5.3G 0% ■■■■■■■■■■■■■■■■■■■■ /dev/shm
tmpfs 5.0M 4.0K 5.0M 0% ■■■■■■■■■■■■■■■■■■■■ /run/lock
tmpfs 5.3G 0 5.3G 0% ■■■■■■■■■■■■■■■■■■■■ /sys/fs/cgroup
tmpfs 1.1G 32.0K 1.1G 0% ■■■■■■■■■■■■■■■■■■■■ /run/user/1000
Not sure what is going on. Also, your df probably defaults to df -h since you have human-readable units and not bytes?
I'm using i3wm, terminator, zsh.
Any chance that you could provide a Docker image for me to better test? :)
Limited availability during the day, might not be quick to respond.
Can you then please run dusage --debug with the latest main?
Sure, but did you actually push the changes? I built from 2a79007
$ dusage --debug
error: Found argument '--debug' which wasn't expected, or isn't valid in this context
USAGE:
dusage [FLAGS]
For more information try --help
your df probably defaults to df -h since you have human-readable units and not bytes?
Sorry about that, I use nushell and df is set to ^df -h | detect columns | drop column.
Just for info, I'm using leftwm, xterm and nu.
# Switching off nushell
$ df
Filesystem 512-blocks Used Avail %Cap Mounted on
/dev/wd0a 53674680 12654120 38336828 24% /
tmpfs 4944440 40 4944400 0% /tmp
kernfs 2 2 0 100% /kern
ptyfs 2 2 0 100% /dev/pts
procfs 8 8 0 100% /proc
tmpfs 4944440 18760 4925680 0% /var/shm
$ df -h
Filesystem Size Used Avail %Cap Mounted on
/dev/wd0a 26G 6.0G 18G 24% /
tmpfs 2.4G 20K 2.4G 0% /tmp
kernfs 1.0K 1.0K 0B 100% /kern
ptyfs 1.0K 1.0K 0B 100% /dev/pts
procfs 4.0K 4.0K 0B 100% /proc
tmpfs 2.4G 9.2M 2.3G 0% /var/shm
$ dusage
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/wd0a 204.8G 58.5G 146.2G 29% ■■■■■■■■■■■■■■■■■■■■ /
kernfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /kern
procfs 4.0K 4.0K 0 100% ■■■■■■■■■■■■■■■■■■■■ /proc
ptyfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /dev/pts
tmpfs 2.4G 20.0K 2.4G 0% ■■■■■■■■■■■■■■■■■■■■ /tmp
tmpfs 2.4G 9.2M 2.3G 0% ■■■■■■■■■■■■■■■■■■■■ /var/shm
Limited availability during the day, might not be quick to respond.
No worries, I'm also at work :)
Any chance that you could provide a Docker image for me to better test? :)
I can help you set up a VM if you like, but I can test anything you'd like on bare metal; it takes less than 2 min to build the package.
Please try to build again, it's early and I forgot to push.
Concerning the alignment, I cannot reproduce. gnome-terminal with zsh and bash:
~/git/dusage main » cargo run -- mihai@galos 24/01/22 09:06:43
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/dusage`
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/sdb1 4.6G 502.3M 4.1G 11% ■■■■■■■■■■■■■■■■■■■■ /boot
/dev/mapper/sdb5_crypt 452.7G 121.5G 331.3G 27% ■■■■■■■■■■■■■■■■■■■■ /
..
---------------------------------------------------------------------------------------------------------
~/git/dusage main » bash mihai@galos 24/01/22 09:06:47
mihai@galos:~/git/dusage$ cargo run --
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/dusage`
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/sdb1 4.6G 502.3M 4.1G 11% ■■■■■■■■■■■■■■■■■■■■ /boot
/dev/mapper/sdb5_crypt 452.7G 121.5G 331.3G 27% ■■■■■■■■■■■■■■■■■■■■ /
..
terminator with zsh (see above), and with bash:
~/git/dusage main » bash mihai@galos 24/01/22 08:52:08
mihai@galos:~/git/dusage$ cargo run --
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/dusage`
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/sdb1 4.6G 502.3M 4.1G 11% ■■■■■■■■■■■■■■■■■■■■ /boot
/dev/mapper/sdb5_crypt 452.7G 121.3G 331.4G 27% ■■■■■■■■■■■■■■■■■■■■ /
...
Same result in terminator with sh in the rust:alpine3.14 Docker.
/home/pin()
2022-01-24 10:09 > dusage -V
dusage 0.1.1 :: https://github.com/mihaigalos/dusage/releases/tag/0.1.1
/home/pin()
2022-01-24 10:09 > dusage --debug
/dev/wd0a blocks: 16384 size: 219851489280 available: 156925853696
tmpfs blocks: 4096 size: 2531553280 available: 2531545088
kernfs blocks: 512 size: 1024 available: 0
ptyfs blocks: 512 size: 1024 available: 0
procfs blocks: 4096 size: 4096 available: 0
tmpfs blocks: 4096 size: 2531553280 available: 2521792512
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/wd0a 204.8G 58.6G 146.1G 29% ■■■■■■■■■■■■■■■■■■■■ /
kernfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /kern
procfs 4.0K 4.0K 0 100% ■■■■■■■■■■■■■■■■■■■■ /proc
ptyfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /dev/pts
tmpfs 2.4G 8.0K 2.4G 0% ■■■■■■■■■■■■■■■■■■■■ /tmp
tmpfs 2.4G 9.3M 2.3G 0% ■■■■■■■■■■■■■■■■■■■■ /var/shm
/home/pin()
2022-01-24 10:09 > ksh
pin@mybox $ df
Filesystem 512-blocks Used Avail %Cap Mounted on
/dev/wd0a 53674680 12678972 38311976 24% /
tmpfs 4944440 16 4944424 0% /tmp
kernfs 2 2 0 100% /kern
ptyfs 2 2 0 100% /dev/pts
procfs 8 8 0 100% /proc
tmpfs 4944440 19064 4925376 0% /var/shm
pin@mybox $ dusage --debug
/dev/wd0a blocks: 16384 size: 219851489280 available: 156925837312
tmpfs blocks: 4096 size: 2531553280 available: 2531545088
kernfs blocks: 512 size: 1024 available: 0
ptyfs blocks: 512 size: 1024 available: 0
procfs blocks: 4096 size: 4096 available: 0
tmpfs blocks: 4096 size: 2531553280 available: 2522284032
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/wd0a 204.8G 58.6G 146.1G 29% ■■■■■■■■■■■■■■■■■■■■ /
kernfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /kern
procfs 4.0K 4.0K 0 100% ■■■■■■■■■■■■■■■■■■■■ /proc
ptyfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /dev/pts
tmpfs 2.4G 8.0K 2.4G 0% ■■■■■■■■■■■■■■■■■■■■ /tmp
tmpfs 2.4G 8.8M 2.3G 0% ■■■■■■■■■■■■■■■■■■■■ /var/shm
Regarding the alignment, I'm failing to see the difference; what am I really looking for?
Edit: Ah! I see it now! It could be the window/client size on the tile, even though we're both using tiling wm's. I know it might be annoying but, it's not a major issue. Could it be something similar to what was reported in this issue, Builditluc/wiki-tui#10?
I used to use dust and it reported correct disk usage. These days, I'm using dua-cli as it gives me the possibility to choose files and folders and remove them. Screenshot attached for dua-cli.
See the disk usage reported by dua-cli.
Apparently there is a difference in the number of blocks: dusage says /dev/wd0a blocks: 16384, while df reports Filesystem 512-blocks. Could this be the reason?
I used to use dust and it reported correct disk usage. These days, I'm using dua-cli as it gives me the possibility to choose files and folders and remove them. Attached screenshot for dua-cli
Yes, I also use dua. Of course, the dusage report is incorrect.
Apparently there is a difference in the number of blocks: dusage says /dev/wd0a blocks: 16384, while df reports Filesystem 512-blocks. Could this be the reason?
Yes, also saw that. Not quite sure why this happens. This is the direct call to nix::sys::statvfs::block_size which computes the disk size.
I've pushed a commit to better differentiate between the blocks (count) and block_size.
This doesn't solve the problem, which I believe is related to the block_size of, in your case, 16384.
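As a sanity check, the figures posted earlier in this thread are consistent with multiplying the block count by the wrong unit. The 2048-byte unit below is inferred from df's numbers, not taken from any output here:

```rust
// Numbers from the df and `dusage --debug` outputs posted above.
fn main() {
    // df reports /dev/wd0a in 512-byte blocks:
    let df_bytes: u64 = 53_674_680 * 512; // 27_481_436_160 bytes, ~26 GiB

    // dusage multiplied the statvfs block count by block_size (16384):
    let blocks: u64 = 13_418_670;
    let dusage_bytes: u64 = blocks * 16_384; // 219_851_489_280 bytes, ~205 GiB

    // The inflation factor between the two:
    assert_eq!(dusage_bytes / df_bytes, 8);

    // Multiplying the same block count by a 2048-byte unit instead
    // reproduces df's figure exactly:
    assert_eq!(blocks * 2_048, df_bytes);
}
```

So the block count itself agrees with df; only the per-block unit differs, by a factor of 8.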
To be honest, I'm confused. Both dua and df report different entries for /:
I've noticed that a long time ago, to be honest. dua i on a cold boot gives 6.46 GB instead of the 6.93 GB in my last screenshot.
So, I needed to install dust, didn't I? Now, dust and df report exactly the same sizes. Maybe one could check how dust is calculating this?
What's the output of dusage --debug now?
Can you give me ssh access to a VM - can be single-core and minimal.
Here's my public key:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGO5ed03raALENWHodgNb5PuynFs/FMAMD3wqQeBw0Yu mihai@galos
What's the output of dusage --debug now?
2022-01-24 17:17 > dusage --debug
/dev/wd0a blocks: 13418670 block_size: 16384 size: 219851489280 available: 156479389696
tmpfs blocks: 618055 block_size: 4096 size: 2531553280 available: 2531545088
kernfs blocks: 2 block_size: 512 size: 1024 available: 0
ptyfs blocks: 2 block_size: 512 size: 1024 available: 0
procfs blocks: 1 block_size: 4096 size: 4096 available: 0
tmpfs blocks: 618055 block_size: 4096 size: 2531553280 available: 2516267008
Filesystem Size Used Avail Use% Disk / INodes Mounted on
/dev/wd0a 204.8G 59.0G 145.7G 29% ■■■■■■■■■■■■■■■■■■■■ /
kernfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /kern
procfs 4.0K 4.0K 0 100% ■■■■■■■■■■■■■■■■■■■■ /proc
ptyfs 1024 1024 0 100% ■■■■■■■■■■■■■■■■■■■■ /dev/pts
tmpfs 2.4G 8.0K 2.4G 0% ■■■■■■■■■■■■■■■■■■■■ /tmp
tmpfs 2.4G 14.6M 2.3G 1% ■■■■■■■■■■■■■■■■■■■■ /var/shm
Can you give me ssh access to a VM - can be single-core and minimal.
I'd need to set one up.
Guess you need Rust >= 1.56, which means I need a mix of stable and current, but that's doable.
How long would you need and when? Maybe I could give you access to my machine.
Yes, rustc 1.58.0 would be nice, but I guess I can install it myself.
Maybe I could give you access to my machine.
Would not recommend it. Either a VM or a Docker with port forwarding.
And if you already have a Docker, then I don't need your machine since I can run it locally.
Let me first at least open an issue upstream in nix, since I feel this is a problem there.
If we still need bare-metal debugging, we can then do it.
Yes, rustc 1.58.0 would be nice,...
The best you can get is 1.57.0 right now. I have 1.58.1, but I've built it from source.
...but I guess I can install it myself.
Using the native package manager, 1.57.0. As for rustup, I wouldn't recommend it. NetBSD uses slightly different paths and some things need linking at compile time, else the resulting binaries won't run.
Would not recommend it. Either a VM or a Docker with port forwarding.
Let me look. What about this, https://man.sr.ht/builds.sr.ht/compatibility.md#netbsd
Let me look. What about this, https://man.sr.ht/builds.sr.ht/compatibility.md#netbsd
Anything I can pull with docker pull? - preferably something from Docker Hub?
I've tried the following in the dusage source folder; it worked as expected:
$ docker run --rm -it -v $(pwd):/src rust:alpine3.14
/ # cd /src
/src # cargo run
Can you also try it? - not sure about how the rust nix library behaves on NetBSD, hence the Docker.
Can you also try on another machine? I think I'm going crazy; each one shows different data:
Sorry for the late reply.
For me, dust and df output exactly the same. Hence, I wrote we might want to look at how dust is doing it. That said, dust takes the current directory as its starting point.
Can you also try on another machine?
I don't have another machine with NetBSD. My other laptop runs Void musl.
But I can push the package to the wip (work-in-progress) repo and ask for builds and input.
not sure about how the rust nix library behaves on NetBSD
So far, I've had no issues with nix. But I'll see what I can find.
Yes, because the blocks() * block_size() remains as it was, so your problem didn't go away.
Not sure what is going on.
Are you on matrix?
I still think we should look at dust.
I've just pushed a package to the wip repo, https://mail-index.netbsd.org/pkgsrc-wip-changes/2022/01/24/msg023134.html and asked two persons to report back here, if possible.
Are you on matrix?
Hopefully, I've managed to create a room :)
Hi! I was asked to test this. In my sandbox (i.e. chroot on a NetBSD system with some file systems mounted inside using https://man.netbsd.org/mount_null.8) I see some differences between df and dusage.
First example:
dusage --debug:
/bin blocks: 484512208 block_size: 16384 size: 7938248015872 free: 5003501928448 available: 4924119449600
/bin 7.2T 2.7T 4.5T 37% ■■■■■■■■■■■■■■■■■■■■ /bin
vs.
df -h:
/bin 924G 342G 573G 37% /bin
Second example:
/disk/6/archive/foreign/xsrc blocks: 600099994 block_size: 32768 size: 19664076603392 free: 1653851684864 available: 1457210949632
/disk/6/archive/foreign/xsrc 17.9T 16.4T 1.3T 92% ■■■■■■■■■■■■■■■■■■■■ /usr/xsrc
vs.
/disk/6/archive/foreign/xsrc 2.2T 2.0T 170G 92% /usr/xsrc
mount table:
/disk/6/archive/foreign/xsrc on /usr/xsrc type null (read-only, local)
/bin on /bin type null (read-only, local)
tmpfs, ptyfs, kernfs, nfs are fine (and have block sizes of 4096, 4096, 512 and 512 respectively).
I guess the null mount is not the problem. Doing the same outside the sandbox, I see:
/dev/dk0 blocks: 484512208 block_size: 16384 size: 7938248015872 free: 5002789339136 available: 4923406860288
/dev/dk0 7.2T 2.7T 4.5T 37% ■■■■■■■■■■■■■■■■■■■■ /
vs.
/dev/dk0 924G 342G 573G 37% /
I usually create my file systems with newfs -b 32k -f 4k, i.e. a block size of 32k and a fragment size of 4k (see https://man.netbsd.org/newfs.8), which has this factor of 8 between them. Perhaps there is confusion between these two? (Just guessing.) But the --debug output has 16384 as the block size, which is neither of the two.
Hope this helps...
Ah, that was with f9e362c
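The factor-of-8 guess checks out arithmetically against both filesystems reported above, assuming fragment sizes of 2048 and 4096 respectively (inferred values; the debug output does not print the fragment size):

```rust
// Cross-checking the reported debug figures against df's numbers.
fn main() {
    // /dev/dk0: 484512208 blocks, block_size 16384 -> dusage's ~7.2 TiB.
    let dk0_dusage = 484_512_208u64 * 16_384; // 7_938_248_015_872 bytes
    // With an assumed 2048-byte fragment size, the total is ~924 GiB,
    // matching df -h:
    let dk0_df = 484_512_208u64 * 2_048;
    assert_eq!(dk0_dusage / dk0_df, 8);

    // /usr/xsrc: 600099994 blocks, block_size 32768 -> dusage's ~17.9 TiB.
    let xsrc_dusage = 600_099_994u64 * 32_768; // 19_664_076_603_392 bytes
    // With an assumed 4096-byte fragment size, the total is ~2.2 TiB,
    // matching df:
    let xsrc_df = 600_099_994u64 * 4_096;
    assert_eq!(xsrc_dusage / xsrc_df, 8);
}
```

In both cases the discrepancy is exactly block_size / fragment_size, which fits the newfs -b 32k -f 4k ratio.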
Not reproducible on Ubuntu 20.04 x64 nor on Ubuntu server 20.04 aarch64.
@0323pin and I will be setting up a qemu env to reproduce.
https://man.netbsd.org/statvfs.5 says that f_blocks, f_bfree, f_bavail, and f_bresvd are in units of f_frsize. However, the code in src/stats.rs multiplies them by block_size().
One Linux man page I found also documents them as being in units of f_frsize (https://www.man7.org/linux/man-pages/man3/statvfs.3.html).
Use fragment_size() instead and I guess it'll be fine.
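A minimal sketch of the suggested fix, using a mock struct with the statvfs(5) field names rather than the real nix wrapper (the f_frsize value of 2048 below is inferred from the df figures in this thread, not printed by any tool here):

```rust
// Mock of the relevant statvfs(5) fields; the real code goes through
// the `nix` crate's Statvfs wrapper instead.
struct Statvfs {
    f_bsize: u64,  // preferred I/O block size
    f_frsize: u64, // fundamental block (fragment) size
    f_blocks: u64, // total blocks, in units of f_frsize
    f_bavail: u64, // available blocks, in units of f_frsize
}

fn total_and_available(s: &Statvfs) -> (u64, u64) {
    // Wrong: f_blocks is NOT in units of f_bsize, so multiplying by
    // block_size() inflates the result on filesystems where the two differ.
    // let total = s.f_blocks * s.f_bsize;

    // Right: f_blocks, f_bfree and f_bavail are in units of f_frsize.
    (s.f_blocks * s.f_frsize, s.f_bavail * s.f_frsize)
}

fn main() {
    // /dev/wd0a figures from this thread, with an assumed f_frsize of 2048.
    let s = Statvfs {
        f_bsize: 16_384,
        f_frsize: 2_048,
        f_blocks: 13_418_670,
        f_bavail: 0,
    };
    let (total, _) = total_and_available(&s);
    assert_eq!(total, 27_481_436_160); // ~26 GiB, matching df
}
```

On Linux ext4 and friends f_bsize and f_frsize are usually equal, which is why the bug only surfaced on NetBSD's FFS.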
Nice catch. I've pushed the update.
@mihaigalos Built from git-HEAD just now 😄
- Would you mind releasing a new version?
- Could you please add a Cargo.lock file and regenerate it on every new release? (This simplifies the package updates quite a lot.)
When the above is in place, I can merge the package and send you a PR to update the README.md with install instructions.
Thank you for your patience and thank you @0-wiz-0 for testing and finding the problem.
Closing this now :)
Hi @0323pin, nice to see it working!
This issue is not yet solved, since it needs at least a test to guarantee correct behavior.
Concerning Cargo.lock - what do you mean? Do you expect it in the generated archives, or perhaps in the *.deb files?
I'll create a follow-up issue for the incorrect text wrapping in columns.
This issue is not yet solved, since it needs at least a test to guarantee correct behavior.
Not sure what you mean? I've just built it from source and it's reporting the correct values.
Concerning Cargo.lock - what do you mean?
Just run cargo generate-lockfile inside the master directory.
Not sure what you mean? I've just built it from source and it's reporting the correct values.
Tonight I'll add a test which compares dusage and df outputs, requiring them to be within a tolerated threshold in order to succeed. This was previously not the case.
As a general rule, a regression / bug is not fixed until there is a test for it; future regressions are thus prevented.
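A sketch of the tolerance check such a test could use (within_tolerance and the 5% threshold are illustrative choices, not taken from the repository):

```rust
// Two reported sizes agree if they differ by at most `pct` percent
// of the larger one.
fn within_tolerance(a: u64, b: u64, pct: f64) -> bool {
    let (lo, hi) = if a < b { (a, b) } else { (b, a) };
    if hi == 0 {
        return true;
    }
    (hi - lo) as f64 / hi as f64 * 100.0 <= pct
}

fn main() {
    // With the old block_size() math, dusage was off by 8x vs df:
    assert!(!within_tolerance(219_851_489_280, 27_481_436_160, 5.0));
    // With fragment_size() the figures match df exactly:
    assert!(within_tolerance(27_481_436_160, 27_481_436_160, 5.0));
}
```

A small threshold still makes sense even after the fix, since df and dusage may sample the filesystem at slightly different moments.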
Ok for the cargo generate-lockfile.
I expect to release tonight, then. Let me know if this is inconvenient.
Let me know if this is inconvenient.
No worries. If I can't do it tonight, I will do it tomorrow morning.
Feel free to close the issue when you feel like it is done.
Reopening because this was incompletely closed via GitHub automation.
Ok. 0.2.0 is now live.
Awesome. I should have time to push the package into the main branch today.
But, I can't see the Cargo.lock file on my phone. Is it inside the source tarball?
Which file are you specifically referring to? No tarball; I have only deb, tar.gz, and zip.
Seems only the dusage-*-.tar.gz contains the Cargo.lock.
Let's open up a new issue where you describe your problem in better detail, what do you think?
Let's open up a new issue where you describe your problem in better detail, what do you think?
No need for that, if you say it's inside the tarball (.tar.gz = tarball) ;)
Oh, yeah, sorry. 😅
Let me know if anything is missing in a separate issue!
I'll just fork the repo and will submit a PR soon :)