trapexit / mergerfs

a featureful union filesystem

Home Page: http://spawn.link


Can mergerfs split a folder across multiple drives?

54601 opened this issue · comments

commented

I use folders to categorize my movies. Since mergerfs doesn't split files, and apparently also doesn't split files in the same folder across different drives, the maximum capacity of each category is limited by the capacity of a single drive.

I wish mergerfs could store files on a different drive when the drive currently used by their parent folder is full.

I don't really understand what you're claiming or asking. The fundamental purpose of mergerfs is to create a union of branches. That's its most basic function. There is no restriction that files be created on a single branch. It will select the branch based on the policy, which you have full control over. See the docs.
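For illustration only, a minimal fstab entry of this shape (the /mnt/disk* branch paths are hypothetical) pools several drives and applies the mfs create policy:

# Hypothetical sketch: pool all /mnt/disk* branches at /pool and create
# new files on whichever branch currently has the most free space (mfs).
/mnt/disk* /pool fuse.mergerfs category.create=mfs,moveonenospc=true,dropcacheonclose=true 0 0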

commented

I merged three 6TB drives at /pool using mergerfs. In /pool, I can create as many files as I like, as long as the combined storage they take is less than 18TB. However, if I create a subfolder in /pool, /pool/Movie for example, /pool/Movie is reported as full once the files it holds total more than 6TB instead of 18TB. I think it may be possible to allow /pool/Movie to have the same capacity as /pool.

Because that's how you've configured mergerfs to behave.

You must change the create policy: https://github.com/trapexit/mergerfs#functions-categories-and-policies

https://github.com/trapexit/mergerfs#policy-descriptions
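For context, mergerfs's default create policy is epmfs ("existing path, most free space"), which only considers branches where the parent directory already exists; that is exactly what produces the single-drive ceiling described above. A sketch of inspecting and switching the policy at runtime via mergerfs's xattr interface, using this thread's /pool mountpoint:

# Query the create policy currently in effect:
getfattr -n user.mergerfs.category.create /pool/.mergerfs
# Switch it to mfs on the live mount, no remount needed:
setfattr -n user.mergerfs.category.create -v mfs /pool/.mergerfs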

commented

I'm sorry, I may have misunderstood the docs. My setup is:
/mnt/FL* /pool fuse.mergerfs cache.files=partial,dropcacheonclose=true,category.create=mfs,moveonenospc=true 0 0
According to my understanding, category.create=mfs should choose the branch with the most free space when creating a file, but I still get 'no space left on device' when trying to create a folder in /pool/Movie. Are my settings wrong?

Are you positive you've got it set? Have you checked the runtime value to confirm? The behaviors are exactly as described in the docs. It chooses the branch with the most free space after considering the filters.

Please provide all the details requested in the support ticket template and confirm the runtime settings using getfattr -d /pool/.mergerfs

commented

I'm terribly sorry for the confusion I caused. It turned out that two of my drives each had a folder with the same name, causing mergerfs to be confused, and as a result the settings in /etc/fstab were never in effect. After deleting one of the folders, the whole thing works brilliantly. I must have written that folder directly onto one of the drives and forgotten about it.
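For anyone hitting something similar, a quick diagnostic sketch (assuming the branches match /mnt/FL*, as in this thread) that lists top-level directory names present on more than one branch:

# Print each branch's top-level directory names, then keep only the
# names that occur more than once (GNU find's -printf is assumed).
find /mnt/FL* -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | sort | uniq -d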

I'm not sure I understand what you mean. mergerfs shouldn't "be confused" with that setup. It works just like it is described in the docs.

As for fstab. Yes, you have to remount the filesystem for the settings to take effect. That's true of pretty much all filesystems. There is a runtime API but if you change /etc/fstab it doesn't automatically update.
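A sketch of the remount route, assuming the pool is defined in /etc/fstab at /pool:

# Unmount and remount so the edited fstab options take effect:
sudo umount /pool && sudo mount /pool
# Then dump the runtime settings to confirm the new values are live:
getfattr -d /pool/.mergerfs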

commented

This is how I solved the problem:
Fact:
My setup is /mnt/FL1:/mnt/FL2:/mnt/FL3 => /pool.
However, FL1 and FL3 both contained an 'others' folder. During mounting, FL3 could not be mounted into /pool. After deleting /mnt/FL3/others, the mount works properly.
My theory:
I believe that after I mounted the drives into the pool, I accidentally created 'others' directly in /mnt/FL3. I think /pool could still be accessed after that. However, when I altered the settings in /etc/fstab and ran 'sudo systemctl daemon-reload', mergerfs may have failed silently, so the new settings were never applied, including the important 'create=mfs'. As a result, the pool was reported as full when a single drive filled up, matching my first setup instead of the modified one.

I don't know if my guesses are correct, but I am really thankful for your patience and invaluable help!

[Screenshot from 2023-11-10 09-00-44]
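A sketch of how a silently failed or stale mount like this could be caught (the mountpoint name is from this thread; the systemd unit name is an assumption based on standard fstab handling):

# Is a mergerfs filesystem actually mounted, and with which options?
findmnt -t fuse.mergerfs -o TARGET,SOURCE,OPTIONS
# If systemd manages the fstab entry, its mount unit for /pool is pool.mount:
systemctl status pool.mount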

OK, so I hit a snag. I have made a new pool with 24x 4TB HDDs to replace my old pool of 24x 3TB drives.

The old pool was using mfs,

and I wanted the new pool to use eplfs.

However, because rsync creates all the folder paths when it first starts, all the data tries to go onto the first drive instead of overflowing onto the other disks.

I don't want the files spread out across every disk; I want drives that aren't being used to be able to spin down. What options could I try, apart from manually moving ~4TB chunks to each drive?

/mnt/SnapRaidArray/* /SnapRaidArray fuse.mergerfs comment=x-gvfs-show,rw,defaults,allow_other,use_ino,cache.files=partial,dropcacheonclose=true,allow_other,category.create=mfs,moveonenospc=true,minfreespace=50G,xattr=passthrough,fsname=SnapRaidArray 0 0
/mnt/SnapRaidArrayNew/* /SnapRaidArrayNew fuse.mergerfs comment=x-gvfs-show,rw,defaults,allow_other,use_ino,cache.files=partial,dropcacheonclose=true,allow_other,category.create=eplfs,moveonenospc=true,minfreespace=50G,xattr=passthrough,fsname=SnapRaidArrayNew 0 0

Or should I just use lfs?
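One possible approach, offered only as a sketch (branch glob taken from the fstab line above): replicate the directory tree onto every new branch before syncing file data, so a path-preserving policy such as eplfs finds an existing path on each disk and can overflow as drives fill:

# Copy only directories, no files, from the old pool to each new branch;
# the rsync filters keep every directory ('+ */') and skip files ('- *').
for d in /mnt/SnapRaidArrayNew/*; do
  rsync -a -f'+ */' -f'- *' /SnapRaidArray/ "$d"/
done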

Please don't comment on old threads. If you aren't reporting issues please use the Discussions board.