
Tiered storage, but with file size as the condition. Does it exist?

I am a Linux beginner (so far I only have an Ubuntu NAS running universalmediaplayer and a network share) and I want to build a new NAS in the future.
The plan is to run 4x 2 TB Intel 600p SSDs (or 605p when they come out), 9x 10 TB Seagate enterprise HDDs, and probably a 900p as write cache.
All of them should appear as one network drive (so far so good).
But I want to sort the files by size automatically and store them on either SSD or HDD, because the NAS will hold a LOT (1 million+) of small files (5 kB-5 MB) plus a good number (<50k) of large files (100 MB+).
I made a picture to illustrate the idea :slight_smile:

So, is there a solution for this? Can a network folder write only to the cache drive, with files then moved to the SSD or HDD pools according to size, while everything still appears under the same folder structure?
I know a bit of Python and could write a script to move files from one folder to another (probably a ten-liner), but I have no idea if/how I could do this at the disk/logical-volume level.
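For what it's worth, the mover step really is only a few lines. Here is a rough shell sketch of the idea; all directory names are made up, and it stages everything in a throwaway temp directory so it can run anywhere. In a real setup CACHE/SSD/HDD would be the actual branch mount points.

```shell
#!/bin/sh
# Sketch of a size-based mover (hypothetical paths, demo data).
ROOT=$(mktemp -d)
CACHE=$ROOT/cache; SSD=$ROOT/ssd-pool; HDD=$ROOT/hdd-pool
mkdir -p "$CACHE/sub" "$SSD" "$HDD"

# Fake workload: one small file, one large (sparse) file.
printf 'tiny' > "$CACHE/sub/small.txt"
truncate -s 200M "$CACHE/big.bin"

CUTOFF=100M   # files above this size go to the HDD pool

# Large files -> HDD branch, keeping the relative directory layout.
find "$CACHE" -type f -size +"$CUTOFF" | while IFS= read -r f; do
    rel=${f#"$CACHE"/}
    mkdir -p "$HDD/$(dirname "$rel")" && mv "$f" "$HDD/$rel"
done

# Everything left (the small files) -> SSD branch.
find "$CACHE" -type f | while IFS= read -r f; do
    rel=${f#"$CACHE"/}
    mkdir -p "$SSD/$(dirname "$rel")" && mv "$f" "$SSD/$rel"
done
```

Because both branches keep the same relative layout, a union filesystem mounted over them would still show one merged folder tree.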

I don't care which operating system is used if you have an idea (as long as it is free or a one-time purchase; Unraid would be OK).

I haven't done it myself, but I think it should be possible to build something like this with MergerFS.

Take a look at the documentation at trapexit/mergerfs on GitHub (can't include links), especially the "tiered caching" section.
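As a rough sketch (branch paths are hypothetical), a layout like yours could be pooled with an /etc/fstab entry along these lines. `category.create=ff` ("first found") makes new writes land on the first listed branch, i.e. the cache drive; a periodic mover script then shifts files to the other branches while the merged tree stays the same.

```
# hypothetical fstab line: cache branch first, then SSD pool, then HDD pool
/mnt/cache:/mnt/ssd-pool:/mnt/hdd-pool  /mnt/storage  fuse.mergerfs  allow_other,category.create=ff  0 0
```

The tiered-caching section of the docs covers fancier setups (read/write branch modes, other create policies), so treat this as a starting point rather than a recipe.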


Thank you, that is actually EXACTLY what I needed.


bcache is a more common solution and is filesystem-agnostic. The file-size switch you're looking for is:

echo 5M > /sys/block/bcache0/bcache/sequential_cutoff

Thank you for the suggestion, but I don't see how this solves the "sorting" issue: I want the files to stay permanently on SSD or HDD, and wouldn't the cache move them out over time? Anyway, I guess the topic can be closed, since mergerfs solves the problem perfectly and gives me maximum control over the flow of files :D.
