Open Source NAS with tiering?

I’d like to set up a NAS, mostly for storing VMs from my virtualized environment. Is there any open-source OS out there comparable to Dell EMC Unity? I was considering the Unity community edition since this is for a home lab, but its license only allows 4 TB of total usable space, which I think is a bit low.
Ideally I’d use the drives I have available: 5 x 4 TB HDDs, 2 x 512 GB NVMe drives, and 2 x 512 GB SATA SSDs, with one hot spare per drive type.
So, can anyone recommend a NAS OS with tiering?

Check out TrueNAS.
TrueNAS Core is the tried and trusted FreeBSD-based storage appliance. iXsystems also made a newer product, TrueNAS Scale, which is Debian-based and offers GlusterFS as a distributed network filesystem if you want a storage cluster.

Open source, with ZFS as the filesystem of choice. SMB, NFS, iSCSI… plenty of options for addressing your storage. And no licensing costs whatsoever.

Probably the most popular storage appliance right now for home users.
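Under the TrueNAS web UI this all maps to plain ZFS. As a rough sketch of carving out VM storage (pool and dataset names like `tank`/`vms` and the sizes are made up for illustration):

```shell
# Create a parent dataset for VM storage with compression enabled
zfs create -o compression=lz4 tank/vms

# Create a zvol (block device) to export over iSCSI to a hypervisor;
# volblocksize should be tuned to the VM workload
zfs create -V 200G -o volblocksize=16K tank/vms/vm1

# Or share a dataset over NFS instead
zfs set sharenfs=on tank/vms
```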


Thanks for the response.
But TrueNAS Core doesn’t support tiering, does it? I’d like a three-tier system: NVMe drives for the most-used parts of the VMs, SSDs for lesser-used parts, and HDDs for data that is mostly static. This might be more enterprise-grade than what’s available for a home lab, but I know the community edition of EMC Unity offers it, albeit with a low total capacity limit.

Maybe this:

On top of a standard TrueNAS Scale install?
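For reference, autotier (from 45Drives) pools plain directories into one tiered FUSE mount. A hypothetical three-tier setup might look roughly like this; the section and key names are from memory of the autotier README, so double-check them against the project docs before using:

```shell
# Hypothetical autotier setup: three backing directories become one
# tiered FUSE filesystem. Verify config keys against the 45Drives docs.
cat > /etc/autotier.conf <<'EOF'
[Global]
Log Level = 1
Tier Period = 1000

[Tier 1]
Path = /mnt/nvme
Quota = 80%

[Tier 2]
Path = /mnt/ssd
Quota = 80%

[Tier 3]
Path = /mnt/hdd
Quota = 90%
EOF

# Mount the combined tiers as a single filesystem
autotierfs /mnt/vmstore -o config_path=/etc/autotier.conf
```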


First thing to ask is what problem you want to solve.

TrueNAS uses ZFS, and in ZFS there is basically only one storage tier. Well, two tiers actually, if you count special vdevs for small blocks and metadata, which is exactly where HDDs perform worst. On top of that it has several intelligent cache tiers, including memory itself (the ARC). Memory > storage.
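As a sketch of those two ZFS “tiers” plus the read cache, using the OP’s drive mix (all device names here are placeholders):

```shell
# Hypothetical pool layout for 5x 4TB HDD + 2x NVMe + 2x SATA SSD.
# Device names (sdX / nvmeXn1) are placeholders -- adjust to your system.

# Main data vdev: RAIDZ2 across four HDDs, fifth HDD as hot spare
zpool create tank raidz2 sda sdb sdc sdd spare sde

# Special vdev (mirrored NVMe) holds metadata and small blocks --
# the closest thing ZFS has to a real fast tier
zpool add tank special mirror nvme0n1 nvme1n1

# SATA SSDs as L2ARC read cache (no redundancy needed; it's only a cache)
zpool add tank cache sdf sdg

# Optionally route small file blocks to the special vdev as well
zfs set special_small_blocks=32K tank
```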

The idea is intriguing. But what happens when you write to your pool and hit the fast tier’s quota? You probably won’t hit it while idle, so now you have inbound writes (which are supposed to be fast, because “fast tier”) plus lots of reads needed to evict data down to the lower tiers. Unless a balancing daemon runs all the time, shuffling data and producing wear, you will run into problems. You also need redundancy for every storage tier, which means lower storage efficiency or higher cost. And you’d be buying all your flash drives with “SLOG level” endurance, because every bit crosses each tier at least once.

With ARC running in parallel, every time the tiering engine moves data, the ARC hit counters for those blocks go up, pushing the data back up in priority right after it was evicted to a lower tier. I’m unsure whether this combination would be productive, although I like both approaches on their own. Just my first thoughts, without in-depth knowledge of autotier.


TrueNAS/ZFS has tiering of a sort:

RAM (ARC), L2ARC (SSD), disk


Agreed. It looked to me like the OP was specifically after an open-source solution that mimicked the behaviour of pre-NVMe-era enterprise storage SANs, where a god-awful amount of effort and money was spent buffering reads and writes to slow media, because SSDs and NVMe drives were incredibly expensive.

These systems have now been made obsolete by the decreasing cost and increasing performance of NVMe-based solutions, unless you specifically need a huge amount of slow storage media, and that in turn has been displaced by cloud storage and different archiving policies.
They are still sold for unbelievable amounts of money to enterprises that either already bought into them and need to expand, or are unwilling to spend a fraction of the storage cost on reworking their processes to use a less expensive approach.

In a homelab environment it can still be worthwhile, if only to understand the basics, get some hands-on experience with how it’s supposed to work, and/or weigh the relative benefit of such a system at VM scale against simply organizing the filesystems inside the VMs to account for the different performance tiers… to each their own.


Yeah, you’re kind of right. The storage system looks at which blocks are more or less in use and puts them in whichever tier suits them best. For instance, certain parts of Windows are used more than others, so those blocks would land on the SSD tier while the rest goes to spinning rust. The storage controller doesn’t know or care what the blocks contain, only how often they’re used. But maybe it’s time for me to shell out some extra dineros for SSDs and see what kind of deduplication I can get out of the different NAS systems these days. :slight_smile:
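If you do try dedup on ZFS, be aware it’s famously RAM-hungry, since the dedup table has to stay resident. A quick sketch for testing it on one dataset (pool and dataset names are placeholders):

```shell
# Enable deduplication on a test dataset only -- the dedup table
# grows with every unique block in the dataset
zfs set dedup=on tank/vms

# Simulate deduplication across the pool to estimate the ratio
# before committing to it
zdb -S tank

# Check the actual achieved ratio afterwards
zpool list -o name,dedupratio tank
```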

Well, then the system would pass writes through to the lower/slower tier and still get native HDD speed, so no bottleneck there.
Then, once that I/O burst is over, it would proceed to move/copy data from the higher tier down into the lower one.