Explain to me like I'm 5: Why would I use TrueNAS Scale over Unraid?

Howdy yall,

This is my first post here so I hope I’m not violating any rules. I’ve watched Wendell’s videos on TrueNAS Scale along with some other creators. I have an AMD EPYC 4313P that I’m setting up with ~100TB of storage for a media server/unvr/VM server. I used Unraid in my last setup and really enjoyed it but since then TrueNAS Scale has gained an enormous amount of traction.

It seems everyone that once praised Unraid is now making TrueNAS Scale content (and even finding fault / bugs with it) but not saying anything about Unraid. I just got in the motherboard / chassis today so I’m looking to get started and am not sure which I should pick. I like the idea of using ZFS over XFS but beyond that not sure why I would choose one over the other.

Thanks so much in advance :slight_smile:

Or you can go ceph directly :slight_smile:
I guess the youtubers are holding the “we are going ceph!” content back for next year :wink:

My boss has mentioned Ceph before in passing.

What sets Ceph apart?


I thought Ceph was designed for multiple drives on multiple nodes/distributed systems/pools of storage?
Compared to XFS single drive+ single host, or ZFS’s multiple drives single host?

ZFS might be similar to unraid’s soft raid analogue?


This is also how I understood it.


Yeah, I wouldn’t really bother with it unless you’ve got a lot of drives. Even a large homelab setup isn’t likely to have more than say 20, which isn’t really the intended target for something like ceph or gluster.

The nice thing about unraid is it’s a lot easier to add or remove single drives. With a 6 drive RAIDZ1 you can’t just add one extra drive if you need more space. You’ve got to add another six drives. Whether or not that’s a thing depends on your specific use cases.
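To make that growth constraint concrete, here's a rough command-line sketch. The pool name `tank` and the disk names are made up, and this reflects ZFS versions without the newer RAIDZ expansion feature, so treat it as an illustration rather than a recipe:

```shell
# Original pool: one 6-disk RAIDZ1 vdev (~5 disks' worth of usable space)
zpool create tank raidz1 sda sdb sdc sdd sde sdf

# You can't grow the existing RAIDZ1 vdev by a single disk; this is
# refused with a "mismatched replication level" warning unless forced:
zpool add tank sdg

# The supported growth path is adding a whole second RAIDZ1 vdev:
zpool add tank raidz1 sdg sdh sdi sdj sdk sdl
```

So with 6-wide vdevs, capacity grows in 6-drive increments, which is exactly the pain point being described.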


The only time I have a strong preference for UnRAID is when you have a pile of different disks, and you want to run them in a NAS.
I have a soft preference for UnRAID if you’re running a system as both a NAS and a VM platform.
But if you’re just running a NAS and buying sets of matched disks I’d prefer TrueNAS. It’s also been a while since I stuck my head in TrueNAS’s virtualization, so maybe it’s gotten better.
The big thing is if you’re mixing disks UnRAID is still my preference.
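The mixed-disks point above is easy to put numbers on. Here's a toy Python sketch of the usable-capacity math (simplified: it ignores overhead, and the drive sizes are just an example pile of leftovers):

```python
def unraid_usable(drives_tb):
    """Unraid-style single parity: the largest drive holds parity,
    and every other drive contributes its full size as data."""
    return sum(drives_tb) - max(drives_tb)

def raidz1_usable(drives_tb):
    """One RAIDZ1 vdev: every member is effectively truncated to the
    smallest drive, and one drive's worth of space goes to parity."""
    return (len(drives_tb) - 1) * min(drives_tb)

mixed = [4, 8, 8, 12]  # TB, a typical mismatched pile
print(unraid_usable(mixed))  # 20 TB usable
print(raidz1_usable(mixed))  # 12 TB usable
```

With matched disks the two schemes come out the same; the gap only opens up when the sizes vary, which is why mixed piles favor Unraid.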


This is the way I see it.

Also, ZFS is obsessive about NOT serving bad data, preferring to serve no data rather than bad data.
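That "no data rather than bad data" behavior comes from checksumming every block. This is a toy Python illustration of the idea, not how ZFS is actually implemented: a read that fails checksum verification raises an error instead of handing back corrupt bytes.

```python
import hashlib

class ChecksumStore:
    """Toy model: every block is stored alongside its checksum,
    and reads verify before returning data."""
    def __init__(self):
        self.blocks = {}  # name -> (data, sha256 digest)

    def write(self, name, data):
        self.blocks[name] = (data, hashlib.sha256(data).digest())

    def read(self, name):
        data, digest = self.blocks[name]
        if hashlib.sha256(data).digest() != digest:
            raise IOError(f"checksum mismatch on {name!r}: refusing to serve bad data")
        return data

store = ChecksumStore()
store.write("family-photo", b"precious bits")
assert store.read("family-photo") == b"precious bits"

# Simulate on-disk bit rot behind the filesystem's back:
data, digest = store.blocks["family-photo"]
store.blocks["family-photo"] = (b"precious bitX", digest)
try:
    store.read("family-photo")
except IOError as e:
    print(e)  # the read errors out instead of returning silent corruption
```

With redundancy (mirrors/RAIDZ), real ZFS would additionally repair the bad block from a good copy during a read or scrub; this sketch only shows the refusal-to-serve part.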

I use ZFS a lot at home, but it can be real restrictive; I have several pools’ worth of space, and can just replicate a whole array if needed.

ZFS does not allow the flexibility built into Unraid, which can lose a couple of drives and still have some usable data, if of varying quality.

I never used Unraid, and don’t personally want the benefits it offers, even the flexible growth over time of the storage pool, but I can understand why people would, and it looks like it does well at what it does.

The worst thing with ZFS is its inflexibility; if you fill your chassis with drives and have them all in the pool, then when* one dies, you need to use an external dock or loose cables to resilver a replacement, or remove the dead drive first and replace the now-missing provider.
*all drives die eventually, btw.

But, I will continue to use ZFS myself…

sorry this does not help OP much


You’ve answered my question! I’m just starting my HomeLab so until I build up to having a dedicated storage server and a dedicated VM server, I’ll stick with Unraid. The flexibility of adding different disks is definitely a priority with my use case.


This. I had no idea you had to add drives like that. Thank you very much for pointing this out.

What are you talking about?? This is very helpful. I appreciate you being honest about the shortcomings of ZFS despite the fact that you use it. Dying on the cross for a product gives false confidence to those who follow your recommendation. For me that flexibility unraid provides is enough to push me towards it for the time being.


You can add single drives to ZFS mirrors, but not to RAIDZ vdevs. I’ve heard that the ZFS devs are working on a RAIDZ expansion feature which might lessen this limitation, but I wouldn’t plan on that being available any time soon.


I was half joking :slight_smile: . But I kind of guess we will see a ceph migration video, the next time Linus has problems/runs out of space with his storage system :slight_smile:

I do however run a small 2-node Ceph cluster (not an officially recommended setup, but nothing stops you from running it even on a single node; it just doesn’t offer you the benefits of Ceph, and performance is worse than just a HW RAID or, I guess, ZFS). I use it to evaluate the CephFS filesystem for a BCP case where we can’t use NFS as a shared filesystem, because some external vendor SW doesn’t work well with NFS.


It is not so much a ‘limitation’ as it is a fundamental design difference.

UnRAID has very minimal ‘redundancy’ and very minimal actual data verification steps. BUT it can take that pile of HDs sitting in your back room and make a usable storage tank out of them.

ZFS expects you to provide it with a perfectly capable hardware setup, so that it won’t bog down user access while doing things like scrubs and logging writes.

if UnRaid is a bucket, ZFS is a DHS evidence locker.


The fact UnRaid requires a physical USB stick as its boot device, with the license tied to it, is 2 points right there that make it a non-starter for me:

Non-redundant USB based boot device? No.

Licenses? No.

Also means it’s impossible to virtualize, I assume.

And then there is HDFS, a toddler high on speed:
Provide it with multiple fully functional storage servers and stack a pile of Java on top, and it will stick a finger in its nose and throw files and copies of files around wildly, because it wants to go fast!


They almost put “oops” in the name, because they knew.


Do you prefer to go on dates with Asians or Latinas?

Preferences and tastes of the end user, simplified to a great extent. :slight_smile:



Found the fullstack developer.


not exactly…

Unraid = I have a box of random hard drives and a 10-generation-old PC to put them in. I want to store stuff I download from … websites… If it’s all gone tomorrow due to lightning or a tornado, that’s for the best, really.

ZFS = I am a data hoarder, and/or I have 30 years of family pictures, archives from 7 previous phones, my wife runs a wedding-photo side job, and the local church has had me do their church camp I.T. stuff for the last 22 years. Also, I prepare tax statements for myself, my family, and 17 friends.