Torn Between ZFS and BTRFS for a new general purpose storage pool, need advice

The question should be put like this:

  • Q: Can you run ZFS?
  • A1: Yes
  • Then run ZFS
  • A2: No
  • Then run BTRFS*

*Unless you want to run anything above a RAID mirror, in which case run md (see the sketch below).
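
For the footnote, a minimal sketch of the md route (device names are placeholders, and btrfs on top is just one option among several):

# 4-disk RAID6 with mdadm, then a regular single-device filesystem on top
$ mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
$ mkfs.btrfs /dev/md0
$ mount /dev/md0 /mnt/pool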

I’ve never heard a single person besides the not-Linux Linus be happy with Unraid. Most people use it for a while, then drop it. Besides people like me, I’ve never heard anyone complain about Proxmox. Proxmox is a good choice and is more widely used in the enterprise than Unraid. And you can control it via the CLI; you’re not limited to the GUI. Ubuntu is fine too, but even less preferable than Proxmox (the latter runs on actual Debian).


For the storage, I don’t know how you’d configure your pool, but I agree with Dutch_Master: getting a second PC that runs on peanuts and using it as a NAS would be preferable. I wouldn’t go as far as getting a 6-bay one; one with 4 bays and 2 or 3 NVMe drives should do. You can run TrueNAS Core or Ubuntu on it (or Proxmox).

Anyway, I would put those 4 drives in striped mirrors and, if your motherboard supports it, up the RAM. Your motherboard has 8 slots, so I would buy another kit of 4 sticks. ZFS will use that RAM for read caching in the ARC.
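
A rough sketch of that layout (pool name and device paths are placeholders; in practice you’d want /dev/disk/by-id paths):

# two mirrored pairs, striped together (RAID10-style)
$ zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd
$ zpool status tank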

I would not make an L2ARC; you are not really going to need that much capacity for caching reads. What I would do, if I had the space to add the additional disks, is add a metadata special device:

This basically functions like an index for the entire pool, so ZFS doesn’t have to spin the rust to find stuff; it knows each file’s exact location, which should improve access times.
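
A sketch of adding one to an existing pool (pool name and NVMe paths are placeholders). The special vdev should be mirrored, because losing it means losing the whole pool:

# mirrored metadata special vdev on two NVMe drives
$ zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
$ zpool status tank    # the new vdev shows up under "special"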

I would not go for a SLOG either. I ran a Samba server for 70 people on a Core 2 Duo with 8 GB of RAM and two (md) RAID1 pools. Later I moved to a dual-core Celeron HP ProLiant MicroServer Gen8 (because that Core 2 Duo was running on borrowed time) with the same amount of RAM (only DDR3 this time) and newer drives in RAID10 (still md). Do you think Minecraft will really benefit from all the caching and stuff? I highly doubt it, unless you’ve got a few TB of data being accessed frequently. I don’t think Minecraft can benefit from a large L2ARC.
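
If you want actual numbers before deciding, watch the ARC hit rate for a while; assuming the OpenZFS userland tools are installed, something like:

# ARC size and hit/miss rates, refreshed every 5 seconds
$ arcstat 5
# or a one-shot overview
$ arc_summary | head -40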

Correct.

You get 4x the reads and 2x the writes. If you go with 2 disks in RAID1, you get 2x the reads, resiliency, and no write increase. If you go with 2 disks in RAID0, you get 2x reads and 2x writes. Combine them (striped mirrors, i.e. RAID10) and you get 4x reads, 2x writes, and resiliency.

It’s just a very good file system, nothing else. How you use it matters more. But it does have nice features, like compression, which basically means free storage. Use zstd.
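
Turning it on is a single property; compressratio then tells you how much you’re actually saving (dataset name is a placeholder, and compression only applies to newly written data):

$ zfs set compression=zstd tank/data
$ zfs get compression,compressratio tank/data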

Besides the L2ARC (which is just caching), I am not aware of any option for tiered storage, which would be more useful.

In your config, I would use the spinning rust for data, make a separate pool from the SSDs, and put the VMs’ boot drives there. Well, I’d be using ZFS, so I’d just make a filesystem for each, like I already do:

$ zfs list
NAME                                 USED  AVAIL     REFER  MOUNTPOINT
sgiw                                 144G  8.83T       96K  none
sgiw/kvm                             144G  8.83T       96K  none
zroot                                772G   150G       96K  none
zroot/ROOT                          19.6G   150G       96K  none
zroot/kvm                            751G   150G       96K  none
zroot/lxd                            960K   150G       96K  legacy

zroot is my root mirror for my system, where I put the OS drives, and sgiw is my spinning rust. I cut the cruft out of the output (it was around 30 lines) to keep it short. Under zroot/kvm I have my OS file systems, and under sgiw/kvm I have their data drives. As you can see, I currently have more stuff on zroot than sgiw, but that’s going to change.
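
If you go that route, the per-VM layout is just nested datasets; a sketch with a made-up VM name:

# OS filesystem on the SSD pool, data filesystem on the spinning rust pool
$ zfs create -p zroot/kvm/somevm
$ zfs create -p sgiw/kvm/somevm
$ zfs list -r zroot/kvm sgiw/kvm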

^ This. Really no point in having a separate SLOG device (the ZIL itself always lives in the pool anyway).
