Dedicated server with 4x6TB drives. What are my options?

Hello everyone!

I have a dedicated server with four 6TB drives and I'm wondering what setup people would recommend.

The server has an i7, 32GB of RAM (non-ECC), and will run Debian 9. My primary usage will be media streaming and Blu-ray rip storage. It will also be an email server with one or two accounts and a file server via NextCloud. It won't see more than 4 or 5 users.

Ideally I'd prefer something that can offer redundancy and performance (although my server load is pretty light). Space isn't a top priority, but I'd like to get at least 12TB usable.

It seems like raidz2 is one way to go, since I'd end up with 12TB usable and could lose 2 drives, but it seems to lack write performance. RAID 10 seems like another way to go since it has better performance, but only one disk failure is guaranteed to be survivable in the array.

raidz (raid5) seems like a complete no-go from what I’ve read.

Love to hear what people would suggest!

RAID falls into "choose one" from the available disk configurations; there's not a whole lot to discuss.

You haven't mentioned the age of your hardware, but as they're 6TB disks I'm going to assume this is reasonably new, unstressed gear. If you're willing to keep an eye on it (or set up some email monitoring), I would say single-disk redundancy is enough for a 4-disk array.
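
For the email monitoring side, smartmontools' smartd will do it with a single line in /etc/smartd.conf (the address is obviously a placeholder):

DEVICESCAN -a -m you@example.com

That scans every disk it finds, watches SMART health and attributes, and mails you when something starts failing. Once you're on ZFS, its event daemon can send similar notifications.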

With "big" (>4TB) disks the conventional wisdom changes, however: you have to weigh your disks' read error rate against their size to determine whether you will statistically hit a read error during a resilver. Typically I would recommend using a "smart" filesystem to approach this problem, but you stated you are using Debian 9, so ZFS may not be a fun experience for you (I don't know what the support is like; it varies wildly from distro to distro).
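
To put rough numbers on that (assuming the common consumer-drive spec of one unrecoverable read error per 10^14 bits read): rebuilding a 4-disk single-parity array means reading all three surviving 6TB disks, which is roughly 3 × 4.8 × 10^13 ≈ 1.4 × 10^14 bits, so statistically you'd expect about one read error somewhere during the resilver. Disks rated at 1 per 10^15 change that picture completely, which is why the error-rate spec matters as much as the capacity.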

If ZFS spooks you then you're going to be looking at the Linux RAID drivers, which create MD arrays (google "Linux create RAID"). Based on the size of your disks, if you can't find solid data on their read error rate I would err on the side of caution and build a RAID-6 array.
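
For reference, a minimal mdadm sketch (the device names are just placeholders for your four data disks; adjust to what your system actually shows):

sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

The second line lets you watch the initial sync; the last two are the usual Debian steps so the array assembles again at boot.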

RAID-10 (stripe of mirrors) is the only redundant option engineered with a performance increase in mind, but I wouldn't recommend it here as it halves your capacity and still only guarantees single-disk redundancy. RAID-6 might read a bit faster than RAID-5 but will probably have similar write performance. If high performance is your goal then an SSD caching drive is probably the best option; 250GB SSDs can be had for pennies these days. But note that if you're caching writes, you'll need redundancy in the cache to guarantee never losing data.
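
If you do end up on ZFS, for example, bolting a cache on later is a one-liner (pool and device names below are placeholders):

zpool add <poolname> cache /dev/sdX
zpool add <poolname> log mirror /dev/sdX /dev/sdY

The first adds a read cache (L2ARC), which doesn't need redundancy; the second adds a mirrored log device for synchronous writes, which is the part you don't want sitting on a single SSD.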

Final note, whatever anyone tells you: DON'T USE HARDWARE RAID! The benefits are outweighed by the risks. I can personally attest to having motherboard RAID controllers "forget" their configuration and, when rebuilding the array, losing everything. Unless you're working with enterprise gear, the hardware RAID controllers in motherboards are a bolt-on feature that shouldn't be trusted with your data!

TL;DR:

  1. Use a smart filesystem like ZFS if you can
  2. If unsure use RAID-6 on 6TB drives
  3. Don't rely on RAID for performance; use an SSD cache.
  4. Don't use hardware RAID.

Thanks for the detailed reply!

Yeah, I totally forgot to mention this is a dedicated server from Hetzner hosting. So the hardware should be in good condition, but it is an unknown. I plan to check the disks out before putting any meaningful data on them. They claim they will replace anything that's defective.

ZFS on Debian 9 is not a problem; it's in the official repos. Bit of a learning curve, but there are good resources out there. With ZFS, isn't RAID-6 the same as raidz2? Or am I just having trouble seeing the differences between the two?

Yes, raidz2 is the ZFS equivalent of RAID-6.

I have been running a ZFS pool for nearly 2 years with 0 issues. It might seem daunting at first, but ZFS was designed to be easy to administer, and it really is.

If you go the ZFS route you'll find the online documentation is very comprehensive. The BSD guys are usually more helpful than the Linux guys, as ZFS has been around longer on BSD.

This is sorta a noobish question but I can’t seem to find a straight answer anywhere:

With ZFS, do you install Debian on just one drive (sda), then format the other drives after the install, and then create the pool via a command such as:

sudo zpool create -f [pool name] raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

Or should creation of the pool be done before installing Debian somehow? (via rescue or recovery environment in my case)

It depends on whether you want your rootfs to be on the ZFS pool, or whether you want one drive with ext4 (or whatever) for the OS and a separate ZFS data pool.

For the second option, just install normally onto that drive, then install ZoL (ZFS on Linux), and then create the pool from the remaining disks.
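
Roughly, on Debian 9 that looks something like this (the pool name and by-id paths are placeholders for your non-OS disks; ZFS sits in the contrib section IIRC, so that needs to be enabled in sources.list):

sudo apt install linux-headers-$(uname -r) zfs-dkms zfsutils-linux
sudo zpool create tank raidz /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3

Using the /dev/disk/by-id paths rather than /dev/sdX is generally recommended, so the pool doesn't care if the drive letters shuffle after a reboot.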

For having a ZFS rootfs, it is a bit more complicated.

General advice would be to get a separate device to boot off of. This goes for all RAID arrays, not just ZFS.

If you're trying to install Debian onto the ZFS pool, you will first need to create a pool with all the desired devices, then create a ZFS dataset inside it (normal procedure) and add it as your / mount. The tricky part (which I have never attempted) is actually getting it to boot. ZFS does not integrate 100% nicely with systemd, so the actual boot process could have a laundry list of errors to fix before you have a working system. The other point of note is that while there is a grub module for ZFS, you will need to double-check manually that it is installed so that grub can fetch the kernel from /boot.
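
A quick sanity check for that last point (paths are what I'd expect on Debian, so double-check on your system):

ls /usr/lib/grub/*/zfs.mod
grub-probe /boot

The first should turn up a zfs module for your grub platform, and the second should print "zfs" once /boot actually lives on the pool.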

I used this scary-looking readme for some of the information in this post; however, it targets Ubuntu.

TL;DR: Your mileage may vary, and this might kill your cat / burn your house down. If possible, ask Hetzner to install a boot SSD in the system. I'm assuming you won the machine in a server auction; I think they can usually do small upgrades for a fee.

Thank you both for the replies. Seems like ZFS on root is pretty tricky, and getting an SSD would have simplified things. However, after contacting Hetzner about the possibility of an upgrade, it turns out all the bays in my server are already filled (there are only 4 bays).

So after a lot of googling I found this neat little script, and it worked like a charm for me. I have all 4 of my drives in a zpool with a bootable install of Debian 9.

That's great to hear, I hope it serves you well for the coming years.

Now that you have ZFS, there are a few "nice to have"s to consider.

You can set up snapshots practically for free and use them as crude file revisioning (this has saved my ass a couple of times).
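
A minimal sketch, assuming a pool called tank with a dataset called data (both placeholders):

zfs snapshot tank/data@before-upgrade
zfs list -t snapshot
zfs rollback tank/data@before-upgrade

Snapshots are copy-on-write, so they cost essentially nothing until the data diverges, and you can browse old file versions read-only under the dataset's hidden .zfs/snapshot directory without rolling anything back.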

Also, you should set up a scrub to run weekly: add /usr/bin/zpool scrub <poolname> to your cron. This will make sure your pool stays healthy and will report any errors.
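
For example, a root crontab entry like this scrubs every Sunday at 3am (the pool name is a placeholder; double-check where zpool lives on your system with "which zpool"):

0 3 * * 0 /usr/bin/zpool scrub tank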

Also out of curiosity what did your final capacity come out as?

Being on Linux, I'm using BTRFS RAID 1. Plain, only one-drive tolerant, and it's worked for like 5 years.

Plus, it has different-sized drives, and I need to add a new drive and balance it again soon.
Total devices 5 FS bytes used 7.51TiB
devid 1 size 2.73TiB used 2.52TiB path /dev/sdd
devid 5 size 3.64TiB used 3.43TiB path /dev/sdc
devid 6 size 5.46TiB used 5.24TiB path /dev/sde
devid 7 size 2.73TiB used 2.52TiB path /dev/sdg
devid 8 size 1.82TiB used 1.60TiB path /dev/sdb
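
When I do add the new drive, it should just be (device and mount point are placeholders for my setup):

sudo btrfs device add /dev/sdX /mnt/pool
sudo btrfs balance start /mnt/pool

The balance then spreads the existing data across the new disk; on a filesystem this size it takes a good while.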

It's mostly games and video content. It does not need to be super fast.

I have reinstalled the OS maybe a dozen times and it just shows up again as a BTRFS pool.

Yeah, I'll have to look into snapshots; might not be a bad idea. I usually don't delete things accidentally, but if I do, snapshots would be nice to have.

I already found this out in my research about ZFS, but thanks for the pro tip :)

The total capacity is about 11TB usable. zpool list reports 21TB, but I'm pretty sure that's the 'raw' amount of storage in the pool, not accounting for the parity disks.
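
From what I've read, zpool list is indeed the raw figure, while zfs list shows the space left after parity overhead, which lines up with the ~11TB.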

See, that's the thing with btrfs: I hear from some people who have used it for years without a problem and say it's solid as a rock, but then you read Debian's status on btrfs and it sorta scares me off. I have no doubt it's the future, so we'll see a few years down the road when I upgrade servers again.