Help with deciding configuration for home server

So I currently have a home server with 3x HDDs on a 5950X/X570/1070 Ti machine. I use it to run Plex, Roon music server, game servers, file hosting, etc., but Plex is the main storage hog. I currently have 2x 16TB and 1x 8TB drives. I am beginning to run out of space, and I also have no redundancy on data that would be extremely time consuming to replace. This started as a small Plex server I set up for myself and has ballooned into a much bigger project. I have pretty decent knowledge when it comes to PC hardware and Windows; I’ve been playing with and building PCs since the late 90s, and I’m not afraid to try something new. However, my knowledge of RAID, ZFS and servers is pretty limited, and when I look into it, I’m rather overwhelmed by the flood of information on the subject. So I’m coming to you guys for some help on deciding what to do next. Sorry if I ramble a bit; I want to get my full use case and knowledge level across so any help rendered is applicable to my situation.

I would like to run some form of redundancy with the capability of expanding to more drives later on, be that ZFS, traditional RAID, etc. Based on all my research, ZFS is just the best option. However, from my understanding it requires its own OS, rather than working within another OS like Windows or Linux or running from the BIOS, so I believe that means I’m not able to use the same machine to host a ZFS array and a normal OS like Windows? Am I correct in that assumption? If so, are there any reliable options that would allow me to keep Windows on the same hardware the drives are connected to? I’m mainly trying to reduce complexity, cost and power usage. I also need minimal downtime if a drive fails, so I think anything other than something like RAID 0 or 1 is out, but correct me if I’m wrong, as I can’t afford long rebuild times and need to be able to add more drives as I go. My friends have become quite dependent on the Plex server for their watching habits. I mainly care about reliability, expandability, minimal downtime, and the least amount of hardware complexity (fewer machines rather than more). Drive space efficiency and speed are irrelevant to me. If a second machine is required, I am willing to go down that path; I have some old machines sitting around that I could repurpose, like a Ryzen 3600X or a 4000 series i7, and from my understanding a drive array doesn’t require that strong of a CPU, so I think those would be adequate.

So my plan right now is to add more drives to the 2x 16TB: maybe start with 4x new 16TB drives, run them in RAID 0 or 1, transfer the data from the 2x existing 16TB drives, wipe those, and then add them to the array, totalling 6x 16TB drives, which would in reality be 48TB vs the current 32TB, and then add 2x 16TB more as time goes on and I need more space.

The data stored on this is mountains of movies and shows ripped from Blu-rays. This is a bit of a friend-group Plex I’m running: all my friends give me their physical collections to be ripped, and then they all get access to the Plex. So it’s not data that can’t be replaced, but it would be insanely time consuming to redo. Since it’s just video files and fewer than a dozen people access the server, data transfer rates are really not a concern. I have 2 friends who also help upload the Blu-rays they rip, so I would prefer to keep this all accessible through Windows, as they do not have any experience using Linux.

So if I do end up needing a second machine, how would the two PCs connect? Is this all possible through my network, or would I need another way to connect them? I need the drives to be accessible as if they were local, so Plex doesn’t just randomly lose them and lose all the changes I’ve made (Plex doesn’t always get the info right, or I simply change the covers). I have a 2.5G switch, so network transfer rates are more than the drives will ever need.

Any help would be greatly appreciated, and thank you in advance. If I haven’t made something clear, feel free to ask.

That can be done if you go with Unraid. It has a pretty clever redundancy system that allows you to mix drives of different sizes in one pool. The other solution is using MergerFS + SnapRAID to achieve almost the same result on another OS.

It doesn’t, it’s just a filesystem. You can use the ZFS module on any Linux OS (there’s a port for Windows, but I have no clue how to work with it). ZFS effectively wants all the drives in a vdev to be the same size, so you won’t be able to make good use of the 8TB in the same pool as the 16TB ones.

To do that you can’t use ZFS, because you can’t add drives to a pool after it has been created. Unraid allows you to do that, and it uses ZFS as a base filesystem, but not for redundancy (ZFS is a filesystem like ext4 or Btrfs, but it also supports redundancy natively).

Don’t count on a desktop Ryzen CPU for power saving; they tend to idle quite high power-wise and often don’t reach the lower C-states correctly. A 4000 series i7 is also not as efficient under load as the 3600X. What kind of efficiency are you looking for?

How have you been doing it so far? Are your friends on the same network, or do you use a VPN to let them connect to your LAN and upload? I’m pretty sure your current setup can be replicated on a different Linux box.

Yeah, you could put a smaller drive in the machine your friends access to dump movies on, and use whatever software you want to move them across to the Plex server while deleting them from the other machine.

First, let’s add redundancy.
Also consider adding a backup location and possibly a remote backup (the 3-2-1 backup rule).

40TB. That’s already a lot of data. You will need to add at least one extra drive matching the largest drives in your existing set to have enough redundancy to recover from a drive failure (1x 16TB in your case).

As with any risk-mitigation strategy, you need to be clear about which risks you want to protect against. One extra drive will allow you to lose a single drive, until the redundancy is fully restored. It’s possible to improve on that in a number of ways, but ultimately that means adding more drives.

Protecting against a drive failure is not a backup; a backup protects against a bunch of other risks (e.g. accidental file deletion).

There are a bunch of options, each with their pros and cons.

First off, Windows and Linux offer software RAID.
Typically* you need at least 3 drives of identical size to start a RAID 5 array (which protects against a single drive failure), and 4 drives to start a RAID 6 array (which protects against the failure of two drives).
* There are some tricks in Linux to start a RAID 5 array with 2 drives.
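
To make the capacity side of that concrete, here’s a rough Python sketch (the drive counts and sizes are just examples taken from this thread, not a recommendation):

```python
# Rough usable-capacity math for classic RAID levels (illustrative only).
def usable_tb(drives: int, size_tb: float, level: int) -> float:
    """Usable capacity with 1 parity drive (RAID 5) or 2 parity drives (RAID 6)."""
    parity = {5: 1, 6: 2}[level]
    if drives <= parity:
        raise ValueError("not enough drives for this RAID level")
    return (drives - parity) * size_tb

print(usable_tb(6, 16, 5))  # 80.0 TB usable, survives one drive failure
print(usable_tb(6, 16, 6))  # 64.0 TB usable, survives two drive failures
```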

As you expand the total size of a RAID array, read performance improves; write performance is still limited to roughly that of a single drive.

In case of a failure, you replace the failed drive and issue a software command to add the new drive to the array (that part doesn’t take long), but then the redundancy for the full array is recalculated and written to the new drive, regardless of how much data is stored in the array. This is a very slow process, but it takes place while the array is online, so your friends can access Plex during that time.

RAID 5/RAID 6 arrays can be expanded (even reduced) in Linux. I assume also in Windows, but I have not tried that. Expansion is very slow (think 1+ days for a 16TB drive), but it takes place online (meaning your friends can use the Plex collection while this is going on) and is only possible one drive at a time.
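
For a rough sense of where that timescale comes from (assuming a sustained rewrite rate of around 150 MB/s, which is just a ballpark figure for a large HDD, not a measured number):

```python
# Back-of-the-envelope rebuild/expansion time estimate (illustrative only).
drive_bytes = 16e12      # one 16TB drive
rewrite_bps = 150e6      # assumed ~150 MB/s sustained rewrite rate

hours = drive_bytes / rewrite_bps / 3600
print(f"~{hours:.0f} hours")  # ~30 hours just to rewrite one full drive
```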

This is basically the old-school way of storing data with redundancy across multiple disks.

From here there are multiple approaches that set out to improve on this situation.

  • Unraid adds redundancy to a set of drives and removes the requirement of identically sized drives. Performance does not scale as you add more drives.

(Replying to the earlier claim that drives can’t be added to a pool after creation: that’s incorrect. It is possible to expand ZFS pools with more drives. However, best practice is to expand an existing pool with the same level of redundancy as the existing configuration, and reaching that goal can be challenging for home labbers.)

  • ZFS does not implement RAID, but a similar concept called raidz, with the goal of faster resynchronization in case of failures. ZFS at this point does not allow you to expand a raidz array (although this feature is expected to become available soon). Instead it adds the concept of a “vdev”: ZFS pools consist of at least one vdev, and it’s possible to add any number of vdevs. Each vdev manages its own redundancy (e.g. in the form of a raidz array; there are other options). While it is possible to expand a ZFS pool this way, it is often not desirable for home labbers, as it requires you to add multiple drives each time you want to expand the pool. If you still do this, you typically maintain the same level of redundancy you started with (say 33% in the case of 3 drives with 1 drive as redundancy), as opposed to the reduced level of redundancy you get from a RAID 5 expansion (adding a drive to a 3-drive RAID 5 array reduces the redundancy overhead from 33% to 25%); see the sketch after this list.
    Additionally, ZFS adds a whole host of great and desirable features that you can read all about on this forum, but they’re not relevant to this discussion.
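
A quick Python sketch of that redundancy-overhead arithmetic (purely illustrative; the drive counts are just examples):

```python
# Fraction of raw capacity spent on redundancy (illustrative only).
def redundancy_overhead(drives: int, parity_drives: int) -> float:
    return parity_drives / drives

# Growing a RAID 5 array one drive at a time dilutes the overhead:
print(redundancy_overhead(3, 1))          # ~0.33 (3 drives, 1 parity)
print(redundancy_overhead(4, 1))          # 0.25  (4 drives, 1 parity)

# Adding a second 3-drive raidz1 vdev keeps the ratio the same:
print(redundancy_overhead(3 + 3, 1 + 1))  # ~0.33 (two raidz1 vdevs)
```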

Running an array of multiple drives with redundancy is possible in both Windows and Linux. Most home labbers tend to use Linux or other open-source OSs because of reliability, licensing, and other concerns.
I think in your case the “best option” is the one you’re comfortable with. I recommend starting out with a Windows software RAID (today! this week!) so you don’t lose data.
But I see this only as a first step towards a more comprehensive data management strategy. The next step should be to try out and get familiar with other ways of storing data.

Based on your stated requirements, Unraid seems to be a great solution. Dust off some of your existing old hardware to play with it and get comfortable.
I would also give TrueNAS a chance. It is based on ZFS.

That’s one option, and maybe not the worst. However, as you try out other options you may find that you can run Plex and other software on top of Unraid, TrueNAS, or similar systems.
After some time you may find that transitioning your Plex install onto your data-hosting infrastructure offers more benefits.

I think you’re already a couple of steps into a journey that will take you a lot further. It also may turn into an odyssey.

Good luck!


You’re right, I keep confusing pools and vdevs. You can’t expand a vdev, but you can expand a pool.

Have you looked into using a standalone NAS, such as a Synology, QNAP, Asustor, etc.? They have single units with 8 bays, which may be a good option. The remaining workloads, such as the game servers, could run on another desktop with a more traditional OS experience.

Synology’s software can be accessed remotely, and there is likely a fairly simple way to let friends upload files to it without you having to act as the middle-man.

Sorry, I should have been more clear: I don’t plan on adding the 8TB to the array. That will remain a standalone drive; it’s kind of the dump drive for extra stuff, and the stuff on there doesn’t need redundancy. All drives in the potential array will be the same size, 16TB.

That’s what makes me think I might need a second machine; I’m more willing to set up a second machine than let go of Windows. I’ve heard very little about ZFS on Windows, and what I have heard isn’t good. It’s not just for them, it’s honestly for me too. I know how to use Linux, but my ability to fix issues is massively dwarfed by my Windows knowledge. I used to be an IT repair tech for consumers and I’ve been using Windows since the mid 90s, so I can usually resolve every problem under the sun, whereas with Linux I’m mostly copying other people’s work because any major issue is beyond my knowledge. I do a lot of things on this PC other than just letting it sit, so it will be a constant risk when trying things out. If it just sat, like a drive machine would, I wouldn’t be worried.

Peak power draw is the concern for me; idle is not an issue. It’s not an energy savings thing, it’s more about what the breaker here can handle. My main machine pulls 800 watts on the regular, plus the current server machine pulling 100-200W on the regular. This place is wired up very poorly, so half the place is on the same breaker, and my roommate also has a gaming machine that pulls 600W on the regular, plus our TV and game consoles. The current breaker can handle it, but before we got a stronger breaker (the previous one was very old), I upgraded to a more efficient window AC unit and got a UPS, I would sometimes trip the breaker in the summer when we were both gaming with the AC on. So I don’t have a ton of headroom; I already feel like I’m flying too close to the sun with the current power draw lol.

I use HFS, which gives them access to the drives on the machine. They place rips in a “sort” folder and then use AnyDesk to remote in and rename files, move them where they need to go, and things like that. I work a lot, so they need to be able to do this stuff on their own without me.

Maybe one day; that’s outside of my financial scope at the moment. I know that’s definitely the ideal way of doing things, but the current drive cost is already hurting financially.

I was under the impression that RAID 5 & 6 were not accessible during a rebuild… that really changes things; it’s the only reason I wanted to go with RAID 0 or 1. I would totally be OK with starting with 2 redundant drives on a RAID 5 or 6.

Well, I need Plex on the strongest hardware; encoding takes a good bit of horsepower, which is why it’s on the 5950X/1070 Ti platform at the moment. The old i5-4460/RX 480 wasn’t cutting it. I also need to be able to use HandBrake to compress the raw Blu-ray files to a more reasonable size, which also takes a lot of power.

I have, and the problem with those solutions is that they limit my upgrade path in the future: I will already be at 6 RAID drives, which only leaves 2 more bays before I’m out of drive bays. I would much prefer a DIY solution that can be upgraded and expanded as the need arises. As I said, I’m not afraid of learning new things, I just need to be pointed in the right direction. It’s like trying to build a plane with zero assistance when you’ve only ever built cars: some knowledge overlaps, but a lot needs to be learned, and without guidance on the proper info it can get messy. I’m trying to avoid making costly mistakes along the way.


So it looks like RAID 5/6, or ZFS/Unraid, either one on a second machine, is the way to go, based on what I’m seeing from you guys. Am I correct in that? Remember, expandability is something I require; drive performance doesn’t matter. Are there redundancy pitfalls with Unraid?

I’m also still not clear on how I would connect these two machines. Is this possible through just a network? Based on my experience, Windows network drive access is flaky at best, not to mention I’m not even sure how that would work with a Linux machine. If there’s a hardware solution outside of the network, I’m definitely willing to consider it. Any possible solutions you guys could offer?