First Home NAS Build

I really like both of them. Unraid is significantly more capable for some things, but both are great. You can run Docker containers on OpenMediaVault as well as virtualize, but it will probably require getting your hands dirty more than it does with Unraid.

Unraid ties licensing to the USB drive, so it requires that the USB drive has a unique serial, and not all drives have one. You can test this with their bootable media tool ahead of time, though. I don't see how the drive would fail quickly, since Unraid writes to it very little, and you can back up the USB live to keep redundant copies if this is a major concern. I don't see how the USB would fail unless it was simply a bad stick that would have failed otherwise. Unraid itself only consumes a few hundred MB of RAM.

This is a big "it depends". Unraid is NOT a RAID, so it relies on single-disk performance, which is why it has a cache system. ZFS obviously has an advantage there, since you can effectively stripe vdevs and utilize the throughput of more than one disk. There are a bunch of scenarios where Unraid could be faster than FreeNAS in terms of disk performance, but I would say they don't matter a whole lot for your usage.

I don't think so. Windows Server has its own advantages, but I don't think they apply here. You can run a VM of Windows Server on Unraid without taking a noticeable performance hit.

For Unraid? Absolutely. Especially if you're going to run VMs, because they can live in cache instead of hitting your disks.

It's a big "it depends" based on how much data you are going to need to write and how many VMs you want to have. The upside to Unraid is you can have multiple cache disks and add to them as you go.

It should be, but based on the thread you posted it looks like Plex is having issues with it. Maybe look at something like a used 1050 instead.

You'll need 10GbE on both sides, and maybe a switch if you don't want a direct connection. You would also need to run things on a cache, or start utilizing software/hardware RAID, to saturate that 10Gb connection; a single disk is going to be slower.
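To put rough numbers on that, here's a back-of-envelope sketch (the per-disk throughput figure is an assumption, not a measurement):

```python
import math

# 10GbE line rate converted to MB/s (ignoring protocol overhead)
LINK_MB_S = 10 * 1000 / 8          # -> 1250 MB/s

HDD_MB_S = 200                     # assumed sequential speed of one HDD

disks_needed = math.ceil(LINK_MB_S / HDD_MB_S)
print(f"One HDD fills ~{HDD_MB_S / LINK_MB_S:.0%} of a 10GbE link")
print(f"~{disks_needed} striped HDDs needed to saturate it")
```

So a single spinning disk uses well under a fifth of the link; you need a stripe of many disks, or a fast SSD cache, before 10GbE pays off.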

As far as what to get: I would avoid the cheap ones because I don't think they're well supported in Linux yet. Intel seems to be the go-to here.


I should go into more detail about Unraid because it's not perfect either, but I don't have a ton of time right now.


You have a bunch of questions, but regarding transcoding…

I see that you already decided against FreeNAS, but since you are still choosing an OS, you should also know that FreeNAS does not support GPU acceleration. Stick to Linux for GPU transcoding.

The guys at serverbuilds.net strongly recommend 6th-gen Intel Quick Sync or later (you have already considered 9th gen) as a cheap and power-efficient transcoder for Plex/Emby/Jellyfin (link), supporting up to 21 streams!


So in terms of data I'm looking at a 10–12TB drive. I'm not really sure how to quantify the amount of data I'll need to read and write? We're talking mostly media (video) here, and most files are no larger than 2GB each. I was thinking of just a cheap 250GB SATA or NVMe SSD?

I'm slowly getting there in terms of picking out the components. One point I'm stumped on is the CPU. I initially thought it had to be Intel for Quick Sync, or otherwise any CPU with an Nvidia GPU, but apparently that's only if you need hardware transcoding? Software transcoding is also an option, and the Ryzen CPUs have the raw performance to achieve that?

My thinking was initially to get an i3 9100, which is a quad core, and use the iGPU to begin with to see how I get on, then expand from there. However, apparently the i3 is considered a minimum specification for Plex servers, so it may not hold up in a couple of years' time? I've been looking into some Ryzen CPUs as options, given their comparatively low prices for high core counts, which could come in handy for VMs? I only plan on 1 VM (Windows for Blue Iris) at the moment, but as I get more into the NAS experience I may find something else I need a VM for.

I was looking at a Ryzen 5 3600 or a Ryzen 7 2700X. The 3600 is available for slightly more than the i3 9100, and the 2700X is just bordering on too expensive; it's also older Zen+, and the higher TDP makes me think it may not be a good option? For the CPU I'm trying to keep my budget to about £150/$190, so maybe there's something you could recommend?

I should probably go ahead and explain the quirkiness that is Unraid and cache. When you use a cache drive in Unraid, you can choose which 'shares' will utilize cache. Shares are used as your directory system, even if they aren't actually 'shared' to the network. When you write anything to Unraid, it will put it on cache, assuming you have one. This is where it gets a little weird… there is a scheduled application called the mover, which determines whether data gets flushed to your disks or stays in cache, based on settings you choose per 'share'.
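A sketch of those semantics in Python, using the setting names from Unraid's "Use cache pool" share option (simplified; real behavior also depends on minimum-free-space settings and the mover schedule):

```python
# Simplified model of Unraid's per-share cache/mover behavior.

def write_target(use_cache: str) -> str:
    """Where a new write lands for a share with this cache setting."""
    # "yes", "prefer", "only": new files land on the cache pool first;
    # "no": files bypass cache and go straight to the array.
    return "array" if use_cache == "no" else "cache"

def mover_action(use_cache: str) -> str:
    """What the scheduled mover does for files on this share."""
    return {
        "yes": "move cache -> array",     # cache acts as a write buffer
        "prefer": "move array -> cache",  # share should live on cache
        "only": "leave on cache",         # never touches the array
        "no": "nothing to do",            # cache was never used
    }[use_cache]

print(write_target("yes"), "|", mover_action("yes"))
```

The "yes" setting is the write-buffer case described above; "prefer"/"only" are what you'd use for the VM and Docker shares mentioned below.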

You will have a 'share' for your virtual machines and one for Docker containers. This share will probably always live in cache, so your cache needs to be larger than the combined virtual disks of your VMs, plus whatever data you might store in containers. You can organize and prioritize your other 'shares' as you see fit.

I think maybe you might not quite get what's going on when you 'transcode', or why it's done. When you want to play a file you have over the network, but you don't have enough bandwidth, you need to reduce the bitrate of that file to suit the network. What happens is the file is decoded, then encoded using different settings and codecs to reduce its bitrate; thus it is 'trans'-coded. This step is completely unnecessary when you have sufficient bandwidth for the file you're playing back.
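The decision itself is simple enough to sketch; the `headroom` factor here is an illustrative assumption, not a Plex setting:

```python
# Toy illustration of the "do I need to transcode?" decision:
# compare the file's bitrate to the bandwidth available to the client.

def needs_transcode(file_bitrate_mbps: float, link_mbps: float,
                    headroom: float = 0.8) -> bool:
    """True if the stream must be re-encoded to a lower bitrate.

    `headroom` reserves part of the link for overhead (an assumed
    value for illustration only).
    """
    return file_bitrate_mbps > link_mbps * headroom

# A 40 Mb/s 4K remux over a 20 Mb/s uplink must be transcoded...
print(needs_transcode(40, 20))    # -> True
# ...but direct-plays fine on a gigabit LAN.
print(needs_transcode(40, 1000))  # -> False
```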

My personal server uses a 3600X, but the i3 probably makes sense for you, given it has an iGPU. My gut is to recommend a 3300X for its price point, but you'd still need a GPU with any of the AMD options you specified. A 3200G would possibly be viable, but I'm not certain about its transcoding support down the road should you need it.

When you run out of headroom on either platform you're probably going to be buying a new board and CPU anyway, since AM4 is nearing its end and Intel doesn't stick with compatibility on their sockets very long.

So, for example, if I copied over 500GB of media it would get stored in the cache? I don't think I need the videos for my Plex streams cached.
Also, a quick question about network transfer speeds: what sort of length of time would I be looking at to copy 500GB of video files to the Plex server over a 1Gb/s network? Slower than it takes to transfer to an external USB 3 HDD? By how much?

Should I be testing my bandwidth? How do I do this?

I understand I am likely to need a new board when I look at a CPU upgrade in the future, but I would like to buy a CPU now that isn't going to be struggling in 1–2 years' time.

Will 4 cores be plenty to run Plex and a Windows VM running Blue Iris 24/7? I'm happy to stretch the budget a little more to get 6 cores if it will be useful to me.

Ok I think I’ve settled on the final build:
uk.pcpartpicker.com/list/RPMLQq

Back on an Intel CPU now. I weighed up the upfront cost and the ongoing electricity costs of the 3300X plus a dedicated GPU, and concluded I should go Intel with an iGPU, albeit a more powerful Intel (still for less than the cost of a 3300X and GPU).

I know I don't need a 550W PSU, but it was the cheapest with a good efficiency rating (80+ Gold), semi-modular, and from a trusted brand.

Would appreciate it if someone could look it over and let me know if there’s something I’ve not considered.

Thanks

First thing: is there any reason you've not gone for an M.2 drive? It's a lot more convenient not having to put a 2.5" drive anywhere (says the guy with his NAS's boot drive taped to the PSU). I'd also probably look into spending like £20 more on a better PSU.

One thing that hasn’t been discussed yet is if you actually plan to store your transcoded files. Personally, I have Plex’s transcode directory pointing to /dev/shm, so nothing is stored permanently, and it means I don’t have loads of writes to my boot drive/don’t need a scratch disk. The obvious downside is that you have to transcode files as many times as they’re being watched, but if you’re watching locally, you should just be using direct stream.

In my experience, running ZFS on Ubuntu, Plex is pretty lightweight. I have an i5-4590T, and use software encoding because, as has been mentioned by @blooper98, Quick Sync is pretty bad pre-Skylake. I share my library with quite a few of my mates, and haven’t had a problem when 4 people have been streaming at the same time. Once the transcode buffer is full (have mine set to a few minutes), transcoding in real time isn’t especially taxing.

It could be worth looking at T-series chips, either new or used, if you’re worried about power consumption. Basically Intel’s desktop series with the power envelope of their mobile SKUs.


I’ve upped your trust level, so you should be able to post links properly now.


It depends on settings. The mover will either keep it in cache or move it to disk, based on what your share is set up for.

This depends on the speed of the disk. I don't think it will be too long, but I can't tell you exactly because I have no idea how fast your disk is.
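For a rough feel, here's the arithmetic with assumed real-world throughputs (gigabit Ethernet tops out around 110–118 MB/s after overhead; an external USB 3 HDD is limited by the spinning disk, not the bus):

```python
# Rough copy-time estimate; both throughput numbers are assumptions.

def copy_hours(size_gb: float, mb_per_s: float) -> float:
    """Hours to copy `size_gb` at a sustained `mb_per_s`."""
    return size_gb * 1000 / mb_per_s / 3600

GIGABIT_MB_S = 110   # assumed sustained SMB transfer over 1Gb/s Ethernet
USB3_HDD_MB_S = 130  # assumed sustained external HDD throughput

print(f"1GbE:      {copy_hours(500, GIGABIT_MB_S):.1f} h")   # ~1.3 h
print(f"USB 3 HDD: {copy_hours(500, USB3_HDD_MB_S):.1f} h")  # ~1.1 h
```

So the gap is modest: both paths are bottlenecked by disk speed more than by the link.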

Not unless you’re having issues. You said you were going to play back over the local network so my point is you never need to transcode.

2 cores would likely be enough. You aren't doing that much here.

I'm not a fan of that PSU, but I know they're in short supply right now. I don't know what you have available to you, so if that's it, then go for it. I don't see any issues with the build otherwise. I would probably go with a single 8GB stick of RAM, which you can double down the road if you want.

Do you mean the disk inside my PC or the disc that I will be putting in the NAS?
I’m putting a WD Red 10TB in it.

What is wrong with this PSU? If you want, you can go to PCPartPicker UK https://uk.pcpartpicker.com/ and look at the PSU section; it will show the UK component stores and their prices.

I don't imagine needing more than 8GB; I mean, the Synology boxes come with 2–4GB, and as you've said, I could probably even run my setup on a dual core.
I want to get 2 sticks of RAM for dual channel. I can always replace the sticks down the road. The CPU is more of a hassle, so I'm making sure I pick out the best based on price and performance for the tasks.

The disk in the NAS, of course.

I haven't dug into them much, but not all PSUs are created equal in their ripple, OCP, and general build quality. I know this one to be a solid unit: https://uk.pcpartpicker.com/product/3H2bt6/seasonic-focus-550-w-80-gold-certified-fully-modular-atx-power-supply-focus-gx-550

Not at first you won't, but maybe down the road, once you've realized what you want to do with the NAS and begin running more and more virtual machines.

Dual channel is nice to have, but nothing you're currently talking about doing with this system would directly benefit from it. Replacing sticks is more costly in the long run than just buying more of the same thing. It's your call if you really want that, but I don't believe you'd notice a bit of difference.


Allow me to pontificate to you with my anecdotes.

When I first started my data hoarding journey, I had a 1TB external drive, one of those 2.5" enclosures. My friend wanted one that didn't require power and traded me a 3TB one for it. I used that for a good long while until I bought my first actual NAS, a Synology DS216j, and put 2x 4TB drives in it. I was living large with my NAS solution, until it started to get full. So I bought 2 more 4TB drives and built a FreeNAS machine; I was now rocking 8TB of storage in 2 mirror vdevs. Then I started to wonder what might happen if I lost all this data… I had built a new computer, so I just used my old one as a new NAS and started running Unraid alongside FreeNAS. I shucked a couple of 8TB drives for Unraid, and now I had 6 total drives with 16TB of capacity. I began backing up my data to FreeNAS. Over time I grew annoyed with FreeNAS and its shortcomings and switched to Proxmox. I also grew my services to include web servers, a reverse proxy, torrenting, game servers, Pi-hole, and more…

I eventually rebuilt the Unraid server with 32GB of RAM to run as many services as my heart desired.

All of which I then back up to a ZFS array running on Proxmox.

It's a special tier of hoarding autism I have going on. Send help. :wink:

All of this is to say, you will outgrow your solution. I think everyone does.


Damn, I think I’m on the Adubs path of migrating through a few different setups until I find my Goldilocks setup. I guess part of the fun is the journey, am I right?


I'm trying out lots of solutions, and tbh, the more I learn, the more I want to just install Ubuntu or CentOS and call it a day. Rolling your own solution seems to give the most freedom and the least hassle, IMO… at least after the initial setup.


That is pretty much what I did. I just installed Xubuntu 20.04 and have 1 large drive in my system, but my redundancy setup is a bit different. I already have most of my files on my HTPC, plus a second backup on an external hard drive, so I never really got into the whole RAID setup. For streaming I use a program called Jellyfin, which is a fork of Emby. It works pretty well: I do all my encoding on my server and transfer a copy of my files to my HTPC over Samba when I am done. It has worked well for me so far, though I plan to expand what my server does soon.

Yeah, you are right, I will likely outgrow whatever I pick at some point. I just feel like I'm going to kick myself in a year or two's time for not going with the right CPU, RAM, etc. the first time.

Speaking of shucking, I've never done it before but have been looking into how, as Western Digital are having a sale on the WD Elements drives, so I can get a 12TB for significantly less than a WD Red 10TB. The drives in these are effectively Reds, right? CMR? How is the noise level? Some have said theirs clunk a fair bit from something called wear levelling.

I would stop worrying about the future because future proofing is an exercise in futility.

I don't know about the Elements. I did the Easystores, which were white-label drives running the same firmware as Reds. I believe they are CMR. I don't really notice much noise out of them.

It was this guy who did a teardown of the Elements drives and found they were basically white-label Reds (he does a lot of shucking of Easystores too).

For someone who is planning, for now, to use just 1 hard drive, is it advisable to use a shucked drive? People mostly use these in arrays from what I gather, so reliability maybe isn't an overall decider.

What is this modification I will have to do to the drive? One option is to use a Molex-to-SATA converter, but I have read it can lead to issues of data loss (even fires!).


The 3.3V pin mod is what you need to do for "some" shucked drives; it depends on the drive and the power supply. I just removed those pins completely with a razor blade. You can also tape over them. You never need 3.3V to make your disks work. I'd say shucking makes sense when you are buying multiple disks at a time, so you can afford to have a cold or hot spare.

For single disks you might not want to do that. Your call. They seem reliable to me. I know that in the US my warranty isn't void for opening the enclosure so long as I didn't damage it. For you, this probably isn't the case.

I would say don't count out Seagate here either. Ever since WD dismantled HGST, I've noticed a clear downward trend in the quality of the company itself. They keep doing things that frankly don't make sense to me, and I believe they are headed in the wrong direction.

I'm actually really tempted to get 2 12TB Essentials drives, given how cheap they are and the one-time WD 10% discount. I just had a read about Unraid and parity: if I got 2 12TB drives and made 1 of them parity, I could later add more drives down the line (up to a max of 12TB each, since data drives cannot be larger than parity), and if any one of them failed the data would be safe?

That's correct. However, you should consider a second parity drive in the long term, because it's not uncommon for the parity drive to fail during a rebuild, since both drives are identical in age. The whole process takes a while to complete, so during that window you are vulnerable to another disk failure.
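The single-parity idea itself is just XOR across the data disks, which is easy to demo (toy byte strings standing in for disk blocks):

```python
# Minimal demo of single-parity reconstruction (XOR), the same idea
# behind Unraid's parity disk: parity = d1 ^ d2 ^ ..., so any ONE
# lost disk can be rebuilt from the survivors.

data_disks = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data_disks)

# Disk 1 "fails"; rebuild it from parity plus the remaining disks.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt == data_disks[1])  # -> True
```

Note this only recovers one missing disk at a time, which is exactly why a second failure mid-rebuild is fatal.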

Also, bit rot is a real issue, so while the parity might be there, that doesn't mean the data is correct. Unraid typically writes over parity first, trusting the data on disk implicitly. This can lead to incorrect parity data in the long term; rare, but not impossible.
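Detecting bit rot therefore needs checksums on the data itself, separate from parity. A minimal sketch of the idea (the file contents here are made up for illustration):

```python
# Sketch: detect silent corruption by comparing stored checksums,
# since parity alone can't tell you which copy of the data is right.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At backup time, record a checksum per file.
original = b"family-photos.tar"
recorded = sha256_of(original)

# Later, a silently corrupted copy of the same file...
corrupted = b"family-phptos.tar"

# ...is caught by a periodic scrub that re-hashes and compares.
print(sha256_of(corrupted) == recorded)  # -> False
```

This is what ZFS does automatically with per-block checksums, and why a checksummed backup target is a good complement to an Unraid array.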

Ultimately a backup of your data is better than relying on the array to maintain it. “RAID is not a backup” still holds true for unraid.
