I’m looking into building my first “real” server and have been running through a crash course on server hardware. My goals are to run unRAID or the like, along with pfSense, FreeNAS, an encoding server, a Plex server, and possibly a TeamSpeak and/or game servers for LANs. I won’t be using Plex or the encoding or TS/gaming servers much at all, so they’re not a 24/7 operation, and I’m mainly the only person using them. I plan on using RAIDZ2 (like RAID 6), so 5 drives, and I have a 120GB SSD I can throw in as a cache drive (I really want to use it as the dumping point for incoming data before passing it on to the array). I also want link aggregation for at least 2Gb/s of bandwidth to the switch, which I need to pick out as well.
The hardware I’m currently looking at is:
- 1 or 2 Xeon E5-2670s ($60 each, so why not two? lol)
- 32GB of DDR3 ECC RDIMMs (following the 1GB RAM per 1TB rule, as I plan on having a large data pool, over 15TB eventually)
- SuperMicro MBD-X9DRL-IF-B ATX. This is just a dual-socket ATX board I found that's in stock and has the features I require, but I'd love to get a motherboard with more memory slots or more gigabit ports.
- Says it's 12x10" ATX; I have an NZXT Source 210 lying around.
- Wanting a redundant/failover PSU unless they're too expensive; it will be running on a UPS. Haven't checked prices yet.
So my questions are as follows:
- How much hardware power do you think I actually need? Is this overkill? The 2670s are $60, aka dirt cheap, and the dual-socket motherboards are the same price as the single-socket ones, so right now it's "why not???".
- Can the 4 SCU SATA ports be used in conjunction with the AHCI SATA ports? i.e., a RAIDZ2 pool with 3 drives on the AHCI controller and 2 drives on the SCU. I read that the SCU just uses 4 of the CPU's PCIe lanes, and I'm not sure whether its ports are viewed differently/separately from the AHCI ones.
- Is anybody familiar with how many resources these applications can use? For video transcoding with Plex it would be 1 movie at a time, and video encoding would be done at a separate time. Windell talked about how he had 4 tablet streams running on an 8-core Atom with no issues.
- I’m thinking RAIDZ2, but do I really need two drives of redundancy? I keep hearing that when a drive dies and the replacement is rebuilding, another drive often dies. Why is this? Is this usually just in enterprise servers, where the drives are used more heavily than my home drives would be?
Thanks for any information you can provide.
This is more of an issue with RAID, where if one block is unreadable during a rebuild the whole array fails. I'm not sure if ZFS works the same way or not. But essentially, past a certain amount of data (I think it's about 6TB or so), the chance of hitting at least one unrecoverable block during a rebuild approaches 50%, which means RAID 5 is a bad idea for that amount of data.
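That "about 50%" figure can be sanity-checked with some back-of-envelope math. This sketch assumes the commonly quoted consumer-drive unrecoverable read error (URE) rate of 1 per 10^14 bits; real drives vary, so treat the numbers as rough:

```python
def rebuild_failure_prob(data_tb, ure_rate_bits=1e14):
    """Probability of hitting at least one unrecoverable read error
    while reading the surviving data during a RAID 5 rebuild.
    Assumes the often-quoted consumer URE rate of 1 per 1e14 bits."""
    bits_read = data_tb * 1e12 * 8          # TB -> bytes -> bits
    p_ok_per_bit = 1 - 1 / ure_rate_bits    # chance a single bit reads fine
    return 1 - p_ok_per_bit ** bits_read    # chance at least one bit fails

# Reading ~6TB during a rebuild gives roughly a 40% chance of a URE,
# in the same ballpark as the "about 50%" figure above.
print(round(rebuild_failure_prob(6), 2))
# At 12TB it climbs past 60%.
print(round(rebuild_failure_prob(12), 2))
```

With two parity drives (RAID 6 / RAIDZ2), a single URE during a rebuild can still be reconstructed from the remaining parity, which is the main argument for the second redundant drive.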
Will you actually be transcoding? If this is running on your local network rather than over the internet, there's no reason not to stream the file directly instead of transcoding it into a different format and bitrate, unless the device you're using to watch it doesn't support the format. At any rate, you're going to have plenty of power to transcode media with Plex; it doesn't take much.
I don't think it's a good idea to run pfSense and FreeNAS as VMs, but you can probably get it working. I don't know how well pfSense will work on whatever unRAID uses for VMs, as it lacks support for a lot of the drivers used by most hypervisors. I know it works well with VMware, but other than that I've heard mixed experiences. With FreeNAS you need to be able to pass the SATA controller through to the VM for it to work properly.
If I were going to do it, I would have the storage handled by the host system rather than in a VM. unRAID uses btrfs, which I prefer over ZFS, so if I were you I'd use unRAID for your storage rather than running a VM for FreeNAS. I would also run pfSense on a separate machine, but assuming it runs okay as a VM it should be fine. There are some security concerns as well as possible performance issues, but those don't really apply in a home environment.
So it is just a RAID issue and not a universal striping/parity issue? I figured it was a RAID controller issue but wanted to make sure.
I already have a pfSense box and I think I'm just going to keep that. I just wanted to see if I could condense to one PC to save space/power. I knew about the security issues but am not too worried.
The transcoding would be for my Wii U/PS4/phone; the Onkyo receiver I'm going to buy can support everything by itself. Basically, I wouldn't use the transcoding much at all, honestly. I would, however, like to send video clips of my gaming to the server to convert to the format I want for YouTube. I've never looked into server-side encoding before, but I've heard it's possible with FreeNAS.
Really, I'm starting to think just one 2670 would do fine and would keep my power bill lower, but two would be really fun lol.
It's definitely an issue with RAID. If ZFS can recover even if there are unrecoverable blocks then it's not a problem with ZFS but RAID will fail if there are any unrecoverable blocks and no parity data available.
Sorry, I meant that another drive tends to die while rebuilding after a previous failure, hence the reason to run RAID 6 instead of 5. Like, a drive dies so you pop a new one in, and while rebuilding onto that drive another drive dies. I've seen people mention this. I didn't know if it was due to the drives being old or some sort of RAID-controller-induced death.
I think there are a number of reasons why a second drive may be more likely to die when rebuilding after the first failure.
1. A rebuild tends to put more stress on the disks than a normal workload.
2. If you bought disks in bulk, they may share similar defects. Whatever caused the first drive to fail in your system may cause another to fail.
The full load from reading during the rebuild was the only reason I could think of, most likely combined with the drives all being old/bought at the same time.
I'm really torn. Do you think I should start with 3 x 3TB HGST Deskstar drives in RAIDZ1 (like RAID 5)? When I add a 4th drive, do you suggest RAIDZ2 (like RAID 6) for two drives of parity, or do you think that's overkill for a server that won't be running at full load 24/7?
That's what I mean: during a rebuild you are reading every block in the array. Because the number of blocks is very high, the chance of at least one block being unrecoverable becomes almost a certainty. That is why RAID 5 has a high chance of failing during a rebuild and why it's not recommended past a certain amount of data (which I think is 6TB). But I'm not sure if ZFS has that problem; it may be able to continue recovering the array after an unrecoverable block, and you just lose the data associated with that block.
I'd still suggest using btrfs on unRAID rather than ZFS on a virtualized FreeNAS.
Dang.
Well, I like the features of ZFS. I haven't looked into btrfs, but in one of Tek's vids it sounded like ZFS had the same options as btrfs and more. I do believe that ZFS can skip bad blocks, unlike RAID. This was one of the features that stood out to me (and always made me wonder why RAID didn't do it).
Well, this raises some issues. Is there no way to make one pool larger than 6TB that's safe? This could influence what drives I get; I was about to order 3 x 3TB HGSTs. If all else fails, I guess I could just do a triple mirror -.-
Or is there a way to slow down the rebuild process that wouldn't stress the drives too much?
Well, I found these benchmarks, and it's pretty crazy what compression can do to transfer speeds. I don't have to worry about that anymore. I'm thinking about sticking with 3TB HGSTs and just running RAIDZ2 (RAID 6), and eventually I may add another drive for RAIDZ3.
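For anyone wondering why compression helps transfer speeds at all: with transparent compression (e.g. ZFS's lz4), the disks only move the compressed bytes, so for compressible data the logical throughput the application sees scales with the compression ratio. A tiny illustration (the numbers are made up, not benchmarks):

```python
def effective_throughput(disk_mbps, compression_ratio):
    """Logical (application-visible) throughput when the filesystem
    compresses data transparently: the disks read/write fewer physical
    bytes per logical byte, so throughput scales with the ratio.
    Assumes CPU is not the bottleneck (usually true for lz4)."""
    return disk_mbps * compression_ratio

# A pool that manages 400 MB/s of raw disk throughput with a 1.5x
# compression ratio delivers roughly 600 MB/s of logical data.
print(effective_throughput(400, 1.5))  # 600.0
```

Incompressible data (video files, in particular) has a ratio near 1.0, so media archives see little benefit, but lz4 is cheap enough that it's usually left on anyway.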
Expansion of ZFS doesn't quite work like that. To expand a ZFS volume you need to add a number of disks equal to that of the original array. So to expand a RAIDZ1 of 3 disks, you would need to add 3 more disks as a second RAIDZ1 vdev (effectively a stripe of two RAIDZ1s, where each vdev can lose one disk). To move to RAIDZ2 you would need to back up all the data, rebuild the array as RAIDZ2, and copy the data back on. You cannot "just add another disk", unfortunately.
You can, however, add disks to a btrfs array; you can even change RAID level if you have enough free space (and time). But if you really want to use ZFS, I would suggest running ZFS on Linux and using that as the host system for your VMs. While it's probably possible to run FreeNAS successfully as a VM, I think it will be more trouble than it's worth, and having your storage on the host system means you can give more space to your VMs without having to use iSCSI or something like that.
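The capacity math behind "add a whole vdev, not a disk" can be sketched like this (overhead from metadata and slop space is ignored, so real usable figures come out somewhat lower):

```python
def raidz_capacity(disks, disk_tb, parity):
    """Usable capacity of a single RAIDZ vdev: data disks times disk size.
    Ignores metadata/slop overhead, so this is an upper bound."""
    return (disks - parity) * disk_tb

# A 3 x 3TB RAIDZ1 vdev: 2 data disks -> 6TB usable.
pool_tb = raidz_capacity(3, 3, 1)
print(pool_tb)       # 6

# Expanding means striping in a second whole RAIDZ1 vdev of 3 disks,
# not attaching one disk to the existing vdev.
pool_tb += raidz_capacity(3, 3, 1)
print(pool_tb)       # 12
```

Each vdev keeps its own parity, so the striped pool can survive one failure per vdev, but two failures inside the same vdev still lose the pool.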
Oh, I watched a video where a guy was talking about the features of ZFS and said that if you added a drive it could just add it and grow to the new size.
The only way in which an array can grow is if you replace the disks.
So say you have an array of three 2TB disks and one dies. You can replace it with any disk equal to or larger than 2TB, for example a 3TB. It will act like a 2TB up until you replace the other disks to also be 3TB; then, magically, the array will grow in size.
To expand existing systems, there is that one pitfall where you simply can't just add a drive.
As @Dexter_Kane said, btrfs can do this.
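The replace-and-grow behaviour described above comes down to the vdev being limited by its smallest member. A quick sketch (again ignoring metadata overhead):

```python
def vdev_capacity(disk_sizes_tb, parity):
    """A RAIDZ vdev is limited by its smallest member: every disk is
    treated as if it were min(sizes), so a single bigger replacement
    adds nothing until all members have been upgraded."""
    n = len(disk_sizes_tb)
    return (n - parity) * min(disk_sizes_tb)

print(vdev_capacity([2, 2, 2], 1))  # 4 TB usable
print(vdev_capacity([2, 2, 3], 1))  # still 4: the 3TB acts like a 2TB
print(vdev_capacity([3, 3, 3], 1))  # 6: grows once every disk is replaced
```

This is also why replacing disks one at a time (resilvering between each swap) is a slow but workable upgrade path when you can't rebuild the pool from scratch.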
I really want to use FreeNAS for ZFS, but I'm going to check out btrfs.
If that's the case, you're probably better off using either the ZFS or btrfs equivalent of RAID 1 rather than RAID 6.
Well, it looks like I'm going to go with 5 x 4TB HGSTs in RAIDZ2. This way I have 12TB of usable space and 2-drive redundancy. I'm also going to go with 64GB of RAM as I might run deduplication; I might also use the SSD for the ZIL, IDK.
I think I'm going to run unRAID with FreeNAS with a Plex plugin, and then a Windows VM. I'll add a GPU so I can use it for gaming when I need a third PC, and I can run game servers off of it.
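For anyone checking the numbers in that plan: the usable space works out as stated, and the 64GB of RAM is in line with the rule of thumb often quoted for ZFS dedup (roughly 5GB of RAM per TB of deduplicated data; the real figure depends on record size, so this is a ballpark only):

```python
def raidz2_usable(disks, disk_tb):
    """Usable space in a RAIDZ2 vdev (two parity disks), before
    metadata/slop overhead."""
    return (disks - 2) * disk_tb

def dedup_ram_gb(pool_tb, gb_per_tb=5):
    """Rough rule of thumb for ZFS deduplication table memory:
    ~5GB RAM per TB of deduped data. Highly workload-dependent;
    treat as a ballpark, not a guarantee."""
    return pool_tb * gb_per_tb

print(raidz2_usable(5, 4))  # 12 TB, matching the plan above
print(dedup_ram_gb(12))     # ~60 GB: why 64GB of RAM isn't overkill here
```

Worth noting that dedup rarely pays off for media libraries (video barely deduplicates), so it may be safer to skip it and let the extra RAM go to ARC instead.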
I would advise against 5 disks with RAIDZ2. I don't remember the ins and outs, but basically ZFS likes to end up with an even number of disks after you remove the parity drives. So for RAIDZ1 and RAIDZ3, odd numbers of disks, and for RAIDZ2, even numbers.
Also... I am a bit confused by your post. Are you suggesting you are going to run FreeNAS as a VM within unRAID, or unRAID within FreeNAS? There are a hundred and one reasons not to virtualise FreeNAS (take a quick google).
I also do not believe that the virtualizer in FreeNAS can handle hardware passthrough (but you may want to check that).
If your plan is to use virtualisation, you may be better off going with Proxmox (as it can use ZFS filesystems) and setting up a server with a share for disks. This way you would also be able to have your Windows VM with GPU passthrough. The one caveat is that it would be ZFS on Linux, not FreeBSD.