New member here, seeking a little advice from the community. I'm currently running a headless Windows machine (literally inside a closet) for my file keeping/sharing needs and also some gaming (via Steam In-Home Streaming). The current setup is an i5 6400T on an H170 board, 16 GB of RAM and a GTX 960 2GB. I'm also using two 256 GB SSDs for OS and games, and a Windows storage pool of three random 1 TB HDDs on parity (similar to RAID 5) for my storage needs.
I was really content with the thermals and power consumption of this build, but it was lacking in performance, so I just purchased a Ryzen 5 1600 and the Gigabyte AB350 Gaming mobo. I'm also in the market for two 4 TB HDDs. So, the final setup will have three disk pools: first the two SSDs, then two 1 TB HDDs in RAID 1, and two 4 TB HDDs also in RAID 1, for a total of 5.5 TB of "redundant" storage.
The question is: should I stick with Windows as the bare-metal OS and use Storage Spaces with the problems I already experience (slow transfer speeds, lack of password protection without an account, corrupted files, etc.), or use the extra cores of the Ryzen CPU and try virtualisation through unRAID or something similar?
A couple of things to consider:
The "server" will not have to be always on; I use WOL when I want to browse the SMB share or stream a game.
Power consumption and thermal output are both limiting factors (electricity is expensive and the closet is poorly ventilated).
I have almost 1 TB of important files that must be protected; everything else I don't mind losing.
I'm currently running a headless unRAID server with GPU passthrough to a Windows VM for Steam In-Home Streaming, and I like it very much.
I have Plex and TeamSpeak dockers for media streaming and gaming voice comms, along with various file shares.
I used to have an Ubuntu VM set up for testing and for some specific services I didn't bother making a docker for, but I have since shut it down to save on resources.
Specs are an i5 4590S with a 1050 Ti. I have dedicated 3 cores to the Windows VM along with the GPU, and the games I typically run stream at 60 FPS at 1080p just fine. Power draw is reasonable and, more importantly to me, heat generation is very low, which maximizes drive life for my array.
That AMD CPU will do well, with many cores to devote to VMs.
I can't speak to power usage over time, as I always leave my system running for instant responses and power is cheap here.
unRAID does seem to be very easy to use and set up. The last time I looked at it (a while back), I was a little wary of their custom drive-pooling setup. It seemed geared more toward VM speed than a traditional RAID setup with redundancy.
unRAID supports up to two parity disks per array for redundancy, just like a traditional RAID setup. The additional benefit is that if you lose more disks than your parity redundancy can handle, the surviving data on the other drives is still accessible, which is better than you get with a traditional RAID.
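To make the single-parity idea concrete: parity is just a XOR across the data disks, so any one missing disk can be rebuilt from the survivors plus parity. A toy sketch in bash (the byte values are arbitrary stand-ins for disk blocks):

```shell
#!/usr/bin/env bash
# Three "data disks", one byte each (arbitrary example values).
a=165; b=60; c=126

# Parity disk: XOR of all the data bytes.
p=$(( a ^ b ^ c ))

# Disk b dies. Rebuild its contents from the survivors plus parity.
b_rebuilt=$(( a ^ c ^ p ))

echo "$b_rebuilt"   # prints 60, identical to the lost byte
```

Lose two disks and this math no longer has enough information, which is where the second parity disk comes in.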
Still, parity is not backup, and I recommend doing that as well regardless of which system you're using.
You could also run Linux (I'm thinking Fedora Server or CentOS in this situation, with Cockpit if you're not a huge fan of the shell) and MD RAID with BTRFS on top. At a later date you can install oVirt or KVM and run virtual machines on the same box. This is how my home NAS is set up (minus oVirt/KVM, as I have an ESXi cluster handling my virtualisation needs).
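For the MD-RAID-plus-BTRFS route, the rough shape is something like this. This is a sketch, not a tested recipe: the device names are placeholders for your actual drives, and the mount point is made up.

```shell
# Build a RAID1 md device from two drives (replace sdX/sdY with your disks).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# Put BTRFS on top and mount it.
mkfs.btrfs /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Record the array so it assembles at boot (path is /etc/mdadm/mdadm.conf on Debian-family).
mdadm --detail --scan >> /etc/mdadm.conf
```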
The other alternative would be FreeNAS. However, as brilliant as ZFS is, it has annoying limitations when it comes to growing a RAID-Z vdev as your media collection grows.
unRAID is based on Linux and most of the hard work has been done for you, but of course you have more flexibility rolling your own.
unRAID supports BTRFS as well.
This build (minus the GPU and SSDs) was originally a FreeNAS box. I did set it up and was very happy performance-wise (100 MB/s write speeds). However, using a UPS that lets me monitor power usage, I noticed it was idling at 40 watts even with the drives not spinning. That adds up to a substantial cost on my electricity bill, so I switched away from FreeNAS for that reason. That said, it would be great to see FreeNAS support VMs with GPU passthrough in the future, so that one could get the best of both worlds (ZFS plus a Windows VM for gaming).
I wonder, is unRAID optimised power-wise?
There is an initial release of ZFS on Windows. You could try it and report back. Also, don't forget backups.
I'm in almost the same boat! Turns out my Server 2012 R2 installation is getting kinda meh for what I need. I'm currently running 4x4 TB in a Storage Spaces "RAID 10"-ish config.
Now I'm also contemplating whether going full-on virtualisation or sticking with Windows Server is the thing to do.
I'm personally leaning towards Proxmox with a striped-mirror ZFS raid for the 4x4 TB pool, and then running the rest of my VMs and stuff on my other drives.
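A striped mirror (RAID-10-style) for those 4x4 TB drives is two mirrored pairs striped together, giving roughly 8 TB usable. A hedged sketch of what that looks like from Proxmox's shell (pool name and disk IDs are placeholders):

```shell
# Two mirrored vdevs, striped together by the pool: ~8 TB usable from 4x4 TB.
# Using /dev/disk/by-id names keeps the pool stable if drive letters shuffle.
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Verify the layout.
zpool status tank
```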
I'd say try the same: Proxmox or ESXi. With ZFS it may turn out to be a lot better than the shitty Storage Spaces.
I'd go with something cheap and functional like Ubuntu Server; it can do pretty much everything unRAID can do, and it's free. Not to mention SSH is just so much better than whatever M$ thinks they can call their remote desktop.
Get some ZFS up and running, plus KVM passthrough/VMs/whatever your use case calls for (remember, Google's your friend here; it gets dark and weird at times).
Windows can't hold a candle to any of this.
Basically everything you've got at the moment can be improved by far by making a ZFS raid pool, adding an SMB share, and whichever services you've got on your network.
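On Ubuntu, getting from bare drives to a ZFS-backed SMB share is only a handful of commands. A hedged sketch (pool name, mount point, device names, and user are all made up; adapt to your drives):

```shell
# Install ZFS and Samba.
apt install zfsutils-linux samba

# RAIDZ1 pool across three drives: single parity, much like the old
# Storage Spaces parity pool, mounted at /srv/storage.
zpool create -m /srv/storage storage raidz /dev/sdb /dev/sdc /dev/sdd

# Minimal Samba share pointing at the pool's mountpoint,
# password-protected per user (unlike an open Storage Spaces share).
cat >> /etc/samba/smb.conf <<'EOF'
[storage]
   path = /srv/storage
   valid users = youruser
   read only = no
EOF
smbpasswd -a youruser
systemctl restart smbd
```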
Beware, though: memory consumption on ZFS is a bit harsh (basically it uses whatever it is allowed to when reading/writing).
The rule of thumb is ~1 GB per TB, but it goes up by a lot when doing I/O on the partitions.
Currently for storage I'm running an Ubuntu MATE machine with ZFS (~8 TB) and a desktop as a workstation. When I'm copying to the SMB shares it takes upwards of 17–19 GB of RAM, but it's a 32 GB desktop so it'll do.
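That RAM is mostly the ARC (ZFS's read cache), and it can be capped if the box has other jobs. A sketch for ZFS on Linux, capping the ARC at 4 GiB (the value is in bytes; pick a size that fits your machine):

```shell
# Persist the cap across reboots via a module option.
# 4 GiB = 4 * 1024^3 = 4294967296 bytes.
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# Apply it immediately on the running system.
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```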
If you're using VMs, though, ZFS is just beyond golden, since the filesystem can cancel out all the zeros in your VM hard-drive image by just slapping on some very light compression. We're talking 5–10% of a single core when actively writing to the partition, and it can turn a 250 GB hard-drive image into a 20 GB file at pretty much no cost, especially with a Ryzen CPU.
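Turning that on is a single property on the dataset; lz4 is the usual near-free choice, and `compressratio` shows what you're getting back. (The dataset name here is a placeholder for wherever your VM images live.)

```shell
# Enable lightweight compression on the dataset holding VM images.
# Zero-filled blocks compress to almost nothing.
zfs set compression=lz4 tank/vms

# Only blocks written after enabling it are compressed;
# check the achieved ratio once data has been rewritten.
zfs get compression,compressratio tank/vms
```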
Yes, but not in the proper sense: it makes a separate BTRFS filesystem for each disk. You're better off with XFS/ext4 in that situation.
True, but that's why they call it "unRAID". The way they do it provides much more flexibility for future expansion and a little more data security, but it's a design choice. Like anything, it has its ups and downs.
Yeah, I never said it wasn't okay; I'm just saying you shouldn't use BTRFS in this situation.
I finally sourced and put together all the parts (I will do a build log). I am using Windows Storage Spaces (again?) for now, until I come up with something better. I combined the two 1 TB disks in RAID 0 (SMB share) and the two new 4 TB WD Reds in RAID 1 (backup). I am also using a robocopy script via Task Scheduler for nightly backups of the important stuff on the SMB share to the RAID 1. The performance issues (slow writes) seem mostly eliminated: I am managing well above 100 MB/s writes on the RAID 0 array, then backing the files up every night to the "redundant" 4 TB space. I am still using NTFS, though (with all its drawbacks). Thoughts?
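For reference, that nightly job can be a one-liner. A hedged sketch of the kind of robocopy command the Task Scheduler task would run (the drive letters and paths are made up; note that `/MIR` also deletes files on the destination that disappeared from the source, so point it at a dedicated backup folder):

```shell
:: nightly-backup.cmd -- mirror the SMB share (RAID 0) to the redundant 4 TB space.
:: /MIR      mirror mode: copy new/changed files, delete removed ones (destructive on target!)
:: /R:1 /W:5 one retry with a 5 s wait, so a locked file doesn't stall the whole run
:: /LOG+     append to a log file so failed nights are visible afterwards
robocopy D:\share E:\backup\share /MIR /R:1 /W:5 /LOG+:C:\logs\backup.log
```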