So this is a placeholder for now, as I will be working on this in the AM, I believe.
After watching @Wendell's video reviewing another NAS case and his mention of the optimal NAS layout, I started pondering… my wife limits my purchases each month… so $100 to $400 for a small-form-factor, NAS-specific case is a bit steep for me. I realize those come with backplanes for hot swaps etc., but still…
SOOOOO I am going to attempt to make an ITX NAS from a Cougar QBX and a few Phanteks drive cages. The case is a relic from my ITX builds, but not too pricey, and the cages seem malleable enough for modification, so I will be posting pictures and my experiences here. Seeing as this build will be based on a Ryzen 2700, it will probably run Proxmox and maybe TrueNAS… not sure yet. Probably some variable bastardization of the forbidden server build. I'd like it to be an automatic backup location (as a test and to learn how to set this up) for home server files, a Steam cache (close to 14TB or more), and maybe a place to test VMs and such before moving them to "production" on the home server.
Well, enough ranting; hopefully tomorrow I'll have more information and more ranting.
OK, those are nice. I guess I missed that part lol. Still, because I have it on hand, I am going to mess with it lol. I still have the Dell T420, but that seems excessive at this point, at least for now, until I can get a hard line in the house… maybe later this year.
So messing with the spare parts could be fun. And it's smallish, still with plenty of cooling. The SFX PSU should give me some room, and the case has a unique removable "plate" as part of the chassis that could prove an interesting mounting option. BUT, as the wise @wendell said, most ITX boards (at least consumer ones) only have 4 SATA ports. I hope to try an NVMe adapter OR have enough room for a PCIe card once everything is mounted. I was hoping to keep the PCIe slot open for a 10G NIC, OR I have a plan to use the inner/outer shell (between the case cover and the motherboard mount, which has a small open space) as a mount for the card via a PCIe extension cable… Lots of ideas; we'll see how it goes.
The Cougar QBX case is only 30-70 USD depending on the source… so still economical, and the Phanteks HDD cages were only 7-9 USD per pair and have enough mounting flexibility that it should be a fun build.
Right now it's going to have a Ryzen 7 2700 CPU with the AMD Stealth cooler, an ASUS ROG STRIX X470-I motherboard, 32GB of G.Skill 3600 RAM, a SilverStone SX500 Gold SFX PSU, and of course the Cougar QBX case.
How many HDDs are really necessary for a home NAS? The reason I ask is that capacity is going up, and assuming you are building something for average home use, you want noise and power draw at a minimum, which means the fewer drives the better.
Currently my home server has 6 drives: 3 HDDs and 3 SSDs. Realistically I could cut that down to 3 if I wanted to min/max: 2 high-capacity HDDs mirrored and one SSD for the boot drive and VM/container disks.
I'd recommend a minimum of 4 in a RAID6. This way, you can lose any 2 drives before losing data. Yes, a RAID10 might be "faster", but lose the wrong 2 drives and your data is gone. I'm currently working on getting a RAID6 NVMe array on an AliExpress adapter board in a PCIe x16 slot. Gonna be EPYC (an EPYC 7551P CPU, to be precise).
Alternatively, the price of flash storage has been dropping like a brick recently: a 1TB PCIe Gen 3 DRAM-less NVMe drive is about $50 or less, 2TB is about $100 (US!), and I've seen 4TB drives for about $200. Put one as a cache in front of your HDD RAID6 and enjoy crazy write speeds while still securing data redundantly once the cache has flushed to the array.
I'm shooting for 6 HDDs, either RAID 10 (probably not, since this isn't a heavily used data set) or more likely RAIDZ1, so I get the most storage capacity; the data is already backed up remotely, locally, and via an SSD in a fire safe for the vital information. This is a redundant backup and experimentation platform. Yes, you probably don't NEED 100TB+, but NEED is relative lol. I know my data hoard is close to 40TB, give or take a TB. Quick capacity sketch below.
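For anyone following along, here's a rough back-of-the-envelope comparison of usable space for those 6-drive layouts. The per-drive size is just a placeholder (I haven't settled on drives yet), and this ignores ZFS overhead and free-space headroom:

```python
# Back-of-the-envelope usable capacity for a 6-drive pool.
# Per-drive size is a placeholder assumption; ZFS overhead is ignored.
drives = 6
size_tb = 12  # hypothetical per-drive capacity in TB

layouts = {
    "striped mirrors (RAID10-style)": (drives // 2) * size_tb,  # half the raw space
    "RAIDZ1 (single parity)": (drives - 1) * size_tb,           # loses one drive to parity
    "RAIDZ2 (double parity)": (drives - 2) * size_tb,           # loses two drives to parity
}

raw = drives * size_tb
for name, usable in layouts.items():
    print(f"{name}: ~{usable} TB usable of {raw} TB raw")
```

With 40TB of data to hold plus room to grow, that's roughly the trade-off I'm weighing between the mirror and RAIDZ options.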
I plan to experiment with rsync and cron jobs just to see if I can get it working properly. Also a Steam/Origin/EA/Rockstar cache locally, for funsies. And I want to try saturating my 10G NICs with SSDs, so that would be a modification I might add as well.
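Nothing fancy planned on the rsync side; something like this minimal sketch, kicked off by a nightly cron entry, is the idea. The source/destination paths and log location are placeholders since my shares aren't laid out yet:

```python
#!/usr/bin/env python3
"""Minimal rsync wrapper meant to be called from cron (all paths are placeholders)."""
import datetime
import subprocess

SRC = "/mnt/homeserver/files/"    # hypothetical source share; trailing slash = copy contents
DST = "/tank/backups/homeserver"  # hypothetical dataset on the NAS
LOG = "/var/log/nas-backup.log"

def run_backup() -> int:
    """Run one rsync pass and append a timestamped summary to the log."""
    result = subprocess.run(
        ["rsync", "-a", "--delete", "--stats", SRC, DST],
        capture_output=True, text=True,
    )
    with open(LOG, "a") as log:
        log.write(f"\n=== {datetime.datetime.now().isoformat()} rc={result.returncode} ===\n")
        log.write(result.stdout)
        if result.returncode != 0:
            log.write(result.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_backup())
```

Then a crontab line along the lines of `0 3 * * * /usr/local/bin/nas_backup.py` would run it at 3 AM every night. We'll see how it goes once the pool exists.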
So this is more a test of the case's compatibility, and the system itself will be my test bed so I don't have to constantly hear "Where are my files???" from across the house. My redundant backup used to be a WD My Cloud EX2… but they stopped supporting it, and I couldn't run VMs on it, so this expands what I can do for fun. I went down a rabbit hole of playing a plethora of games on the new CPU (7900X3D) and GPU (RX 6950 XT) and overclocking with Curve Optimizer, etc… but I digress… What I mean to say is I'm back after a small hiatus. Missed ya all and look forward to more good times.
I'd love to do 3D prints, but the investment is a little beyond my budget… for now waaahaaa haaa
AND I'll also be fighting with Corsair over a PSU issue I discovered after my new build, which makes me wonder if it is the core reason my 15-amp arc-fault breaker keeps randomly tripping for my office (I still annoy the wife with the extension cable to the bathroom next to my office lol).
And, I’m youngish but retired so I have nothing else to do lol.
You the man… I really appreciate that, sir. I honestly do. I think I have something working. I could only do four drives so far… BUT I made it work… Pictures and testing to follow. Glad I have all these parts…
I’d say that if losing the wrong two drives is a disaster for you, your backup strategy needs looking at.
Still… anecdotally…
I've been running a home NAS on dual ZFS mirrors for 11 years. I have had two drive failures in total in that time and lost nothing (all drives have been replaced 3 times).
I’ve run enterprise arrays at work for 15 years and never seen a double disk failure.
Not saying it can’t happen. Just saying it’s more likely you will lose your data due to theft, human error or house fire and you should take that into consideration rather than obsessing too much over multi drive failure.
Protecting against those higher probability events with an off site backup of your critical stuff will cover you for double disk failure as well.
The above assumes that you replace failed drives reasonably promptly, of course. Left alone, every array will eventually fail.
My average time to replace a failed drive over that period is 3-7 days (though the last failing drive in one side of my home ZFS striped mirror was left continually resilvering for a month until that drive dropped out entirely).
Also, don't make dumb drive choices: use drives that are suited to RAID, and use ZFS if you can to reduce rebuild times on slightly flaky hardware.
Also make sure you have alerts via email (or whatever) for drive failures. A resilient system that keeps running through a fault is useless unless you actually notice and resolve the fault when it happens.
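If your NAS OS doesn't already handle this for you (TrueNAS and Proxmox both have built-in email alerting), even a dumb cron job that checks `zpool status -x` and mails you when something looks off goes a long way. Rough sketch only; the SMTP relay and addresses below are placeholders:

```python
#!/usr/bin/env python3
"""Tiny ZFS health check for cron: mails the output of `zpool status -x`
if anything looks unhealthy. SMTP details are placeholders."""
import smtplib
import subprocess
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"  # placeholder mail relay
MAIL_FROM = "nas@example.com"   # placeholder sender
MAIL_TO = "you@example.com"     # placeholder recipient

def check_pools() -> None:
    # `zpool status -x` only reports pools with problems; when everything is
    # fine it prints the single line "all pools are healthy".
    status = subprocess.run(["zpool", "status", "-x"],
                            capture_output=True, text=True).stdout
    if status.strip() == "all pools are healthy":
        return
    msg = EmailMessage()
    msg["Subject"] = "NAS alert: ZFS pool needs attention"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(status)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    check_pools()
```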
Well, the project is up and running. Interesting side note: I can confirm that the ASUS ROG STRIX X470-I will run and boot without a GPU in the system with a Ryzen 7 2700 CPU. I used my cheap GT 710 to install Proxmox graphically and then removed it. So I am excited. I have Proxmox installed, as I am more familiar with it than anything else. It runs on the Samsung 980 NVMe SSD on the back of the motherboard, leaving the front slot open for expansion to 6 HDDs total… I hope… Here's what we have so far…
Here is the case and parts I will be working with…for now…
Here is that removable side panel I mentioned earlier, in the up position… I had planned to mount the drives to this when another idea hit me, seen below…
This configuration would have worked well, I'm sure, until I had another idea and epiphany: the PSU has a pigtail inside, so why is it stuck in that one location? I'm going to re-mount it elsewhere to make more room for drives!!! See below…
So for now I decided to use just 4 drives, mounted vertically for nice top-to-bottom airflow. When I get the NVMe-to-SATA adapter I will probably mount them stacked one on top of the other and put a fan up front in the case. I used the air holes in the back of the case to screw into the drive cage holes that hold the drives in place, so I didn't have to drill anything. Luckily, the space for the PSU was perfectly aligned to access the rear of the drives to connect them to the motherboard! What luck, woot woot LOL
Here is everything all together and it WORKS, yahoo… now for VM installs and possibly adding the 10G NIC. There is roughly an inch (25mm) between the back of the PSU and the Stealth cooler. I mounted the PSU with its intake fan facing out to draw in fresh air and exhaust it out the top. You can't see it, but I mounted a 120mm fan directly above the CPU and PSU to vent out the top of the case, and I have two 120mm fans in the bottom pushing air up. The sides of the case are mesh, so it should breathe fine.
We have a 12’ x 14’ walk-in with shelves and clothes racks. I have roughly 8" of that space to hang my trousers. Shirts are in another closet. My wife told me last week I needed to get rid of the trousers I was not wearing because “you are cluttering up the closet and I need some space.”
LOL, you feel my pain… Luckily I keep the office closet so "messy" she won't touch it lol. Only way I get closet space. She has filled two closets, and I get my 1 foot of a huge walk-in like you said, and she even has all the dog's clothes in another closet. And the linen closets (all 3) are hers as well. Such is married life.
And I made a minor boo-boo… I set the static IP for the Proxmox server to the same address as the TrueNAS VM, so I couldn't reach either after a little while… always the small things. That's why I keep copious notes.
Oh, and I somehow TOTALLY forgot to enable CPU virtualization (SVM) in the BIOS… wow, big miss there lol.
No big deal; if money were no object I'd happily do the same, or have a rack in my closet, but I'm just working with what I have. It's all still way more than I need, as I am just learning networking and messing around at home as a hobby.
I have often thought it would be nice to design a case with Protocase (or someone else) that uses Supermicro backplane parts, which are usually readily available and hot-swappable from generation to generation, and just change the case layout to make it more compact and accept regular ATX boards and power supplies… Food for thought for the community…