NAS Hardware Advice Needed

These parts come in under $200. I’d need a case, power supply, cooler, and drives. Thoughts on this? It’s overkill for what I want in a NAS (mostly backups and media). I’ve already got a Proxmox server for VMs/labs. Any suggestions for a case that can hold 4 drives?

Either I go that route, or I go with this:

I guess I’ll need a GPU. I’ve got an older GTX 970 lying around, but I’m not sure if I should go with something that draws less power.

For the parts you listed, if you don’t mind the size, I love my Antec P101 Silent (it can house 8 drives and was initially supposed to be my NAS and hypervisor build). I don’t like its weight, but that’s because of the Threadripper 1950X and Noctua NH-U14S TR4-SP3 (I think) in it. The case is solid and the build is silent.

I’d go for a Seasonic, FSP Group, or, if you can find one, be quiet! 600 W 80+ Gold PSU. For drives, either Toshiba, or Seagate IronWolf Pro or Exos (the Exos are usually a bit louder than the IronWolf Pros). I don’t know what capacity you’re looking for.

Quick question: do you plan to use this NAS as a storage backend for Proxmox, or do you just want a backup server? Because if all you’re looking for is a backup server, then a 2-bay NAS is all you need (unless you’re aiming for capacity; if you want all the capacity you can get, the best raw-to-usable ratio is RAID-Z1 with 5 drives).
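For anyone who wants to see that ratio math, here’s a quick Python sketch; the 22 TB drive size is just an example, and ZFS metadata/slop overhead is ignored:

```python
# Rough usable-capacity comparison; ignores ZFS metadata/slop overhead.
# The drive size is just an example number, not a recommendation.
drive_tb = 22  # e.g. a 22 TB IronWolf Pro

layouts = {  # name: (drive count, drives' worth of usable space)
    "2-drive mirror":         (2, 1),  # one full copy of the data
    "4-drive striped mirror": (4, 2),  # half the raw space
    "4-drive raid-z1":        (4, 3),  # one drive's worth of parity
    "5-drive raid-z1":        (5, 4),  # one drive's worth of parity
}

for name, (drives, usable_drives) in layouts.items():
    raw = drives * drive_tb
    usable = usable_drives * drive_tb
    print(f"{name:24s} raw {raw:3d} TB, usable ~{usable:3d} TB ({usable/raw:.0%})")
```

The 5-drive RAID-Z1 lands at 80% usable, the best of the layouts above, which is why it’s the sweet spot in a 5-bay.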

If you want a backup server, just get an ODROID HC4 or a ROCKPro64 with the official case and call it a day. With the highest-capacity 22 TB SATA IronWolf Pros, you won’t be running out too soon. If you want secondary storage combined with a backup server, I’d opt for something like the ROCKPro64 with both 3.5" HDDs (for the backup pool) and 2.5" SSDs (for normal tasks). It ain’t the fastest, but it gets the job done (and it’s really cheap).

I have both and can vouch for them in different scenarios. The HC4 is cheaper and can’t do crazy compression (I went with the insane zstd-19 and it’s struggling, lmao; I’ll be doing compression client-side in restic instead). The ROCKPro64 server is a wonderful little NAS, but still a bit slow (though it feels way faster than the HC4). It does run a few VMs at once from the SSD pool and I don’t feel they’re too slow (including a Windows VM with GPU passthrough, with the whole game-library drive on the same SSD pool), but I just got a new dedicated all-SSD NAS + hypervisor build that I’ve yet to fully deploy (been procrastinating). Note: the ROCKPro64 is just a NAS backend, not running the VMs themselves (it’s just an iSCSI and NFS server).
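If you want to feel the zstd level trade-off for yourself, here’s a rough Python illustration; the third-party zstandard package and the sample file path are my choices, so adjust to taste:

```python
# Why zstd-19 hurts on a weak CPU: compression time grows steeply with
# level while the ratio gain flattens out.
# Needs the third-party "zstandard" package (pip install zstandard).
import time
import zstandard

# Any reasonably compressible sample file will do; this path is just an example.
data = open("/var/log/syslog", "rb").read()

for level in (3, 9, 19):
    cctx = zstandard.ZstdCompressor(level=level)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"zstd-{level:2d}: {len(compressed) / len(data):.1%} of original size "
          f"in {elapsed * 1000:.0f} ms")
```

Run that on an x86 desktop and then on an HC4-class ARM board and the gap at level 19 gets very obvious.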

My opinion would be to add a few SSDs (maybe 2 in a mirror) to the Proxmox server if you need more space there and use the second server as a dedicated backup server. This will significantly reduce the cost of the build, and you can get away with just gigabit on it (because after an initial zfs send, everything else is incremental and transfers pretty fast anyway; or if you use something like restic, you get deduplication, so still decently fast backups). Besides, 10G Ethernet for writing to a mirror of spinning rust would be a waste. 2.5 Gbps might make some sense to squeeze out the final few MB/s that the drives can write, but it’s not worth the extra money IMO (for a mirror, that is; if you get a 4- or 5-bay NAS with striped mirrors or RAID-Z1, you’ll probably want 2.5 Gbps).
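Rough numbers behind that, if anyone’s curious; the ~250 MB/s sequential write figure for a big modern HDD is an assumed ballpark, not a measurement:

```python
# Which link actually bottlenecks a mirror of spinning rust?
# A 2-way mirror writes at roughly single-drive speed; ~250 MB/s is an
# assumed ballpark for a modern high-capacity HDD, not a measured number.
mirror_write_mb_s = 250

links_mbit = {"1 GbE": 1_000, "2.5 GbE": 2_500, "10 GbE": 10_000}

for name, mbit in links_mbit.items():
    link_mb_s = mbit / 8 * 0.94  # roughly 94% of line rate after TCP/IP overhead
    bottleneck = "the link" if link_mb_s < mirror_write_mb_s else "the drives"
    print(f"{name:8s} ~{link_mb_s:5.0f} MB/s -> bottleneck: {bottleneck}")
```

Gigabit (~118 MB/s) is the bottleneck for sequential writes, 2.5 GbE (~294 MB/s) just barely clears a mirror, and 10 GbE sits idle, which is the point above.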

For a backup server, you kinda want to go cheap on the hardware and spend as much as you can on the drives. My HC4 has the highest capacity: a 22 TB mirror backup pool, versus a 10 TB mirror slow-data pool and a 1 TB flash mirror pool on the ROCKPro64, and an 8 TB striped-mirror flash pool on my ThinkPenguin 4-bay NAS. You can see that all the pools combined come right up to the size of the backup pool, yet the backup server, my HC4, is clearly the weakest hardware. This is intentional.

Of course, don’t use more than 80% of any pool’s capacity if you don’t want performance degradation, but on the backup server you wouldn’t really care, so the backup pool ends up large enough to hold a bit more than just the latest copy.
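Quick sanity check on those pool sizes and the 80% guideline, using the numbers from above:

```python
# Pool sizes as listed in the post above, in TB.
source_pools_tb = {
    "rockpro64 slow data (10 TB mirror)": 10,
    "rockpro64 flash (1 TB mirror)": 1,
    "thinkpenguin flash (8 TB striped mirror)": 8,
}
backup_pool_tb = 22  # hc4, 22 TB mirror

combined = sum(source_pools_tb.values())
print(f"combined source pools: {combined} TB vs backup pool: {backup_pool_tb} TB")
print(f"80% comfort zone on the backup pool: {0.8 * backup_pool_tb:.1f} TB")
# With incremental sends (or restic's dedup/compression), the backups stay
# well under the raw source size, so the headroom buys extra restore points.
```

So 19 TB of source pools against a 22 TB backup pool; snapshots and dedup are what turn that small margin into room for history rather than just one full copy.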


The ServeTheHome forums have a guy selling a bunch of Xeon D-1541 and D-1521 boards right now. The total cost will be a bit more, but the features are better and you will use waaay less power.

If the board you get does not have 10 GbE, look for an X540-based NIC. Usually about the same cost, but better on watts.

If you are transcoding, an Nvidia T400 or a 1660 is the lowest-cost GPU at the moment, I think. Been a bit since I looked, though.


I hadn’t come across these boards before. The 1521 looks good enough for what I need. I’ll have to find a case for it.

Just missed a deal on the ASUSTOR AS5304T for ~$400, a 4-bay with dual 2.5 GbE NICs and Btrfs support. I am not a fan of QNAP after getting burned by forced obsolescence when QNAP refused to update older NAS models with SMB 3 support, and then having a SATA port partially fail just outside warranty on the 4-bay I replaced it with.

As opposed to the QNAP, I’d counter with the Synology DS923+; while it’s inexcusable that Synology is so stingy with multi-gig NICs, you can put a 10 GbE NIC in the DS923+ and it will still be cheaper than the QNAP NAS in your link.

If you do go the DIY route, just be prepared for the noise, heat, and power consumption. Those really old server processors don’t have much of an idle state, and neither do the boards. A prebuilt 4-6 bay generally runs in the 40-60 W range.

That the one you wanted?

I can see QNAP being :poop:y. But I can’t agree on going Synology either. Asustor, maybe. Both Synology and Asustor have models that can run Linux, and those models should be prioritized.

While you “could” run Linux on the DS923+, you have neither video out nor a serial console, so you can’t change boot options or even access the BIOS.

I find any device that cannot run another OS to be completely unsuited as a NAS, exactly because of stories like your QNAP one.

The same criticism of QNAP can be applied to basically all other vendors, like Synology and Zyxel. Just get SBCs that you can run any OS you want on, like Pine64 or ODROID boards (which is what I use and can vouch for; at your own risk, you can go with others, like Orange Pi, Radxa, LattePanda and other offerings).

Or, if you want a “proper build” that you don’t have to put together, where you just slap the drives in, get HPE ProLiant MicroServers. I’ve used multiple of these in production as NFS NAS boxes (it wasn’t my idea, but they worked). The only thing I hated about them was that they could only boot from an on-board SD card inside the case (which you had to put /boot on when you installed your OS; you’re probably supposed to run the whole OS from there, but we just removed the DVD drive and slapped in a small SSD with the rootfs on it).

I still prefer SBCs for home use. They’re cheaper, quieter, use less power and they aren’t inefficiently utilized (i.e. mostly idling).

I haven’t seen many SBCs that have dual NICs and can handle 4 or more SATA connections. Have any that you’d recommend?

Nice, that’s the one. So at least you have options.

Friend of mine got it off Amazon for that price, but apparently QNAP, Synology, and even ASUSTOR offer software-RAID Btrfs support now. And it’s hard to beat the value of the dual 2.5 GbE NICs. I have an older DS1618+, and I liked it enough that I eventually snagged a cheap 10 GbE card off eBay for it. A single backup of my SSD will saturate a 5 GbE link, but practically anything saturates a gigabit NIC these days, and there’s no excuse for Synology not offering faster ones.

That is true. But I’ve had my current NAS for five years, and even if support ended tomorrow I don’t see anything breaking on the horizon. With QNAP, the NAS only supported SMB 1, which was truly ridiculous given that SMB 3 came out while those models were still being sold, and SMB 1’s security issues were so bad that even Microsoft deprecated it. Technically one can still add SMB 1 to Windows 10, but I wouldn’t recommend it for anything outside a very private home network.

Okay, those MicroServers look pretty cool. But for the cost I still prefer Synology’s DSM; given my ineptitude, I’d rather not risk my data learning the software by trial and error. Pretty nice option for those who have the software chops, though!

The only thing that comes to mind is an ODROID H3 (dual 2.5 Gbps NICs) with an M.2 SATA card, like this (as the ODROID can only run 2 drives on its own).

But then, if you use spinning rust, you’d need an external power source for the HDDs. Maybe you can get a 200 W SFX PSU and an “always-on” PSU connector and have that power the drives and the board (you need a SATA or Molex to 2.1 x 5.5 mm barrel jack; the board takes 12 to 19 V in). Should be doable even for beginners, with readily available parts.
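Back-of-the-envelope power math for that setup; all the per-drive wattages here are assumed typical values for 3.5" HDDs, not measurements:

```python
# Rough power budget for "one SFX PSU feeds the drives and the board".
# Assumed ballpark figures, not measured: ~24 W spin-up surge and ~8 W
# active per 3.5" HDD, ~15 W worst case for an Odroid H3-class board.
n_drives = 4
spinup_w, active_w = 24, 8
board_w = 15

peak_w = n_drives * spinup_w + board_w   # all drives spinning up at once
steady_w = n_drives * active_w + board_w

print(f"peak (simultaneous spin-up): ~{peak_w} W")
print(f"steady state:                ~{steady_w} W")
```

Even the ~111 W worst case sits comfortably inside a 200 W SFX PSU, and staggered spin-up (where supported) lowers the peak further.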

Don’t buy them new. :slight_smile: Buying used Synology units might be cheaper, but they’d likely come with an outdated / EOL OS. The ProLiants can just run Linux.

The Gen8s were what we used (we also had a Gen10). Ours were new, of course, but if I were the one buying them, I wouldn’t buy them new. And if I were the one buying NASes for a business, I wouldn’t buy the MicroServers, lmao. They’re great for home use and very small businesses, though; we were medium-sized with 5 racks, yet still using stacked MicroServers. :triumph:

Not difficult if you just go with an already super-stable stack, like TrueNAS Core or OpenMediaVault.

  1. I need to see if I can get my hands on that SATA adapter.
  2. An Odroid H3 with 2 drives in its enclosure and another 4 in a USB HDD enclosure is my “less janky” NAS for now. Running it at 2.5+1 Gbit, it seems to handle that just fine throughput-wise.

How are you powering everything up?

The enclosure takes 12 V via a barrel jack (from its power brick), and the Odroid has its own power brick.


Used Datto gear is dirt cheap, Xeon D-based, and built on regular hardware. You can even flash the BIOS back to OEM if you want.

It runs any OS and comes in several form factors; if you have any tech ability, it is the way.

I ended up going with the Supermicro 1521 board, 2x 14 TB Exos drives, an old Optane M.2 for cache, an old SSD for TrueNAS Scale, 32 GB of ECC memory, an old 700 W PSU, and a Fractal Node 304.

I got a lot of bang for the buck by using old stuff I had lying around. Thanks for the help! Very happy with it.
