Need some suggestions for a NAS build

I plan on migrating from my current Synology NAS to a “real” storage server in a few months for more speed, capacity and efficiency (I currently run a mirror, which is kinda wasteful for my purposes). I currently have a lead on an E5-2683 v3 (14C, Haswell, 120W, quad-channel) with an X99 board that I could potentially get for an okayish price. But today’s video got me thinking again whether current-gen hardware might be an option and whether I even need that much.

I’d probably start with 3x16TB drives (previously planned: 5x8TB) and TrueNAS Scale. For the beginning I’d go with Gigabit, because wiring my home for 10G is too expensive with my current resources, but I’d want the option, both in interface (could be supplemented with a PCIe card at a later date) and in CPU performance. For memory, 64GB of DDR4 can be had for a decent price; apparently I should have 1GB of RAM per terabyte or something with ZFS.

Edit: Obsolete

For the Xeon option the hardware would be as follows, rough prices based on German pricing:
CPU + Mainboard + Cooler - TBD, I’d imagine somewhere around 300€
Case - 50€ (something rather cheap with enough airflow)
Memory - 200€ (DDR4-2133 CL15)
PSU - 70€ (500W)
3 Harddrives - 700-800€ (8TB WD Gold matching the 2 I already have)
1TB cache SSD - ~100€
= ~1520€

I’d like to kinda stay around the 1500€ mark, but power saving can be factored in for a year or two (~30ct/kWh).
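To make the “power saving factored in over a year or two” concrete, here is a quick check using the 30ct/kWh figure above. The 50W figure is my assumption for the idle-draw difference between the old Xeon platform and a modern build, just to illustrate the math:

```shell
# Cost of an assumed extra 50W of continuous draw over 2 years at 0.30 EUR/kWh
awk 'BEGIN {
  watts = 50; years = 2; eur_per_kwh = 0.30
  kwh = watts * 24 * 365 * years / 1000   # = 876 kWh
  printf "%.2f EUR\n", kwh * eur_per_kwh  # prints 262.80 EUR
}'
```

So a ~50W difference already eats a meaningful chunk of the budget over two years, which is worth weighing against the cheaper used platform.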

I’m not sure about the amount of RAM and SSD cache, the CPU performance, and how to best get a quick display out for debugging (is there something easier than an old GPU that’s pretty cheap and maybe hot-pluggable?). The usage scenarios would be Plex, a video render target (lowish latency but sequential reads) and potentially a hard drive replacement for my desktop that’s fast enough to launch games off (doesn’t have to be as fast as local, but usable; this is also potentially where 10G comes in).

I also haven’t figured out yet how I’d best get a web file UI like DSM has, and potentially a WebDAV share for usage outside of my home.

Edit: New Plan:

  • 12500 - 220€: plenty of performance + good hardware en-/decoding + option to upgrade for a decent price + a bit more GPU (770)
  • MSI PRO Z690-A DDR4 - 195€: 6x SATA + 1x PCIe 5.0 x16 + 1x PCIe 3.0 x4 + 1x PCIe 3.0 x1 + 4x M.2 + Intel 2.5G LAN
  • 32GB generic G.Skill RAM - ~100€
  • Samsung 980 1TB - 95€ system, cache and metadata drive
  • Cheap case with mesh and 2 fans - ~90€
  • SeaSonic Focus 550W - 90€
  • 3x MG08 16TB - 760€

Total: ~1560€

Link current list: PCPartpicker

FWIW, I’m just beginning to plan my first ZFS-based NAS, and have come to the conclusion that, even with just SATA SSDs, I just can’t waste my time with < 10Gbit. This is the cheapest ($300) switch with more than 2x 10G ports that I could find on Amazon US:
TP-Link TL-SX105 | 5 Port 10G/Multi-Gig Unmanaged Ethernet Switch
Good luck, and have fun.

Well, I’d need 2 switches (MikroTik has some 200€ ones, without transceivers), 3x 10G expansion cards (~100€ each) and about 20m of fiber, so yeah, it’s gonna be about 800-1000€. It’s definitely planned, but I’ll need some more cash to spare before I go for it.

To be clear, for my current use case (mainly just a dump for backups and large files) Gigabit is plenty. I mainly need more storage than my mirrored 8TB, and AES acceleration that my ARM-based Synology doesn’t have.

Is ECC memory a hard requirement? Otherwise you’ll struggle to find recent Xeon hardware at that price point. Given your use case you’re most likely better off with a recent Intel platform, even with fewer cores. You also get a much better price/performance ratio if you look at Toshiba MG08/MG09 drives, especially in .de, and I’d drop the cache SSD. Looking at your wishlist, you’re probably better off running a generic distro/OS than trying to shoehorn everything into TrueNAS.

Well, I can probably get that exact platform from someone I know; the price is not set yet, but I’ll probably get a rough quote tomorrow or so (it’s also not really recent, as far as I’m aware it’s the remains of an old home server, basically). ECC is not something I need, otherwise the memory costs would probably explode.
Intel would of course be nice for the iGPU (for debugging), but something like a 12700 already costs 375€ without the motherboard. For the motherboard I’d need quite a lot of SATA if possible, so that’s probably something like a decent B660 board, which will also cost quite a bit.
Regarding the drives I’ll probably stick to the WD gold because I already have 2 and as far as I know, mixing and matching isn’t a really good idea.
Is there any specific reason you wouldn’t use a cache SSD? From my thinking it would probably benefit more frequently accessed small files, and as far as I know ZFS can also cache metadata to make it a lot faster to search and in general get to the data in the first place (although I could maybe scale it down to 512 or 256GB).
Regarding the OS, why do you come to that conclusion? For me TrueNAS Scale sounds like a much better idea than trying to build it all myself. TrueNAS has a UI, for example, that can manage basically the whole system, which would probably be a lot easier than trying to set up SMB, NFS and all that stuff myself (and optimizing it). A generic distro would have the benefit that I could use btrfs and its variable data safety, though.

A NAS has so much potential for customizability depending on needs. Especially when it comes to ZFS.

ECC is not mandatory, and the 1 GB / TB rule only makes sense if you use things like deduplication, or you want a lot of RAM for caching. 16GB of RAM is plenty, especially if you don’t have files larger than 16GB. But 64GB should still get you a long way.

I would say a 6- or 8-core Zen 1 or Zen 2 is better than an old Xeon. And that’s before you downclock it for power efficiency. And you don’t need an SSD cache, unless you really want a SLOG (dedicated ZIL device), a big L2ARC, or a dedicated metadata special device (for faster directory listing and file searching).

Besides, how many people are going to use that NAS? If you have fewer than 10 people using it, don’t bother with big memory or SSD caching. A 5-drive RAID-Z1 would be fast enough. Heck, I ran a RAID1 on 2 disks with 8 GB of RAM and a dual-core CPU. For 80 people! Mostly for small files on a Samba share. And the server wasn’t breaking a sweat. And to add insult to injury, it was a Core 2 Duo with DDR2 RAM in a 1U case. I later moved it to an HP ProLiant MicroServer Gen8 with a RAID10, mostly for higher-capacity storage, but also because I was afraid the hardware was going to fail one day (it was running on borrowed time). And that thing had 12GB of RAM and a dual-core Celeron (probably Haswell, I don’t remember; I had it laying around, decommissioned from serving as a virtualization NAS).

As for the networking, I’d say you can go with 2.5Gbps networking. You don’t need to rewire your house and it’s still more than double the throughput. You should find pretty cheap 2.5G switches. If you don’t need lots of ports, even better. I personally run a Zyxel XGS1210-12 switch (2x SFP+ 10G, 2x 2.5G Ethernet, 8x 1G Ethernet). At some point I’m going to get another one and connect them via 1 SFP+ port. I’ll do some redundancy shenanigans once I get two of them, via the router ports (active-standby or even balance-alb).

Back to the NAS. IMO, the first thing people should think about after capacity is power consumption, then, if it’s too slow, speed. No point in running something super fast if it’s going to be idling 99% of the time and, when in use, only using 5% of the CPU.

Regarding the CPU, I was mostly worried about the parity calculations. As I want to design it to be 10G-capable, the CPU would have to handle 1GB/s worth of data in parity calculations; I don’t know, though, how parallelizable that process is. I’m thinking it is, and that’s why I was going to go for more cores rather than fewer, faster ones. But it’s hard to judge what I need; that’s why I’m here.

I won’t have a lot of people using it, mainly me for now, but I’ll use it pretty heavily and intend to replace my hard drive (because it’s loud) with a network share, so I’d want it to be pretty low latency, and I’d want to use it basically as a SAN for some applications on my server too. So a medium amount of traffic, maybe equivalent to 1-10 people. If the server were a newer Intel CPU I’d probably also want to throw Plex on there for HW transcoding of AV1 and HEVC/VP9 10-bit (it would be completely in hardware though, so not a lot of CPU usage). Regarding caching, I’d probably use the SSD as a special metadata device, but I don’t know how much storage I’d need in that case, and maybe a few GB of write-ahead log to be really fast when I dump a few smaller files on it; depending on the scale, a small M.2 or SATA SSD is pretty cheap.

Well, I want managed switches anyway, and with that the costs of 10G and 2.5G are sadly not that far apart. Putting the fiber into the wall shouldn’t be too bad; the cables seem to be pretty inexpensive, and in Germany we have ducting everywhere.

In my opinion, these are the things you need to consider on a NAS, in order of importance:

  1. Power draw. When a modern 8-core comes in a 65W TDP envelope and powerful sub-100W machines are possible, I just don’t see the need for a power-hungry server platform from yesteryear. Sure, you could… but why? Even a 12100 or 12400 is more than enough NAS power these days.

  2. Chassis. With 4 TB SSDs becoming affordable (though mechanical drives offer 20 TB for the same price today), a 2.5" chassis might be of interest. For most home uses, an 8-disk chassis is plenty of storage. There are desktop cases out there that could fit up to 12 disks in theory, but if you want more than that you need to go rackmount or build a DIY.

  3. Throughput. Do you need 10 GbE? Be aware that HDDs have a maximum transfer speed of ~1.1 Gb/s, with the possibility of doubling that to 2.2 Gb/s with RAID. This means that, no matter how beefy your server, you will never saturate 10 GbE unless you are striping 4+ disks at once. So any application that requires more than 2 Gb/s needs a SATA SSD, and more than 5 Gb/s almost certainly requires NVMe.

  4. At a distant fourth place is performance. Like I said, even a 12100 is more than enough for a NAS, though you might want to throw a HW encoder/decoder in there somewhere too. The current sweet spot, IMO, is the AMD Ryzen 5 5600G, but it looks like the next-gen Ryzens will all have HW encode/decode built in.
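The unit conversion behind point 3 is easy to sanity-check; e.g. a hypothetical HDD doing ~140 MB/s sequential works out to roughly the ~1.1 Gb/s figure quoted:

```shell
# MB/s to Gb/s: multiply by 8 bits/byte, divide by 1000
awk 'BEGIN { printf "%.2f Gb/s\n", 140 * 8 / 1000 }'   # prints 1.12 Gb/s
```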

So, with all that in mind, here is a 5600G build that might make sense for €750. As always, this is a suggestion, feel free to pick together something better. This does not have ECC, but ECC is pretty overrated for SOHO anyway (short version: ECC saves you about 1 hour of work per employee per year - don’t buy an expensive server platform just to run ECC, invest in proper backup instead). Prices here are from

Part          Model                                  Price
CPU           Ryzen 5 5600G                          €165
Motherboard   GIGABYTE B550M DS3H                    €79.89
Memory        G.Skill Aegis 2x16 GB, 3200 MHz CL16   €102.79 x 2
System drive  Samsung 970 EVO 250 GB NVMe            €46.90
Case          Silverstone CS380                      €165.59
Power supply  Be quiet! Pure Power 11 ATX 400W       €50.54
Total                                                €713.50

Seems reasonable. I’d go Intel (12400) for the better de- and encode (I have AV1 files that might need to be transcoded in realtime, and I probably don’t want to wait), but the rest seems reasonable. I’ll probably go HDD first (basically just expanding what I already have) and might add an SSD array later if needed, but capacity it is for now. For the network you might be right. I don’t know how well ZFS scales with multiple disks (the Gold disks can do up to ~255MB/s), but that’s something for the future; if the 12100 can do it, the 12400 should do fine, and I could always upgrade up to a supposed 13900 or so of course. For a case I’ll look for something cheap with a mesh front and 8+ 3.5" bays; I won’t see it much anyway.
As for ECC, I also don’t think it’s worth it for a budget-ish server; I would have used the Xeon with non-ECC anyway.

Basically, this is something I could come up with: it’s ~750€ without drives and only 32GB of RAM, but according to the comment above that should be plenty.
Seems like I need Z690 to get a 2nd slot that’s not x1, so a bit more expensive, but it has 8 SATA natively and 2.5 Gbit LAN, so that could be useful.

This is a rough and dated estimate and is based on the amount of metadata you get with your datasets and zvols. Today we have options like NVMe for L2ARC and special vdevs to store metadata on cheap devices that are not HDDs (slow), but memory (ARC) is always preferred, because DRAM is amazing. Memory is always good to have on a server.

If you are using lots of zvols or datasets with 16k or lower record/block size, you want some solution for your metadata that is more performant than fetching it from RaidZ HDDs.

HDDs: if you’re already sold on the WD Gold, all fine with that. But you could also get e.g. 3x 16TB enterprise drives (Seagate Exos / Toshiba MG08, ~250€ at Alternate/Mindfactory), get 32TB usable storage (same as 5x8TB RaidZ), and keep the two old WD Golds as backup drives. Just an idea.
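The capacity claim checks out; RAIDZ1 usable space is (n-1) × drive size:

```shell
echo "3x16TB RAIDZ1: $(( (3 - 1) * 16 )) TB usable"   # 32 TB
echo "5x8TB  RAIDZ1: $(( (5 - 1) * 8 )) TB usable"    # 32 TB
```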

A 12400 should be plenty. Be sure to use compression, and I recommend setting it to zstd (for anything that isn’t already-compressed media files). It might be too little horsepower for 10Gbit realtime compression, but 6 cores should work.
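For reference, setting this is a one-liner per dataset (pool/dataset name here is hypothetical; compression only applies to newly written blocks):

```shell
# Enable zstd on a hypothetical dataset and check the resulting ratio
zfs set compression=zstd tank/data
zfs get compression,compressratio tank/data
```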

1TB cache SSD: will be fine. L2ARC aka cache is very easy on SSD endurance because 90+% of accesses are reads (on a read cache, who would have thought?). When talking SATA SSDs, just avoid the budget lines like the Samsung QVO or Micron/Crucial BX series.
Make your L2ARC persistent (set vfs.zfs.l2arc.rebuild_enabled to 1) to keep your cache after a reboot.
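Note the knob is spelled differently per platform: the sysctl quoted above is the FreeBSD/TrueNAS CORE form; on Linux (TrueNAS SCALE) the same setting is an OpenZFS module parameter:

```shell
# FreeBSD / TrueNAS CORE:
sysctl vfs.zfs.l2arc.rebuild_enabled=1
# Linux / TrueNAS SCALE (OpenZFS module parameter, same effect):
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
```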

Remember that special vdevs have to be redundant, so you need at least a mirror for your metadata+small block vdev.
The old rule applies: you lose a vdev, you lose the pool.
L2ARC only needs a single drive because it just holds a copy of data already present on the HDDs.


Can I enable it on a per-directory basis or so? The majority of the raw capacity is used by media files that aren’t compressible.

Sounds good.

So I would need 2 SSDs if I wanted a special metadata device?

Compression is a property of a dataset (mountpoint, directory, filesystem are other terms for it) and gets inherited by all children (sub-directories if you so wish) by default but can be changed there too.
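Inheritance and per-dataset overrides look like this (pool/dataset names are hypothetical); the SOURCE column shows where each value comes from:

```shell
zfs set compression=zstd tank                    # children inherit this
zfs set compression=off  tank/media              # override for already-compressed media
zfs get -r -o name,value,source compression tank # "local" vs "inherited from tank"
```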

Compression is great because the data in the cache is stored in a compressed state, meaning you get more space out of your memory and cache device. I had 230GB of data stored in my 96GB ARC at some point, just to give you an idea. And when talking VMs, containers, or my Steam library, compression is just awesome.

I think TrueNAS ships with LZ4 by default, which is super fast and has an early abort if the data isn’t compressible. It’s like GBs per core per second; it’s basically free real estate. ZSTD is the new kid on the block with very good performance and better compression than LZ4, but it performs worse on incompressible data.


Every vdev has to provide its own redundancy. Your RAIDZ can handle one failed drive. But if you only have one drive as a special metadata+small-block device and that drive fails, the pool fails, because ZFS no longer has the metadata to tell it where everything on the HDDs is (generalizing).

Each vdev has to provide its own redundancy. If you want one drive of fault tolerance, you need a mirror (2 drives) or RAIDZ (3+ drives) for your special vdev.
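A sketch of what that looks like on the command line (pool, dataset, and device names are all hypothetical):

```shell
# Special vdev must be redundant - mirror it (lose it and the pool is gone):
zpool add tank special mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b
# L2ARC only holds copies of data already on the HDDs, so one device is fine:
zpool add tank cache /dev/disk/by-id/nvme-c
# Optionally also steer small blocks onto the special vdev, per dataset:
zfs set special_small_blocks=16K tank/home
```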


Those drives actually look really good; their MTBF is higher and they have more cache (I can only judge from the datasheet, though). That will also make it cheaper to expand once vdev expansion is there, and a 2x8TB spare will make the data transfer way easier.

I’m using Toshiba MG08 myself and got no problems in 9 months of basically 24/7 operation (3 different batches of drives). They are a bit noisy though, I wouldn’t want them doing random writes in my living room. I heard the Exos aren’t like that, but noise is always a bit subjective.

I keep spare HDDs as a backup pool (RAID0) in TrueNAS that gets plugged in once a week for a replication/backup job. Always good to have a backup. And ZFS backup is super fast because it only needs to transfer what has changed. The TrueNAS GUI is really good at this: set it up with like 4 clicks, or automate it altogether.
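Under the hood, the incremental replication the GUI automates boils down to snapshot + incremental send (all names hypothetical):

```shell
zfs snapshot tank/data@backup-new
# -i transfers only the blocks changed since the previous snapshot:
zfs send -i tank/data@backup-prev tank/data@backup-new | zfs recv backup/data
```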


Well, I can hear the Golds through the whole basement; they are really, really noisy.

The question would be how much space I’d need for a special metadata device. Is 100GB enough? It probably depends on the number of files too, but it’s kinda hard to grasp for me.

NVM, found wendell’s post: ZFS Metadata Special Device: Z

We’re talking about sub-100GB worth of metadata for a 32TB pool. And considering you mostly store media files (datasets with >128k record size; I use 1M record size for my media), a special vdev just for metadata isn’t worth it.

People here use these special devices mostly because you can optionally also store small files there. And that is a very nice thing indeed if you know how badly HDDs perform when you tell them to fetch 1,000 small files. Small files on fast SSD and the big chunks on HDDs is a nice way to avoid the worst aspects of HDDs.

So if you feel like your HDDs are doing a lot of random reads and writes on small files, this option becomes very attractive.

I’d stick to the 32GB of memory and the HDDs for now and add an L2ARC or special vdev for performance tuning later. Or get a 500GB-1TB NVMe SSD and use it as cache; you can always remove it and repurpose it for other things.


Yeah, kinda hard to decide. By size it’s mostly media files, but I do have a substantial number of small files by count too. That will probably grow as I migrate some stuff over from my server, so that the server stores fewer files (which it currently stores without any redundancy). But I think your suggestion is valid and I should start with a 500GB (cache and boot) drive or so and expand from there.

To add some data: media dataset: 6TB, 2k files; home dataset: ~150k files and expanding rapidly, 1TB (although the size can probably be reduced by removing the photos etc. from it).

Let’s say you store 30TB of media in your pool’s movie datasets with 1M recordsize. That’s <5GB of metadata, which is trivial, more of a footnote for 32GB of memory. But if you have a 30TB database running on a 4k dataset, that’s 256 times more metadata, so you’d better get an EPYC board to accommodate your new TBs of metadata in memory. And we’re not talking about caching any “real” data yet :slight_smile:
These are extreme examples, but they can give you an impression on the subject.
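The 256x factor is just the ratio of record counts. A quick check with the numbers above (actual metadata bytes per record vary; the point is the scaling):

```shell
awk 'BEGIN {
  bytes = 30 * 2^40                                      # 30 TiB of data
  printf "%.0f records at 1M recordsize\n", bytes / 2^20
  printf "%.0f records at 4k recordsize\n", bytes / 2^12
  printf "ratio: %.0fx\n", 2^20 / 2^12                   # 256x more records
}'
```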

We all do. The benefit of small files is that they’re small. You can store a lot of (compressed) small files in 500GB, be it L2ARC or special vdev.