My first NAS: is ECC worth 200$?


  • This is my first NAS and I’m planning to use TrueNAS scale.
  • I already have an i5-4690, some ddr3 RAM and can buy a used mATX motherboard for 30$
  • If I want ECC, I’ll need to buy motherboard+CPU+RAM, which is more than 250$ (shipping included)
  • Is ECC worth it?

The full story:
I was bored and tried to calculate how much it’ll cost me to make my own NAS, vs buying one. Turned out that if I don’t need hotswap, I can actually make one pretty cheap! (Quote: NAS killer project (no link, I don’t have the permission))

I don’t have any particular use case in mind, so I’m planning for a 2-drive ZFS mirror setup, and I’ll scale up as I need.
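For reference, the underlying pool creation is a one-liner (TrueNAS SCALE does this through the UI; the device paths and pool name here are hypothetical):

```shell
# Hypothetical device paths -- check yours with `lsblk` first.
# Create a 2-drive mirror pool named "tank" (roughly what the
# TrueNAS UI does for you under the hood):
zpool create tank mirror /dev/sda /dev/sdb

# Confirm the layout and health:
zpool status tank
```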

I’m thinking of using Fractal Design’s Node 804 for the case: it looks decent, has a lot of drive space, and should be on the nicer side to work with.

I won’t shuck drives, since living in Japan means shucking doesn’t really help in terms of cost. (Or maybe it does, but I’m too lazy to search)

Now back to the title: the CPU, motherboard and RAM.
I think there are 3 combinations I could go with:

  1. Use my CPU(I5 4690) and RAM lying around, and buy a used motherboard locally. That’ll be around 30$.
  2. Buy a used 4th gen Intel xeon CPU, motherboard, and ddr3 ECC RAM. That’ll be 250-300$, with shipping. I’ll have ECC, but no expandability.
  3. Buy an AM4 consumer motherboard with ECC support(ASRock had some if I’m correct), CPU, and ddr4 ECC RAM. That’ll be 300$+, but should have the most expandability. Also, bonus points for being able to use an m.2 SSD.
    Oh, and I’ll need a GPU for option 3, although something like a GT 730 should be more than enough.

So basically, the argument can be boiled down to the title: is ECC worth 200$+?
As for the budget, everything I listed is within reach.


I think no, not for a small home NAS.

ZFS doesn’t support being converted in place from a mirror to something else: you can’t add a third disk and go to a 3-way raidz, then a 4th and make it a 4-way raidz. That’s not how it works.

Top-level vdevs can only be removed if the primary pool storage does not contain a top-level raidz vdev, all top-level vdevs have the same sector size, and the keys for all encrypted datasets are loaded.
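What growing a mirror pool does look like in practice is striping on another mirror vdev (pool and device names hypothetical):

```shell
# Add a second 2-disk mirror vdev to the existing pool "tank".
# Capacity and IOPS increase, but each vdev's layout is fixed:
# you can't later merge them into a single raidz.
zpool add tank mirror /dev/sdc /dev/sdd

zpool status tank   # pool now stripes across mirror-0 and mirror-1
```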

Btrfs does, but there are some corner cases that make dealing with failures tricky, for two reasons.

  1. Some implementation choices were truly sucky until 2-3 years ago, and the internet is full of scary old advice and a lot of “you can’t”

  2. There are still some sharp edges and corner cases, in the sense that it’s somewhat easier to do the wrong thing and lose data with Btrfs than it is with ZFS or other storage systems (… but there’s a bit more documentation).

mdraid / LVM raid are more traditional Linux-y ways of handling multiple devices. If you use them, make sure you enable integrity options - they basically make your data checksummed, your writes journaled and your block devices copy-on-write - which is very safe but slower than without it.
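As a sketch of what that looks like with LVM (the volume group and LV names are made up; this needs a reasonably recent LVM with dm-integrity support):

```shell
# Two-disk raid1 LV with per-leg dm-integrity: reads are verified
# against checksums, so a corrupted leg can be detected and repaired
# from the healthy one.
pvcreate /dev/sda /dev/sdb
vgcreate nasvg /dev/sda /dev/sdb
lvcreate --type raid1 -m1 --raidintegrity y -L 500G -n data nasvg

# Check the mismatch counters later:
lvs -o+integritymismatches nasvg/data
```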

Snapraid is a file-based raid: very flexible and nice for archival and media storage.

Always have backups for data you care about.


I’m assuming he wants to add another set of mirrored drives when needed?
That’s the usual practice with ZFS for performance-focused pools, or for growing over time.

You could definitely run TrueNAS Scale without ECC.

But there is a certain risk, since ZFS can’t notice errors that are produced in your RAM and will, in the worst case, silently corrupt your data.

While this is unlikely to happen, it is a possibility.
So you should factor that into your cost/benefit analysis.

A used Xeon E3 v5/v6 + Supermicro board + RAM might be within reach for you, depending on local availability, and would be a good candidate if you’re trying to go more modern.
That’s Skylake/Kaby Lake respectively, and you’ll get IPMI management too. Some CPUs of that kind have iGPUs.

Absolutely correct, especially if you won’t get ECC RAM.


Ah yes, I was planning to add vdevs if I needed more. At that point, I’d know my requirements and have a more concrete idea of the final setup.

Yup, still thinking about how, but i might just use backblaze b2 and some USB hard drives. At least as of now, the data I’ll store won’t be critical in any way.
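If you go the B2 route, rclone is a common way to push data there (the remote and bucket names below are placeholders; set up the remote with `rclone config` first):

```shell
# One-way sync of a dataset to a B2 bucket via a preconfigured
# rclone remote called "b2remote" (placeholder name).
rclone sync /mnt/tank/important b2remote:my-nas-backup --transfers 8

# Verify the remote actually matches the source:
rclone check /mnt/tank/important b2remote:my-nas-backup
```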

Sadly in Japan, used enterprise/SOHO gear market is basically non-existent. Or I don’t have access to them.

So here’s the part that I struggle to understand.
If the data gets silently corrupted due to the lack of ECC RAM (say, between the write request and the checksum calculation), is there any reasonable way to detect the corruption? (I assume not.)
If not, I might simply not have an intact backup of data that was corrupted 2 years ago.

Is there any reasonable way, if any, to counter silent data corruption in the scenario described above? I guess if the answer is no, I’ll have to bite the bullet and go ECC.
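For what it’s worth, the failure mode described here can be sketched in a few lines. Here `crc32` stands in for ZFS’s real checksums; the point is that a checksum computed after the bit flip faithfully protects the wrong bytes, so a later scrub passes:

```python
import zlib

original = b"important family photo bytes"

# A bit flips in RAM *before* the write path checksums the buffer:
corrupted = bytearray(original)
corrupted[0] ^= 0x01
corrupted = bytes(corrupted)

# The filesystem checksums exactly what it was handed:
stored_checksum = zlib.crc32(corrupted)

# A later scrub re-reads the block and verifies it... and it passes,
# because data and checksum were both wrong from the start.
assert zlib.crc32(corrupted) == stored_checksum  # corruption undetected
assert corrupted != original                     # ...but the data is wrong
```

So on-disk checksums only catch corruption that happens after the checksum is computed; nothing downstream can catch this case, which is the usual argument for ECC.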


This provides good context:


There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem… I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.

- Ars walkthrough: Using the ZFS next-gen filesystem on Linux - Ars Technica OpenForum


I consider it “nice to have”, but wouldn’t downgrade other components to get it. This is for a personal home use case.

I can highly recommend getting those AsRock Rack boards however. I’m running with an X570D4U-2L2T and it really is a great foundation for everything you want in a homeserver/NAS. With the IPMI, you also won’t need a GPU. Modern Ryzen also allows for very low power consumption. M.2 for e.g. L2ARC is invaluable in my opinion.

Good thing about memory is that you can upgrade later very easy. Compromise on getting less ECC memory initially and buying additional stick(s) later might be an option too.


Thanks, those two links made it really clear to me how it works.

After searching about the ZFS_DEBUG_MODIFY flag, I found this ixsystems ticket that says they’ve disabled the DEBUG flag. I guess I’ll go ECC then.

That board (X570D4U-2L2T) is gorgeous! But in all fairness, I can’t justify its cost.

The price delta between it and a used B450M/new A520M is 600$+ (!), and the retailers don’t even have stock. Last gen X470D4U looks nice too, but the price delta is still 200$+, without stock.

The GPU isn’t a big deal, I can buy a used GT 430 for 10$ and put it in the x1 slot.

I don’t think I’ll need IPMI anytime soon, this isn’t critical data and I don’t have personal cloud servers. All the data I share between my phone and main rig is done through syncthing, so no need to worry about sudden connection failure.

If I really need any of its features badly, I can always buy it and switch the installed motherboard.

Yup, definitely gonna do that. Hopefully used DDR4 ECC RAM prices will drop by the time I need more.

Bought mine for 420€. And there is a variant without 10Gbit NICs that is ~120€ cheaper. I didn’t check prices lately as I’ve bought all my stuff half a year ago. I went on-board 10Gbit because it doesn’t cost a slot and I wanted 10Gbit anyway.

ECC is remarkably stable in price compared to non-ECC. And UDIMMs sadly are the more exotic kind of memory; RDIMMs see much more volume on the second-hand market. But of course we’ll see a lot of used ECC DDR4 in the future, as well as lower retail prices. I recommend Kingston’s Server Premier line of sticks. They work very well in Asrock Rack boards; most forum users in those threads use them.

No problem in sticking to your budget, but keep an eye on expansion capabilities. Nothing worse than too few SATA/M.2 ports, no ECC capability or lack of slots. I started with 4 drives and a 1TB of NVMe. Now I got 6 drives with two NVMe and a cheap SATA SSD for boot. Always good to have options and not having to replace CPU+board+memory just because the platform doesn’t support it.


Interesting read. Am doing a similar build myself, and seeing this thread just makes me go back and forth on my decision to stick with the non-ECC ram I have on hand. :smiley:

Yup, that’s why I was hesitant with older xeons.
I’m planning to add an LSI card if needed, which will get me 8-16 SATA ports, filling the 8 3.5 inch drive slots of the case - which should be more than enough for expandability.

I may want to direct connect 10GbE to my main rig one day, but that’ll cost me a couple of ConnectX-3s and a cable. Maybe one day, we’ll see.
At that point I might want a better motherboard that can do an x8/x8 split, but I can always sell the current one and buy a new one. Or the PCIe 3.0 x4 slot of a B550 might be enough.

Missed that part initially, and they are indeed the more affordable ones! Thanks!

Damn, AM4 is such a nice platform!


Just a quick update: one local retailer has listed the X470D4U for approx 350$, but it would need to ship from the manufacturer.
I asked them how long that would take, and whether they even had stock, and the answer was no, and no.
I guess I’ll go with an A520 system.


AM4 B450 might be interesting as well (compared to A520): they should be the same or similarly priced.


Storing replaceable movies and TV shows: no
Storing projects, work, family pictures stored nowhere else: yes
And as always, raid is not a backup


What he said ^^^^

Especially if the drives will be active most of the time and checked regularly, etc.
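Regular checking usually means scheduled scrubs, which re-read every block and verify it against its checksum (TrueNAS schedules these in the UI; on a plain ZFS box it’s a cron job, pool name assumed):

```shell
# Monthly scrub at 03:00 on the 1st (crontab entry):
# 0 3 1 * * /sbin/zpool scrub tank

# Or kick one off manually, then check for repaired/unrepairable errors:
zpool scrub tank
zpool status -v tank
```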

Here in Japan, it “was” interesting.
Basically all mATX boards are sold out, and the price for a used one is comparable to a new A520.

It’s sad being in a small market.


I think it’s like that everywhere. I was looking at B450 availability a couple of months ago, and it was basically just very expensive used boards. B550 was cheaper, but I needed a 400-series board to keep using my 2600.

For a NAS, I’d actually say a higher-end motherboard makes more sense than a higher-end CPU, if you aren’t looking at a small form factor system. X470/X570 can have more SATA ports, and more PCIe lanes available for SATA/SAS HBAs, with fewer conflicts when booting from NVMe.

For ECC, getting a DDR3 ECC compatible platform and some ECC DDR3 might be worlds cheaper and not much worse in performance.


While it can be run without ECC, from my (recent) past experience … YOU WANT IT!

My server started to throw a bunch of memory errors, which corrupted my cache and therefore corrupted the data on my drives.

A small investment now can save you a lot of headache later. As always, yes, make backups, but if you get RAM errors from a bad stick like I did, you’ll spend a lot of time troubleshooting and restoring from backup.

I encourage you to read the below from someone who knows a shit ton more than any of us about this topic:

and there is also the other Linus …

If you are super worried about the cost, AMD builds in support for ECC RAM on consumer parts. The catch is that it is not validated, so it’s basically “use at your own peril”. IMO it’s better than paying Intel extra for something it should have anyway, and you save a buck …

Hope this gives another perspective.


Registered DIMMs are very cheap. I’m not sure how much memory you are looking for.

I would personally consider LGA2011, such as an HP Z420. Motherboard roughly $40 and processor $6-40 for 4-core to 12-core. May need a simple adaptor because the standby power is 12V, not 5V. I made a boost circuit. You can also power the standby with a DC wall adapter. The power adapters are commercially available.

I would start with a single 16GB 1600MHz stick for $15, or 2x 16GB for $30: ebay.com/itm/194727518314

Depends on what you are doing. Lenovo S30 motherboard looks interesting too.


supermicro x10sll-f About $60 on eBay
Xeon e3-1200 series V3 CPU about $40 on eBay
8GB unbuffered ECC memory about $25 on eBay

You can get decent specs and ECC support for half the cost you think…

I ran the same as above, except with 32GB of memory, under TrueNAS Core, and it never missed a beat.