Advice on TrueNAS Scale build (W680 vs LGA1700, ECC vs non-ECC)

I have been planning a TrueNAS Scale server build for over a year but can't decide which route to take. I already have the drives, a PSU, a mid-tower case, and some old DDR4-3600 non-ECC memory.

I mostly want to use the server for storage, but I'll also run Jellyfin, Docker, software for recording security cameras, and a web torrent client.

The two platforms I was looking at were consumer LGA1700 (Z690) and W680. W680 means there are a lot of boards with 10GbE + 2.5GbE Ethernet, which would be nice, but I'm not planning on going past 2.5GbE anytime soon. IPMI is also nice, but I've heard that on a lot of boards with IPMI you can't use the iGPU for hardware-accelerated encoding or decoding, which is a deal-breaker. IPMI also isn't all that important to me when PiKVM and BliKVM exist and draw less power at idle (I don't need the full IPMI feature set). So… unless ECC is required, is W680 even worth it? The boards are expensive and hard to find.
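
For the hardware-accel question, this is the sanity check I plan to run once the box is up (a sketch assuming a Linux host with ffmpeg installed, nothing TrueNAS-specific): if the iGPU is usable for transcoding, a render node shows up under /dev/dri and ffmpeg lists vaapi/qsv backends.

```python
#!/usr/bin/env python3
"""Sanity check: is the iGPU exposed for hardware transcoding? (Linux)"""
import shutil
import subprocess
from pathlib import Path

# VAAPI / Quick Sync needs a DRM render node. On some IPMI boards the BMC
# graphics takes over and no iGPU render node shows up here.
dri = Path("/dev/dri")
nodes = sorted(dri.glob("renderD*")) if dri.exists() else []
print("render nodes:", [str(n) for n in nodes] or "none found")

# If ffmpeg is installed, list the hw-accel backends it was built with;
# look for 'vaapi' and/or 'qsv' in the output.
if shutil.which("ffmpeg"):
    result = subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
```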

I have 32GB of DDR4-3600 that I could use, after all. The TrueNAS forums have lots of people stating that ECC is practically required, but then people here say it's more for those who are paranoid about data loss, that the chances of bits getting flipped are very rare, and that it's only necessary for business-critical data. ZFS checksums, after all.

If I don't need ECC RAM, then I can't see any reason not to go for a Z690 DDR4 board with iGPU support for hardware encoding via Quick Sync. I'd probably go for a 12400 (no ECC), 12500 (has ECC) or 13600K (I'd update the BIOS). However, I need 8x SATA, which means my only option on Z690 is ASRock. But ASRock only has "Killer" LAN, not Intel. I've heard that Intel NICs work better on TrueNAS; is that true? If so, I could go for a board with fewer SATA ports and use an LSI SATA card.

I'm also considering waiting for Meteor Lake for AV1 encode support in Quick Sync. I'd like some help making a decision here; should I be looking at something completely different?

You could go DDR5 with its on-die (semi-)ECC; I went that route.

MSI MPG Z690 FORCE WIFI Gaming

~300€

Why this board? Dual PCIe 5.0 x16 slots that run at x8/x8 when both are populated (my RTX 3060 for AI and headless Steam, plus an HBA in the other x8).

In your case you can do what you want; the dual PCIe 5.0 x8 slots could hold anything you want :slight_smile:

  • PCIe 3.0 x16 slot wired with x4 lanes (10Gbit Mellanox NIC)
  • 4x M.2 PCIe 4.0
  • 6x SATA
  • 4x DDR5 slots
  • 2.5Gbit LAN

I went for the 13700K, but any 13th-gen part with an iGPU will do fine for transcoding.

Someone on the forum somewhat recently brought up DDR5 and made me watch a video by Dr. Cutress. The point was that DDR5 RAM doesn't have proper ECC, because to my understanding proper ECC should cover the whole data-integrity chain: the CPU, the RAM, and to some extent the motherboard as well.

  • Your CPU should have formal ECC support, like the Ryzen PRO 4000 line.
  • Your motherboard (not just the chipset) should have ECC support, like ASRock's boards. IIRC the RAM traces need proper ECC wiring to protect the extra check bits as data travels back and forth between the CPU and the RAM.
  • Finally, your RAM itself should be DDR4 with ECC support.

DDR5 has to have ECC built into the DRAM chips (on-die ECC) because the speeds at which it operates tend to introduce errors. That on-die ECC doesn't protect data on its way to and from the CPU the way full ECC does, which is why it's only "semi" ECC.


Which is nonsense. ECC is always better, but certainly not a requirement. If you realize that data integrity is something you want to emphasize in the future, ECC is the logical choice.

Bit flips in memory may be rare, but so are bad blocks, bad checksums and bit rot. And if memory hands over wrong data, your ZFS checksums are now verified, checksummed, trusted corruption. It makes sense to cover the entire pipeline: if you care about your data, you use ECC.
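
To make that concrete, here's a toy sketch in plain Python (hashlib stands in for ZFS's block checksums; nothing here is actual ZFS code): if the data is corrupted in RAM before the checksum is computed, the checksum happily blesses the corruption forever.

```python
#!/usr/bin/env python3
"""Toy model of 'trusted corruption': checksum computed after a RAM bit flip."""
import hashlib

block = bytearray(b"important data on its way to the pool")

# Bit flip in non-ECC RAM, before the block reaches the filesystem:
block[0] ^= 0x01  # 'important' -> 'hmportant'

# The filesystem now checksums the already-corrupted data...
checksum = hashlib.sha256(block).hexdigest()

# ...and every later read verifies cleanly against that checksum.
assert hashlib.sha256(block).hexdigest() == checksum
print("block verifies OK, but contents are:", bytes(block))
```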

An HBA will expand your SATA ports, by up to 24 on the bigger cards (a common 8i card adds eight). Most people generally recommend an HBA over on-board SATA. I use both, for redundancy.

Why would this be the case? This is the first time I've heard it.

I'd go so far as to say AMD is more performant on TrueNAS. Typical ZFS workloads like lz4 and zstd compression favor AMD, as seen in numerous benchmarks.
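
If you want a rough feel for it on your own hardware, here's a quick single-core sketch (assumes the third-party lz4 and zstandard Python packages from pip; these stand in for ZFS's in-kernel compressors, so it's not a real ZFS benchmark):

```python
#!/usr/bin/env python3
"""Rough single-core lz4/zstd compression throughput test.

pip install lz4 zstandard
"""
import time
import lz4.frame
import zstandard

# ~1.2 MiB of moderately compressible data
data = (b"some moderately compressible payload " * 4096) * 8

def throughput(name, compress, rounds=100):
    start = time.perf_counter()
    for _ in range(rounds):
        compress(data)
    elapsed = time.perf_counter() - start
    mib = len(data) * rounds / (1024 * 1024)
    print(f"{name}: {mib / elapsed:,.0f} MiB/s")

throughput("lz4", lz4.frame.compress)
throughput("zstd-3", zstandard.ZstdCompressor(level=3).compress)
```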

Hmm, I didn't realize an HBA was worth using over on-board. When you say redundancy, do you mean you would put each drive of a mirror on a different controller? E.g. two drives in a mirror, one on on-board and one on the HBA, so that if the HBA card dies, the pool is still accessible through the on-board side?

This was me talking about the Ethernet NIC, not the CPU. I was asking whether Realtek or Killer 2.5GbE NICs work well with TrueNAS or not.

In serverland, no one uses on-board SATA; that's mostly a consumer thing. I personally have never had trouble with my SATA controllers, but apparently there are different kinds of on-board controllers and some models can cause problems. "You're probably fine".
And yes, I use both the HBA and on-board SATA for my HDD bays: one breakout cable from the HBA and the other one to on-board SATA. If either the HBA or on-board SATA freaks out, I don't lose all drives, just one side of each mirror.
This usually isn't necessary, and most setups don't have multiple controllers, but avoiding a single point of failure when you have the option… I'll take it.
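
If you want to check which controller each disk actually hangs off (so your mirror legs really do straddle the HBA and on-board SATA), the /dev/disk/by-path names on Linux start with the controller's PCI address. A quick helper sketch (hypothetical, it just groups disks by that prefix):

```python
#!/usr/bin/env python3
"""Group disks by controller using /dev/disk/by-path (Linux)."""
from collections import defaultdict
from pathlib import Path

base = Path("/dev/disk/by-path")
if not base.exists():
    raise SystemExit("no /dev/disk/by-path on this system")

by_controller = defaultdict(list)
for link in sorted(base.iterdir()):
    if "-part" in link.name:  # skip partition entries, keep whole disks
        continue
    # e.g. 'pci-0000:00:17.0-ata-1' -> controller '0000:00:17.0'
    controller = link.name.split("-", 2)[1]
    by_controller[controller].append(link.resolve().name)

for controller, disks in by_controller.items():
    print(controller, "->", ", ".join(disks))
```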

And I needed the HBA anyway. 8x SATA ports are great, but if you have 8x HDD + a mirrored boot pair + a ZFS special vdev, on-board just isn't up to the task.
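
The port math, assuming the boot drives and the special vdev are both 2-way mirrors on SATA:

```python
# SATA port tally for the layout above (assumed: 2-way mirrors for boot and special vdev)
ports_needed = {
    "data HDDs": 8,
    "mirrored boot drives": 2,
    "ZFS special vdev (mirror)": 2,
}
print("SATA ports needed:", sum(ports_needed.values()))  # 12 -- more than on-board offers
```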

TrueNAS isn't an OS by itself. TrueNAS CORE runs on FreeBSD and TrueNAS SCALE runs on Debian (kernel 5.14-ish?). So check hardware compatibility against those operating systems.
