Home NAS / TrueNAS SCALE build

Hey everyone! Long-time lurker looking to build a NAS. I've been running a TrueNAS SCALE system for about a year on a Dell T1700.

Current system specs:
-Intel Xeon E3-1241 v3 @ 3.50GHz
-2x 8GB DDR3 ECC
-2x 14TB HDD pool
-2x 2TB Samsung 860 EVO SSD (apps pool)
-2x 256GB boot SSD
-LSI 9207-8i / SAS2308_2(D1) HBA
-Dell case (only two 3.5" bays)
-Dell motherboard
-65W idle power

My use case is storage of video, photos, and documents, plus Jellyfin, Nextcloud, and qBittorrent. I do edit 4K multicam video, but I can keep active projects on my workstation and use the NAS for archival storage once projects are complete. The cost to build a NAS that can edit native 4K isn't worth it for me. I don't use VMs.

My current limitations are:
-not enough drive bays for expansion
-no IPMI, and no iGPU, so I can't get hardware-accelerated transcoding. This system can't handle a single 4K transcode, and Nextcloud also can't play any of my videos
-Dell proprietary motherboard and connectors, so I can't do a simple case transplant

Future wants:
-SFP+ 10GbE card
-GPU-accelerated transcoding
-6 to 8 3.5" bays
-similar low power consumption

I was looking at Alder Lake, AM4, AM5, X12, and EPYC platforms.

ASUS Pro WS W680-ACE $329
Intel Core i3-12100 $100
$429

EPYC 7302P
Supermicro H11SSL-i
32GB RAM
$432

ASRock Rack X470D4U (AM4) $270
Ryzen 5 PRO 5650G $180
$450

ASRock Rack B650D4U $365
Ryzen 5 7600X $219
$584

X12:
Supermicro MBD-X12STH-LN4F $379
Xeon E-2324G $235
$614

Additional parts for the build:
Rosewill RSV-R4000U 8-bay chassis $180
Corsair RM550x PSU $150
Solarflare SFN5122F NIC (PCIe 2.0 x8) $20

I was leaning towards the consumer platforms until I saw some eBay listings for used EPYC motherboard, CPU, and RAM combos that make it the cheapest option, and it also has about 4x the PCIe expansion of all the other options. Let me know what you think!

ASUS ProArt X670E-CREATOR + CPU + an Intel Arc A380 card?
It also does ECC, which you probably want. The 10G is Marvell, so it is what it is, but it'll probably be fine for your use case if you can live without SFP+.
Unless you need a rack case, look for a Fractal Design XL case instead; it'll work just fine for your needs.
$150 for a 550W PSU is, uhm… not a good deal. This one is actually a very decent PSU with the mail-in rebate applied:

https://www.cybenetics.com/d/cybenetics_GB6.pdf (test data)

You can likely reuse your LSI card, but you might want to look into something a bit newer and more power-efficient. The ASM1166 might be of interest if you can live with slightly lower throughput when all drives are spinning rust and fully utilized.
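To put a rough number on that caveat: a back-of-envelope sketch of the ASM1166's uplink versus six fully-streaming hard drives. The per-drive sequential figure here is an assumption on my part (roughly what a modern 14TB drive does on its outer tracks), not a measurement:

```python
# Back-of-envelope check (assumed numbers): how close six saturated HDDs
# get to the ASM1166's PCIe 3.0 x2 uplink.
PCIE3_LANE_MBPS = 985          # ~usable MB/s per PCIe 3.0 lane after encoding overhead
uplink = 2 * PCIE3_LANE_MBPS   # the ASM1166 has a PCIe 3.0 x2 uplink
hdd_seq = 270                  # assumed outer-track sequential MB/s per HDD
drives = 6                     # the ASM1166 exposes six SATA ports

demand = drives * hdd_seq
print(f"uplink: {uplink} MB/s, worst-case demand: {demand} MB/s")
print(f"per-drive ceiling when saturated: {uplink / drives:.0f} MB/s")
```

So it only pinches when every drive streams sequentially at once (scrubs, resilvers); typical mixed workloads won't notice.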

The NIC you linked to uses a rather ancient Ethernet controller, so I'd recommend you look for something else.

I have tried the W680 (with a 13600K), X670 (with a 7900X), and finally settled on a Zen 2 EPYC with a PCIe Gen4 board. Ultimately I went with the EPYC for the PCIe lanes, because I wanted to build an all-flash NAS and U.2 drives were the best $/TB. As you have found, the deals on used Zen 1/Zen 2 EPYCs are pretty tough to beat right now. I am happy with my choice, but here's a word of caution if low power consumption is super important to you.

In my experience (n=1), EPYC systems use more idle power than their consumer counterparts. For example, my EPYC 7532/Tyan S8030 with 128GB DDR4-2666 idles right around 50W, compared to 40-ish for the 7900X/X670 with 64GB, and ~27W for the 13600K/W680 with 64GB. These are readings from the wall with a single M.2 boot drive for testing. The increased idle draw and massively reduced single-core performance were worth it for me because I traded spinning hard drives for U.2 NVMe and ended up with more or less the same power usage, +/- 10% or so. And as soon as you ask the machine to, you know, do something, power usage on all of the platforms jumps.

On the plus side from a power consumption standpoint with EPYC, you wouldn't need an HBA, since most EPYC boards support more than 8 SATA drives out of the box.


As much as EPYC sounds cooler, you also need to keep in mind that IPC is a lot higher on later generations of Zen. While Passmark might not be optimal, it shows some quite interesting results in that regard: AMD Ryzen 9 7900 vs AMD EPYC 7282 vs AMD EPYC 7532 vs AMD Ryzen 9 7950X [cpubenchmark.net] by PassMark Software

Excellent point: there is no question that Zen 3 and Zen 4 parts crush Zen 2, and it scales with clock speed, so the consumer parts really trash EPYCs in single-core performance. For my specific use case I traded single-core performance for multi-core and (more importantly) PCIe lanes. I also could have gone with the W680 and an NVMe card with a built-in PCIe switch, but I definitely got sucked in a little bit by ooooh, enterprise gear…

Dizzy,
I think the Marvell AQtion 10GbE might not be amazing on TrueNAS compared to an older Intel or Solarflare SFP+ NIC, and the X670E is $70 more than the board I listed. I think the power consumption of 10GbE copper would be similar to an older SFP+ fiber NIC, right?
That PSU looks great, and good point on the case; the Fractal one has more bays for less money.
I looked into the ASM1166, and it seems not to be recommended compared to an LSI card. I will look at some newer LSI cards and see what is available and what the power consumption would be. I got mine for like $20 with cables, and it is great. Some of the motherboards I listed actually have 8 SATA ports, so the HBA would not be needed.

Adman,
50W idle is not that bad for EPYC… My current Xeon system is around 35W with no HDDs or add-in cards. I do think the lowest power consumption might actually be the X470D4U with an APU: without needing a GPU or HBA in a PCIe slot, I could just find a low-power SFP+ NIC. That is probably the lowest-power option with ECC support and SFP+. EPYC would maybe be like 30W more at idle. Realistically, if I went with that platform I would probably add more GPUs, so idle power would be higher. My electric rate is $0.18/kWh, so not too bad.

TrueNAS CORE (FreeBSD) does not have a driver for it; Linux (SCALE) should, I guess. I have that exact mainboard myself, and it runs great on 14.0-STABLE, but I don't use the Marvell NIC or have any intention to, so it's not an issue for me. What makes that board nice is that it supports x8/x8/x2 on the PCIe slots, which can be useful in your case compared to the much more common x16/x4(/x2), plus ECC, which isn't very common at all. FWIW, I guess you're in NA, but in most of the EU you also get a 5-year warranty for free with the ProArt series; that promotion doesn't seem to be available in NA as far as I can tell.

No idea about power consumption but I would assume it’s likely similar.

I'm not sure where you looked, but the ASM1166 controller works great on my RockPro64 with FreeBSD 13.2-RELEASE, and the Unraid community has reported it works fine. The idea here is to make use of all the PCIe slots without running into weird issues: one is occupied by a video card (x8), one by a NIC (x8 seems to be the norm for now for 10GbE SFP+; this will likely change with PCIe 4 and 5 support), and finally some kind of HBA to expand beyond the 4 SATA ports available.

Yeah, frankly, I expected idle to be higher. I was pleased that I was able to trade my spinners for NVMe at slightly lower power, even though the NVMe drives I bought still idle at almost 8W(!!) each.

If you're fine with a single SFP+ NIC, the Mellanox ConnectX-3 CX311A is really power-efficient. Not sure about support on FreeBSD, but it's great on Linux.

Mellanox in general has good support in FreeBSD.


That one looks perfect! Since it is PCIe 3.0, it would work fine in a PCIe 3.0 x4 slot, like the one in my gaming computer.


Otherwise, I see a lot of praise for Chelsio cards, which are also quite efficient and don't cost an arm and a leg.

For example: Chelsio Dual Port T520-CR 10GbE Ethernet Unified Wire Adapter | eBay (no idea about the seller, etc.)
