All-Flash NAS Build Sanity Check

Hi folks! First-time poster here. I’ve recently been looking into aggregating all my assorted external HDDs into a single “portable” unit. Unfortunately I have to admit I fell victim to “scope creep” and have ended up with the current theoretical build. I’ve read that the HBA I plan to use requires a UBM-compliant backplane after a specific firmware version, along with many other small gotchas. I was hoping all the helpful people on here could have a gander and see if I’ve made any obvious mistakes. The list is as follows:

- 16x Kioxia CD6 15.36TB drives (+2 as cold spares)
- 3x IcyDock MB118VP-B drive bays
- ASRock Rack W680D4U-2L2T
- Intel i7-12700
- 2x HighPoint Rocket 1580 / Broadcom P411W-32P

So my current concerns are with the HBAs. The motherboard requirements (iGPU for transcoding/VMs with TrueNAS SCALE, 10GbE, mini-ITX/micro-ATX, ECC support, 2x PCIe 4.0 slots) mean that one of the HBAs will have to run at PCIe x8 :frowning:
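A rough back-of-the-envelope on that x8 slot, assuming ~1.97 GB/s of usable bandwidth per Gen4 lane and the CD6-R’s ~6.2 GB/s spec-sheet sequential read (both numbers are ballpark assumptions, not measurements):

```python
# Oversubscription estimate for one HBA running at PCIe 4.0 x8 with 8 drives.
# Assumed figures: ~1.97 GB/s usable per Gen4 lane, ~6.2 GB/s sequential
# read per Kioxia CD6-R (spec-sheet ballpark).
lanes = 8
gb_per_lane = 1.97
drives = 8
drive_seq_read = 6.2

slot_bw = lanes * gb_per_lane          # ~15.8 GB/s for the x8 slot
drives_bw = drives * drive_seq_read    # ~49.6 GB/s if all 8 drives go flat out
print(f"x8 slot bandwidth: ~{slot_bw:.1f} GB/s")
print(f"8 drives flat out: ~{drives_bw:.1f} GB/s")
print(f"oversubscription:  ~{drives_bw / slot_bw:.1f}x")
```

In practice the 10GbE link would be the ceiling for network traffic anyway, so the x8 slot should mostly matter for local scrubs and resilvers.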

I’m guessing the IcyDock isn’t a UBM-compliant backplane, so there are concerns with that, along with S.M.A.R.T. reporting under ZFS and the PCIe bus errors that people have been getting. I appreciate there might not be a real answer to this.

The plan is to have 2x 8-wide RAIDZ2 vdevs (or 3x 6-wide RAIDZ2, not quite sure at the moment).
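For my own sanity, here’s a rough on-paper comparison of the two layouts (usable capacity ignores ZFS overhead/slop, and I’m leaning on the usual rule of thumb that RAIDZ random IOPS scale with vdev count):

```python
# Paper comparison of the candidate layouts with 15.36 TB drives.
# Usable figures ignore ZFS overhead/slop, so treat them as upper bounds.
drive_tb = 15.36

def raidz2(vdevs, width):
    data_per_vdev = width - 2              # RAIDZ2: 2 parity drives per vdev
    return vdevs * width, vdevs * data_per_vdev * drive_tb

for vdevs, width in [(2, 8), (3, 6)]:
    total_drives, usable_tb = raidz2(vdevs, width)
    print(f"{vdevs}x {width}-wide RAIDZ2: {total_drives} drives, "
          f"~{usable_tb:.0f} TB usable, ~{vdevs} drives' worth of random IOPS")
```

If my arithmetic is right, both land at roughly the same usable capacity, but 3x 6-wide needs 18 drives, i.e. the two cold spares would go into the pool.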

Oh, and I plan to build this into a Pelican case :confused:


What can I say, I like tactile switches :slight_smile:

1 Like

Do you already have the motherboard/CPU? If not, you could go for something like the ASRock Rack ROMED6U-2L2T. Obviously you’re then trading single-core performance for lots of cores/PCIe lanes, but for an NVMe storage server it seems worth it?

1 Like

You might have some trouble getting that sheet metal bent like that; most places will require that a base be at least twice as wide as a flange if it’s flanked by flanges on two sides.

Basically, the green line needs to be twice as long as the red lines:

1 Like

+1 on the AMD switch, with a twist: I’d suggest looking into AMD EPYC Genoa. Fair warning, it’s not cheap.

Suggestions include:

- EPYC 9124 CPU
- EPYC 9354P CPU
- ASRock Rack GENOAD8UD-2T/X550
- Tyan Tomcat HX S8050 (S8050GM4NE-2T)

As said before, not exactly cheap stuff. But man, imagine the PCIe lanes! And bandwidth! :exploding_head:
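Back-of-the-envelope on those lanes, assuming the 128 PCIe 5.0 lanes of a single-socket Genoa board and an arbitrary x16 + x8 set-aside for a NIC and boot/misc (the set-aside is purely my assumption):

```python
# How far a single-socket Genoa lane budget stretches for x4 U.2/U.3 drives.
platform_lanes = 128          # PCIe 5.0 lanes on a 1P SP5 (Genoa) board
reserved = 16 + 8             # assumed set-aside: x16 NIC slot + x8 boot/misc
lanes_per_drive = 4

drives = (platform_lanes - reserved) // lanes_per_drive
print(f"direct-attach x4 drives with lanes left over: {drives}")
```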

1 Like

Thanks for the suggestion! I admit I didn’t consider that.

Hi, I did consider AMD, but the integrated GPU is sort of a must, unfortunately. Edit: although now, with the abundance of x16 slots, it is an interesting idea.

Kioxia CD6s are pretty old at this point; the CD8 is the latest generation. Unless you can get them really cheap, I’d rather get CD8s or the Micron 7000/9000 series. The CD6 does about 40k write IOPS, which is SATA SSD territory.

And why do you want all-flash? Because it’s faster? With an 8-wide RAIDZ, you leave a lot (most) of the IOPS on the table. Paired with comparatively slow CD6s, that’s certainly not what I would expect from an all-flash system. You only use 8-wide RAIDZ when you predominantly want capacity, not performance.

200TB raw capacity? → HDDs are the way to go. 6x HDD will saturate your 10Gbit networking. Also saves like 10k bucks.
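Rough numbers behind the 10Gbit claim; the ~250 MB/s per disk is an assumed ballpark for large modern HDDs on sequential work, not a measurement:

```python
# Why a handful of HDDs already saturates 10GbE for sequential transfers.
hdd_mb_s = 250                      # assumed ballpark per large modern HDD
hdds = 6
link_mb_s = 10_000 / 8 * 0.95       # 10 Gbit/s minus rough protocol overhead

pool_mb_s = hdds * hdd_mb_s
print(f"6 HDDs sequential:    ~{pool_mb_s} MB/s")
print(f"10GbE after overhead: ~{link_mb_s:.0f} MB/s")
print("pool > link" if pool_mb_s > link_mb_s else "link > pool")
```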

And there is no way to fit 16x NVMe onto a consumer board; that needs a server board with the lanes. See the recommendations above. With a PCIe-switch HBA you can fit more drives per slot, but they still share the slot’s bandwidth.
8x NVMe (PCIe switch on an x16 slot) plus whatever M.2 slots come with the board is the most you can get out of a consumer board.

But if you use all your lanes for NVMe, you don’t have any slots left for networking (25G/100G).
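The lane math, roughly, if I remember the Alder Lake figures right (16x Gen5 + 4x Gen4 from the CPU; chipset lanes sit behind a shared DMI uplink, so I’m leaving them out):

```python
# CPU lane budget vs. what 16 directly-attached x4 NVMe drives would want.
cpu_lanes = 16 + 4            # Alder Lake desktop: x16 Gen5 + x4 Gen4 (chipset excluded)
drives = 16
lanes_per_drive = 4

needed = drives * lanes_per_drive
print(f"needed without a PCIe switch: {needed} lanes")
print(f"available from the CPU:       {cpu_lanes} lanes")
print(f"shortfall: {needed - cpu_lanes} lanes -> switch HBAs or a server platform")
```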

1 Like

I just found out about that constraint last week because I’m trying to make my own custom case as well. The big manufacturers can get away with bends like that because they have custom tooling, but all the low-volume shops rely on bog-standard sheet metal brakes, which aren’t as versatile.

Can’t you use SFF-8654 to 8x U.3 cables to plug 8 NVMe drives into each SFF-8654 port on a good RAID/HBA card? Microchip’s big cards are supposed to support this; they can run each NVMe drive at a x1 PCIe link rate.
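Rough per-drive ceiling if each drive really does train at x1, assuming Gen4 signalling end to end (I haven’t verified exactly how those cards negotiate it):

```python
# Per-drive and per-port bandwidth when an 8i SFF-8654 port fans out to
# 8 drives at x1 each. Assumes PCIe 4.0 links (~1.97 GB/s usable per lane).
gb_per_gen4_lane = 1.97
drives_per_port = 8

per_drive = 1 * gb_per_gen4_lane
per_port = drives_per_port * per_drive
print(f"x1 Gen4 per drive:           ~{per_drive:.1f} GB/s")
print(f"8 drives behind one 8i port: ~{per_port:.1f} GB/s aggregate")
```

~2 GB/s per drive is well below what the drives can do locally, but still comfortably above a 10GbE link.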

1 Like

CD6s are pretty old, true. My issue with HDDs is the “drop resistance?”; this build is meant to survive bangs and similar physical disturbances, and I’ve had two drives fail that way, so I’m a bit biased at the moment. The form factor is pretty essential too, and I also don’t think I can find a way to sustain PCIe 5.0 speeds, tbh. All very good points, lots to think about.

Edit: Yep, not enough PCIe lanes, don’t know how I missed that lol. I guess it always helps to get more experienced people to have a look. Looks like I have to figure out a way to get some EPYC action going and find a more robust PCIe mounting solution.

Perhaps 3x 6-wide is the better way to go then, for IOPS’ sake.

Are you limited to mATX? If not, there are lots of great 2nd-gen EPYC CPU/motherboard/memory bundles over on the ’bay. I do think most of them are ATX though. I’m just about to build an all-flash NAS with an EPYC 7532, a Tyan S8030 motherboard, and 5x 7.68TB NVMe drives.

FYI: earlier this year I managed to purchase a Supermicro H11SSL-i board, an EPYC 7551P CPU (current price not reflective of what I paid!), and a 2U SP3 cooler with fan for under 500€ from AliExpress. Another 300€ got me 4x 32GB Samsung RDIMM ECC RAM from a (relatively) ‘local’ supplier.

Board and CPU are 1st-gen EPYC, so PCIe 3.0. Still plenty fast with 3.5GB/s transfer speeds, and PCIe lanes aplenty: 3x x16 slots plus 3 more x8 slots. A “cheap” HBA-like solution could be a PCIe x16 to 4x M.2 adapter, where each M.2 slot then carries an M.2 to 6x SATA adapter. All available from AliExpress.

- PCIe x16 to 4x M.2 adapter
- M.2 to 6x SATA adapter

Just make sure the x16 slot has 4x4x4x4 bifurcation enabled :slight_smile:

[edit: there’s an M.2 to U.2/SFF-8643 adapter ← link!]
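To put rough numbers on that stack-up (port counts just multiply out the adapters above; the PCIe 3.0 x4 and SATA figures are the usual theoretical/practical ceilings):

```python
# What one x16 slot yields with the adapter stack described above:
# x16 -> 4x M.2 via 4x4x4x4 bifurcation, each M.2 -> 6x SATA.
m2_per_slot = 4
sata_per_m2 = 6
gen3_x4_mb_s = 3900            # ~usable MB/s per PCIe 3.0 x4 M.2 slot
sata_mb_s = 550                # practical SATA III ceiling per port

ports = m2_per_slot * sata_per_m2
group_demand = sata_per_m2 * sata_mb_s
print(f"SATA ports per x16 slot: {ports}")
print(f"per-M.2 uplink: ~{gen3_x4_mb_s} MB/s vs "
      f"{group_demand} MB/s if all 6 SATA SSDs run flat out")
```

So even with six SATA SSDs per adapter going flat out, the Gen3 x4 uplink shouldn’t be the bottleneck.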

Unfortunately, the largest I could design around is an mATX board.

This might interest you? link
(I have one, very compact board. May not have sufficient PCIe lanes though)

If you’re set on NVMe, then I would get that ASRock Rack Rome mATX board. Gobs of PCIe Gen4 lanes, IPMI, 10GbE built in. I would definitely make sure that the IcyDock bays are able to get enough airflow out the back, otherwise your NVMe drives will COOK.

You’ve got more motherboard flexibility if you want to go with SATA/SAS SSDs, since those don’t require nearly as much PCIe bandwidth.

1 Like

That’s true; one concern is the cabling obstructing the airflow with this amount of clearance. Good point!

I own a Broadcom P411W-32P. It’s a nice card despite the quirks that come with it. You are right about the IcyDock enclosures not qualifying as UBM backplanes; I’ve resorted to using older firmware in order for all of my direct-attached devices to be usable.
The IcyDock enclosures are not required for SSD connectivity or functionality.
You could grab M.2 SSDs for the sake of cost (bear in mind the lack of PLP on consumer drives).
There are cables out there that allow you to connect up to 32 devices per card. If you go that route, you’ll be limited to the bandwidth of a single PCIe x16 slot, though.
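For a rough idea of the per-device share with that 32-device fan-out (just dividing the slot evenly; real workloads rarely hammer every drive at once):

```python
# Per-drive share of a PCIe 4.0 x16 slot split evenly across 32 NVMe devices.
slot_gb_s = 16 * 1.97          # ~31.5 GB/s usable for a Gen4 x16 slot
devices = 32

print(f"per-device share when all 32 are busy: ~{slot_gb_s / devices:.2f} GB/s")
print(f"aggregate is still ~{slot_gb_s:.0f} GB/s, far above 10GbE (~1.2 GB/s)")
```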

Here’s a reputable site for cheap storage: NVMe SSDs on ServerPartDeals

Several folks are right on the money in suggesting ASRock Rack microATX server boards. +1
