2 x 10Gb SFP+ NAS Project

I understand this is a NAS project and not a PC per se, so please feel free to relocate this if in the wrong category.

I’m revamping my home lab and network. Last weekend, I zapped my ESXi hosts and rebuilt them on Proxmox. I removed the RAID cards from each host and installed 10Gb SFP+ cards.

I have a 12 Port SFP+ MikroTik CRS309-1G-8S+IN switch on the way and intend to hook everything up to that for end-to-end 10Gb throughput. Each host currently has internal storage on SSDs, but I’d like to take a shot at a shared storage unit with NVMe drives running ZFS RAIDZ2.
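For a rough sense of what RAIDZ2 costs in capacity, here’s a back-of-the-envelope sketch. The drive count and size are placeholders, and this ignores ZFS metadata and padding overhead:

```python
# Rough RAIDZ2 usable-capacity estimate: two drives' worth of space
# goes to parity; the rest is (approximately) usable.
def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    if drives < 4:
        raise ValueError("RAIDZ2 needs at least 4 drives")
    return (drives - 2) * drive_tb

# e.g. six hypothetical 2 TB NVMe drives -> roughly 8 TB usable
```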

My project would need to satisfy the following points:

-Decent CPU with built-in graphics (so I can use the x16 PCIe slot for NVMe)

-Minimum of four NVMe drives, but I would love to have six if possible as one large RAID array. (Will depend on the number of PCIe lanes available, of course.)

-2 x SFP+ 10Gb connectors on the NIC.

-Quiet. Fanless if at all possible, but if not, then be quiet! fans.

-Small form factor. (My home lab is behind the TV to keep the missus happy. Literally out of sight, out of mind.) 8" is the max width; there is some wiggle room there, but not much.

Anyone been down this road before? Any gotchas?

No, sorry, this one has the 10Gb NICs.

2 Likes

Recommend an AM5 CPU on a B650E motherboard of your choice that works with ECC memory.

I have an Asus B650E combined with a 7950X and 6000 MHz ECC; it performs really well all the way up to a ConnectX-4 at 25 Gbit/s. Don’t know the switch.

I recommend a Gen 5 NVMe drive for the NAS OS and frequently used files, not using ZFS (a fast filesystem such as ext4 instead), then separate ZFS NVMe storage for backups (inside the same motherboard = NAS system), because ZFS is slower than 10 Gbit/s. The B650E has two Gen 5 NVMe slots plus two Gen 4 NVMe slots. And if it’s a true NAS, you can add two more on the x8 PCIe slot left over after the SFP+ card, totaling six NVMe drives, of which four are Gen 5 and two are Gen 4.
ZFS often manages only 300-400 MByte/s file transfers even with good hardware, using stripe, mirror, or RAID10.

If looking at cheaper CPUs, avoid anything too old and target at least 5 GHz when boosting, which is readily available on the AM5 platform.

1 Like

BTW: RAIDZ and RAIDZ2 are slower than RAID10 and mirrors. Not by a lot, but enough that it annoyed me, so I restructured one ZFS pool to RAID10 and another as a mirror.

There’s a gen 2 version of Flashstor coming soon. It has a Ryzen Embedded CPU with ECC support, dual 10G ports (but likely not SFP+), and PCIe 4 (at least on some of the ports).

3 Likes

For NAS, the Asustor Flashstor 12 Pro might fit the bill. That thing is about the size of a PlayStation 2 and fits up to 12 NVMe drives, though the CPU is not that strong, and you might need something beefier for a streaming PC.

Combining that with a small ITX compute server in something like the InWin Chopin (or even better, something like the 4.3 liter J-Hack Pure X) could take you a long way. If you really want to go crazy though, you would probably need to invest in some CAD and 3D printing.

1 Like

8S+ means 8 ports; as an owner of the switch, I can confirm it has only 8 SFP+ ports. :wink:

1 Like

Correct! I had another switch up on my screen as I typed. Eight is still more than enough.

1 Like

I looked at the released version of the Flashstor, and size-wise it was perfect, but the single 10Gb port didn’t do it for me. ServeTheHome did a review. The limited number of PCIe lanes and the single 10Gb NIC limit the performance of the drives. If there is a bottleneck, I want it to be the network.

The newer unit looks better, but with only 20 lanes and 6 drives, there will be a deficit. A single PCIe 4.0 lane gives roughly 2 GB/s. I’ve been looking for an x16 PCIe 4.0 NVMe RAID card, but they are thin on the ground and many aren’t available for direct purchase. Most only have four slots, which makes sense.
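The "roughly 2 GB/s per lane" figure checks out for PCIe 4.0. A quick sketch of the per-lane math (128b/130b line encoding for Gen 3 and up; protocol overhead is ignored, so real throughput is a bit lower):

```python
# Approximate raw bandwidth per PCIe lane, by generation.
# Gen3+ uses 128b/130b encoding; link/protocol overhead is ignored here.
PCIE_GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane

def lane_gb_per_s(gen: int) -> float:
    return PCIE_GT_PER_LANE[gen] * (128 / 130) / 8  # GB/s

# Six x4 drives would want 24 lanes; a 20-lane mainstream CPU is
# exactly the deficit mentioned above.
```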

The SFP+ choice was made because DAC cables are cheap, rugged (I have cats), and perform as well as fiber at 3 m or less. Cost was a factor as well: for the price of one decent RJ45-to-SFP+ adapter, I can get four DAC cables.

I could get another nine-bay QNAP NAS for roughly $600 with the 2 x 10Gb SFP+, but it would use SSDs rather than NVMe drives. I use one of these for my home entertainment NAS and it works well.

image

1 Like

That’s double the bandwidth of the 10Gb network leaving the device. The single-lane connection is not a bottleneck for the device’s main function as a NAS (network attached storage).

But admittedly, I have the same gut reaction: this device’s architecture is very much a compromise.

Maybe in your idea of what an SMB NAS should do… 1 GB/s of sustained network bandwidth for SMB/home use will never be hit by normal workloads unless you are testing specifically for it…

If you want more network/NVMe performance, you need more PCIe lanes than an AM5 or Intel consumer platform can provide, and your requirement for it being SFF/silent does not allow a lot of wiggle room…

1 Like

completely unrelated, but did you ever finish the Kallax case project? wondering if you have the 3d stuff available for purchase or if i could commission it from you? thanks <3

Hi, take a look at this mobo:

https://www.asrockrack.com/general/productdetail.asp?Model=B650D4U3-2L2Q/BCM#Specifications

In the PCIe x16 slot, put an Asus Hyper M.2 card with four NVMe drives, put another NVMe drive in a PCIe x4 slot via an adapter, and put the last NVMe drives in the onboard M.2 slots.
On the connectivity side, the board has 25 GbE SFP28 ports, which are backwards compatible with SFP+ and can run at 10Gb speed.

1 Like

Nah, been so busy with life in general. What I do have is a bit of HW mockups, a backplate, and some ideas of how to make a robust case using only a 10x10x10 cm 3D printer, but it is still very early alpha stuff and nothing has even been printed yet. Oh, and I want the damn front grille to look nice…

One of these days, maybe. If you know CAD and/or 3D printing, I could give a rundown presentation. I use FreeCAD for this since it’s just a hobby; it’s free and works more or less flawlessly in Linux, but… :slight_smile:

Perfectly valid observation. The norm seems to be 20 PCIe lanes for the mid-range CPU.

After research, the Gen 2 Asustor Flashstor coming out in Q3 does seem to have what I’m looking for: 2 x 10Gb with 12 NVMe slots and ECC RAM. Worth waiting a few months to see if it lives up to the hype.

2 Likes

Even with only one PCIe lane, an NVMe drive still works. 12 NVMe drives can be implemented with 12 PCIe x1 links.

This annoyed me to no end while looking into faster storage.

“Yeah, we can do NVMe,” and then you get PCIe 3.0 x4 split across everything.
2 x 10Gbit NICs + 4 mid-tier M.2 SSDs will be severely bottlenecked in that scenario, and I hate it.

4 GB/s may sound like a lot, but a single Kingston Fury Renegade will saturate that (until it throttles for various reasons).
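To put rough numbers on that bottleneck, here is a sketch of aggregate demand vs. a shared PCIe 3.0 x4 uplink. The ~3.5 GB/s per-drive figure is an assumed ballpark for a mid-tier Gen 4 M.2 SSD, not a measured value:

```python
# Sketch: everything hanging off one PCIe 3.0 x4 uplink.
uplink_gbs = 4 * 8.0 * (128 / 130) / 8  # 4 lanes of Gen 3, ~3.94 GB/s
nic_gbs = 2 * 10 / 8                    # 2 x 10 Gbit NICs = 2.5 GB/s
ssd_gbs = 4 * 3.5                       # 4 M.2 drives at ~3.5 GB/s each (assumed)

demand = nic_gbs + ssd_gbs              # peak demand far exceeds the uplink
print(f"uplink ~{uplink_gbs:.2f} GB/s, worst-case demand ~{demand:.1f} GB/s")
```

Peak demand obviously only occurs when the NICs and all drives run flat out at once, but even the two NICs alone eat most of the uplink.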

Quick update. Got the 2 x 10Gb connections up on both Proxmox hosts and connected to the MikroTik switch with 802.3ad bonded connections. Regardless of MTU 1500 or 9000, performance is exactly the same, down to the MB.

Now I just need the NAS.

image

Your results are far from 10Gb. Make sure the connection is working in 10Gb mode.
You should expect around 9000 Mbits/sec for a 10Gb connection.
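As a sanity check on what a clean 10Gb link should show, here is the theoretical TCP goodput math (assumes IPv4 with no TCP options; real iperf numbers land slightly below these):

```python
# Theoretical TCP goodput on a 10 Gbit/s Ethernet link.
# On-wire overhead per frame: preamble 8 + Ethernet header 14 + FCS 4
# + inter-frame gap 12 = 38 bytes; IPv4 + TCP headers = 40 bytes.
def tcp_goodput_gbps(mtu: int, line_rate_gbps: float = 10.0) -> float:
    wire_bytes = mtu + 38   # bytes on the wire per frame
    payload = mtu - 40      # TCP payload bytes per frame
    return line_rate_gbps * payload / wire_bytes

# MTU 1500 -> ~9.49 Gbit/s of payload; MTU 9000 -> ~9.91 Gbit/s
```

The small gap between MTU 1500 and 9000 also explains why the earlier test saw identical numbers at both settings: framing overhead is not the limiting factor here.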

1 Like

Good catch!