40+TB all-flash home server: SATA or U.2/U.3, and how to connect them

Hello,

I started building a new homeserver a few months ago.
I am using:
Ryzen 5950X
AsrockRack X570D4U 2L2T
Arctic Liquid Freezer II 360 AIO with 3 Phanteks D30
Sliger 4U 20" Rack Case
4x32GB 3200MHz ECC RAM
Corsair SF750 PSU

So pretty much all that is missing is some storage.
My current homeserver is running on 4 18TB Exos drives and I have 3 different backup systems.
I would like to run this new homeserver on flash storage only.
The unfortunate thing is that I need a lot of storage. I'd like to have at least 40TB, more if possible. Since the board has 8 SATA ports, I was thinking about using 5-6x 8TB SATA SSDs.
Unfortunately those are still very expensive; the only cheap one is the Samsung 870 QVO (QLC), and a bit more expensive are some Samsung OEM drives without an end-user warranty. Going for 4TB drives would be cheaper, but I'd have to use something like 12-14 drives and RAIDZ2 instead of the RAID 0 I'd run on the smaller array, which as far as I know makes performance much worse.
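To make the trade-off concrete, a quick back-of-the-envelope sketch in Python (the drive counts and the RAID 0 vs. RAIDZ2 split are just the options I mentioned above, not a recommendation):

```python
# Rough usable-capacity comparison of the two layouts mentioned above.
# Parity choices follow the post; real ZFS pools lose a bit more to metadata and slop.

def usable_tb(drive_count: int, drive_tb: float, parity_drives: int) -> float:
    """Approximate usable capacity of a single striped/RAIDZ vdev."""
    return (drive_count - parity_drives) * drive_tb

# Option A: 6x 8TB SATA SSDs striped (RAID 0, no parity)
option_a = usable_tb(6, 8, parity_drives=0)
# Option B: 13x 4TB SATA SSDs in RAIDZ2 (two drives of parity)
option_b = usable_tb(13, 4, parity_drives=2)

print(f"6x 8TB, RAID 0:  ~{option_a:.0f} TB usable")   # ~48 TB
print(f"13x 4TB, RAIDZ2: ~{option_b:.0f} TB usable")   # ~44 TB
```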
The other method that might be a bit cheaper is going for u.2/u.3 drives. There are some that are cheaper than the sata drives and u.2/3 drives are also sometimes pretty cheap on ebay.

Unfortunately I don't really know how to connect those to a system without a server backplane.
As far as I know each drive takes 4 PCIe lanes. Since I can pretty much only use one x16 slot, plus maybe one more drive via an M.2 slot (5 drives combined), it might be hard to connect enough drives to the system.
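Putting rough numbers on that (a minimal sketch, assuming x4 per U.2 drive, a bifurcated x16 slot and one M.2 slot repurposed with an adapter cable):

```python
# Minimal PCIe lane-budget check for U.2 drives on the existing board.
# Assumes 4 lanes per drive, one x16 slot bifurcated x4/x4/x4/x4, and one
# M.2 slot used with an M.2-to-U.2 adapter cable. Figures are illustrative.

LANES_PER_U2_DRIVE = 4

connectors = {
    "x16 slot, bifurcated x4/x4/x4/x4": 16,
    "M.2 slot via M.2-to-U.2 cable": 4,
}

total_lanes = sum(connectors.values())
max_drives = total_lanes // LANES_PER_U2_DRIVE

print(f"Usable lanes: {total_lanes} -> max {max_drives} U.2 drives at x4 each")  # 5
```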

Do you guys have any recommendation on which way to go, how to save a few $/€ while still reaching my goal, and, if going with U.3, which "HBA"-type card to use?

Choosing SSDs instead of HDDs will not be cheap; if you already have a limited budget, think about whether it's worth it.

How many SATA ports do you currently have available? You can always buy a card with SATA ports, or look at a used HBA server card with the appropriate cables; these usually support quite a large number of disks.

First, I think your reasoning is sound (an all-flash home server is fast and power efficient).

SATA-based flash is generally a good choice for home servers, as prices keep dropping, at least for drives with capacities <=4TB. These are easily added to existing cases or can be put into hot-swappable enclosures. The con is that the total cost for the capacity you're looking for is quite a bit higher than spinning rust, and the number of drives required is non-trivial: they either don't fit on the mobo's SATA ports, or the drives needed to keep the count down (8TB+) are even more expensive.

U.2/U.3 or M.2 form-factor NVMe drives are the future because they are simply so much faster than SATA drives. They are only limited by the PCIe lanes available in the system. Older models (PCIe Gen3 based) and used enterprise drives have come down in price so much that they compete with SATA drives at the larger capacities (>= 4TB). A system with a requirement of 40+ TB of NVMe storage needs to be planned around the PCIe connectivity for all of these drives. Trying to add them to an existing mobo will require very expensive "HBA"-style cards; IMHO you don't need HBAs but PLX (PCIe switch) cards.
The AsrockRack X570D4U 2L2T, while generally a great mobo, is not a great choice when considering adding 6+ NVMe drives (e.g. 6x 8TB in RAID5/RAIDZ1).

Now you need to make some hard choices. Keep your existing HW and invest in SATA SSDs for speed and power savings, at significantly higher cost than HDDs of similar capacity. The investment in SATA SSDs (compared to NVMe) feels like a dead-end technology to me at this point; it's not really an investment in the future. And if it's not, you may as well use old-school HDD technology at a fraction of the cost.

Trying to make an NVMe-based 40TB storage system work will likely require you to invest in another mobo. I am not aware of non-janky add-on cards ("janky" being a 16x PCIe to 8x M.2 card with eight M.2-to-U.2 cables) that can supply sufficient M.2 slots or U.2 connectivity for 6+ 8TB drives. You're looking at Threadripper/EPYC from AMD or Xeon workstation/HEDT platforms that are either old or very expensive (or both).

1 Like

Earlier this year I managed to grab a Supermicro H11SSL-i board and an EPYC 7551P CPU for just about €500; add an SP3 cooler for another €20 or so (all from AliExpress), reuse your ECC RAM (provided it's R-DIMM, unbuffered sticks won't work) and you're good to go. Plenty of SATA ports and PCIe slots (all of which support bifurcation!). Depending on your needs you may want to shop for a lower core-count CPU (with higher clock speeds?).

I also got a 2TB Gen3 NVMe drive (Chinese brand) for about €110; prices have dropped even more since then. I'm currently working on a proof-of-concept (for myself really, I know it works anyway) using a 4x M.2 to PCIe x16 adapter with 1TB drives (I have 2x Lexar NM620 and will get more drives later, when budget permits) as a RAID6 for a self-served (semi-private) photo-hosting site.

You need a chassis with NVMe native backplanes and server boards with sufficient lanes. Filling e.g. 24 2.5" hotswap bays is very much OEM exclusive tech at this point. Supermicro only sells this stuff as complete systems, not barebones.
For smaller pools with 4-8 U.2, with a lot of pain, research and money, you can probably make something fly on a DIY basis.

I share your view on SATA. In 2023 it sits in a very unsatisfying position. You only use it because you don't have the lanes and/or because the drives fit into 3.5" bays, not because it has any advantages, not even on price. So it's NVMe or HDD for most people, until we get chassis for proper NVMe storage.
I could imagine a mini-ITX case with 8x U.2 hotswap bays and 100TB of raw storage capacity, paired with an undersized SFF EPYC board with 4x MiniSAS on board. But EPYC breaks the bank for most builds. It will sadly take years until we get such products, though the first rack servers will show up on eBay sooner.

Things like Tri-Mode adapters do this job: the Broadcom 9500 series and equivalents. 4x NVMe on an x8 card obviously comes with performance drawbacks. Boards with MiniSAS/OCuLink connectors allow for an on-board solution, but those are deep within server land. Although AsRock Rack does have an X570 variant with OCuLink instead of M.2.
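To give a rough idea of the drawback of hanging 4x NVMe off an x8 card (ballpark numbers, assuming a Gen4 HBA and roughly 2 GB/s of usable throughput per Gen4 lane):

```python
# Ballpark oversubscription estimate for a Tri-Mode HBA sitting in an x8 slot.
# Assumes a Gen4 host link and roughly 2 GB/s usable per PCIe 4.0 lane.

GBPS_PER_GEN4_LANE = 2.0   # approximate usable throughput per lane

drives = 4
lanes_per_drive = 4
host_link_lanes = 8        # x8 card

drive_side = drives * lanes_per_drive * GBPS_PER_GEN4_LANE   # ~32 GB/s
host_side = host_link_lanes * GBPS_PER_GEN4_LANE             # ~16 GB/s

print(f"Drive-side aggregate: ~{drive_side:.0f} GB/s")
print(f"Host uplink ceiling:  ~{host_side:.0f} GB/s "
      f"({drive_side / host_side:.0f}:1 oversubscribed at full load)")
```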

Then there is the cable problem. Most cables will only do PCIe 3.0. We have a 100-page thread on the forums that is often about dealing with PCIe 4.0 NVMe and finding compatible adapters and cables. Modern servers claim to run Gen5 backplanes and cabling, but that's irrelevant if you don't want to spend 20k to buy those things.

Consumer boards have 24 lanes, or 28 if you're feeling generous. That's 6x U.2 if you don't need anything else. And you need a lot of adapters/HBAs/whatever, as no board comes with all lanes dedicated to MiniSAS/OCuLink where you could just plug in your SFF connector.

This is intended market segmentation. If you want more lanes, buy fancy EPYC/Xeon board+cpu.

Pleb tier remains SATA HDD/SSD unless you invest a lot of money and use something like IcyDock instead of traditional bays. And that's not even talking about the cost of drives, which isn't trivial, as we all know.

I’m personally sticking to HDDs until I see a NVMe solution at a reasonable price point without going full server platform. Then I’ll let my HDDs be backup/archival-only.

1 Like

Theoretically speaking you can connect 6x 32TB NVMe enterprise-level SSDs to a consumer board. But expect a lot of questions when doing so. If you can spend the expected 5-6 figures on storage, you probably have the extra cash for server-level mobo+cpu.

I found used enterprise-level 16TB drives for just below $100/TB. Too steep for most homeserver applications.

I don't care about questions. If you don't need a PCIe NIC, GPU or anything else in your server, it will run fine, and you've saved $2000 on hardware and a lot of power, if 6x NVMe is all you want. You still need an HBA/PCIe switch, cabling and hotswap bays, making it roughly double the cost of going SATA before drives. And you still end up with the same form factor, as ITX cases aren't known for their 5.25" bays.

For anything beyond 6x NVMe, memory bandwidth and the CPU actually become a bottleneck. Even with 6x PCIe 4.0 drives, that's a potential 40GB/s of user data. So at some point, limited lanes aren't the only problem you face. We've seen Wendell choke NVMe drives just by not having enough memory bandwidth. Memory, CPU and networking have to grow too, so we're back to a 10k+ rack server.
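Rough numbers behind that claim (assuming ~7 GB/s per Gen4 drive and dual-channel DDR4-3200, both ballpark figures):

```python
# Back-of-the-envelope: aggregate NVMe throughput vs. dual-channel DDR4 bandwidth.
# All numbers are rough assumptions for illustration only.

drives = 6
per_drive_gbps = 7.0   # typical sequential read of a fast PCIe 4.0 x4 SSD, approx.
nvme_aggregate = drives * per_drive_gbps             # ~42 GB/s

# Dual-channel DDR4-3200: 2 channels * 8 bytes * 3200 MT/s = 51.2 GB/s theoretical peak
ddr4_peak = 2 * 8 * 3200 / 1000

print(f"{drives} drives at ~{per_drive_gbps:.0f} GB/s each: ~{nvme_aggregate:.0f} GB/s")
print(f"Dual-channel DDR4-3200 theoretical peak: ~{ddr4_peak:.1f} GB/s")
# Anything that touches the data more than once (parity, checksums, copies)
# eats into that headroom very quickly.
```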

If I had 8x NVMe bays and the hardware to support them, I'd get those Micron 7450 Pro 4TB drives at $65/TB. Prices have crashed quite considerably; you can now get new enterprise drives for less than top-of-the-line consumer drives. That alone makes U.2 interesting, because we all love cheap storage.
We were at $200/TB a year ago. This isn't a trivial matter.
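In concrete terms, using the per-TB figures above (a hypothetical 8-bay setup, not a quote):

```python
# Cost math using the per-TB figures quoted above ($65/TB now vs. $200/TB a year ago).

drives = 8            # hypothetical 8-bay U.2 setup
drive_tb = 4          # Micron 7450 Pro 4TB class
now_per_tb = 65
last_year_per_tb = 200

raw_tb = drives * drive_tb
print(f"{drives}x {drive_tb}TB = {raw_tb} TB raw")
print(f"At ${now_per_tb}/TB:  ${raw_tb * now_per_tb:,}")             # $2,080
print(f"At ${last_year_per_tb}/TB: ${raw_tb * last_year_per_tb:,}")  # $6,400
```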

Just to clarify - I wanted to point out the max capacity achievable with consumer boards along with the cost discrepancy between storage and the rest of the system when doing so.

Things are much more reasonable when using lower capacity devices. I have multiple nodes with 6 NVMe devices on consumer mobos.

Yes, the mobo can independently operate each NVMe device at peak performance. The challenge I have is assembling the devices efficiently into a single logical unit, e.g. using RAID or RAIDZ, without losing performance. Gen4 devices are so fast that consumer CPUs and RAM become the bottleneck on latency and bandwidth.

Amen.

While HDD pricing scales mostly linearly with capacity, high-capacity flash doesn't really follow that above 8TB.
Most people are usually totally fine with 4x PCIe 4.0 NVMe, especially with the network being a limiting factor. But buying 16TB drives puts a lot of money in a single basket. Once this gets back to normal $/TB levels, though, I can totally see ITX cases with 4x 16TB U.2/U.3 NAS builds, since we could do that with commodity hardware.

I think the consumers and industry need 2-3 years more before we see more straight-forward solutions, applicable for a broader audience.

I don't think the hockey-stick curve is going to go away anytime soon. The point of the "knee" will move.

These are exciting times for home labbers, because prices fluctuate and change rapidly. Use cases depending on or benefitting from NVMe devices that were totally out of reach due to cost can become reasonable, or at least tolerable, overnight.

40 TB NVMe with consumer products? Yeah, if we're talking 10x 4TB NVMe drives, you can easily do it with these; costs around $300:

Slap in a 5600G for $150 plus 32 GB DDR4 RAM for $50 and you should be good to go! You’re welcome! :smiley:

Oh, and do not forget ten of these, too:

Total cost is roughly $2.5k-$2.7k including drives.
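For what it's worth, the rough math behind that total (the ~$200 per budget 4TB M.2 drive is an assumption, the rest are the figures above):

```python
# Rough cost breakdown for the proposed 10x 4TB M.2 build, using the prices above.
# The ~$200 per budget 4TB M.2 drive is an assumed street price, not a quote.

platform = {
    "board + M.2 adapter cards": 300,
    "Ryzen 5 5600G": 150,
    "32 GB DDR4": 50,
}
drive_count = 10
price_per_drive = 200

drives_total = drive_count * price_per_drive
total = sum(platform.values()) + drives_total

print(f"Platform: ${sum(platform.values())}, drives: ${drives_total:,}")
print(f"Total: ~${total:,}")   # ~$2,500
```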

This is actually completely incorrect.

The ASUS Hyper and similar cards depend on available PCIe lanes and bifurcation capabilities of the motherboard.

X570 based mobos can divvy up PCIe lanes from CPU and chipset, but unless expensive PLX chips are added (and to my knowledge no X570 board has that) those are the physical limitations.

Many mobos come with multiple x16 PCIe slots, but they cannot all be electrically enabled at full width at the same time beyond the above limits.

X570 boards are limited to 24 (or 28 if you’re generous) PCIe lanes.

When paired with old CPUs, such as the 5600G, these are slower (gen3) lanes and fewer (because the 5600G reserves lanes for the iGPU).

In short - your picks will lead to disappointment when trying to connect and run more than 6 NVMe drives.
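A quick sanity check using the lane counts from this post (roughly 24 usable lanes, 4 per drive):

```python
# Sanity check: can an X570 platform feed 10 NVMe drives at x4 each?
# Uses the ~24 usable lanes figure from this post; exact splits vary by board and CPU.

usable_lanes = 24
lanes_per_drive = 4
requested_drives = 10

lanes_needed = requested_drives * lanes_per_drive      # 40
max_drives_at_x4 = usable_lanes // lanes_per_drive     # 6

print(f"Lanes needed: {lanes_needed}, lanes available: {usable_lanes}")
print(f"Max drives at a full x4 link without a PCIe switch: {max_drives_at_x4}")
```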

4 Likes

Sure, it depends on whether you really need the 4.0 lanes or not. A 5700X might be a better pick, but then you will also need a GPU, meaning you lose two M.2 drives on this specific motherboard. Maybe you can run headless and bifurcate anyway, though, and manage it over SSH.

Do you really need the 4.0 speeds over the 3.0 speeds though? Synthetic benchmarks aside, the difference is negligible in most cases, especially if you plan to fill it with cheap DRAM-less drives.

As for bifurcation, you are correct, though I do believe this particular motherboard allows for x4/x4/x4/x4. Would be awesome if someone could confirm this.

2 Likes

The discussion about those 4TB NVMe drives is kind of beside the point. If I were going to use 4TB drives, I might as well just use cheap SATA drives; besides a small speed advantage, it's kind of pointless to go for much more expensive M.2 NVMe drives.
The advantage of NVMe is that at 8TB the U.2/U.3 drives become cheaper than SATA, and at 16TB, as far as I know, there is no SATA SSD at all. So the more important question is how to connect 5-6x 8TB or 3x 16TB drives (if they become cheaper) to my existing hardware without a U.2/U.3 backplane.

Except SATA drives aren't cheap; they cost the same per TB as NVMe. M.2 drives are more expensive because of the small form factor, but you can buy less expensive M.2 NVMe drives equivalent to the SATA variants and pay the same at 4TB. So if you can get the lanes and connections sorted, there is little reason for SATA SSDs today. If you want cheap capacity, you go HDDs.

You get SATA today because you have free SATA ports and limited PCIe lanes, not because the drives are cheaper. The cost of NVMe is very much the connectivity and the lack of affordable hardware.

It’s a dilemma. Buy outdated legacy hardware or try to use modern tech for a server that runs for many years to come.

If we had affordable boards with more lanes (unlikely) or consumer-grade U.2 with cheap (per TB) 16TB drives (heck, even PCIe 3.0 is light-years ahead of SATA), it would be way easier to build a flash home server without bitter decisions and compromises toward SATA.

You can do 4x 16TB enterprise SSDs for ~€100/TB (the Micron 7450 is $71/TB plus tax at today's price in Germany) and get 40TB+ today with basically any modern consumer board. But you can't realistically expand beyond 4 drives, 6 if you scrape the barrel. The alternative is 30% cheaper consumer stuff, being either limited on capacity (M.2) or stuck with old tech (SATA). So it's either overkill or compromise; not a satisfying situation.
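Rough math behind that comparison (per-TB figures as quoted above, tax and cabling not included):

```python
# Cost sketch: 4x 16TB enterprise U.2 vs. a ~30% cheaper consumer route,
# using the per-TB figures quoted above (approximate; tax and cables excluded).

drives, drive_tb = 4, 16
enterprise_eur_per_tb = 100
consumer_eur_per_tb = enterprise_eur_per_tb * 0.7   # "30% cheaper consumer stuff"

raw_tb = drives * drive_tb
print(f"{drives}x {drive_tb}TB = {raw_tb} TB raw")
print(f"Enterprise U.2: ~€{raw_tb * enterprise_eur_per_tb:,.0f}")   # ~€6,400
print(f"Consumer route: ~€{raw_tb * consumer_eur_per_tb:,.0f}")     # ~€4,480
```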

1 Like

Can always go with these instead and have enough lanes:

Use 6 of those ASUS Hyper PCIe-to-NVMe cards to get 20 NVMe drive slots and use 2TB drives. That motherboard also comes with one of those cards, so you only need to buy 5 more. And if you wanted, just for good measure, use those 8 RAM slots to load up on a crap ton of memory and set it up as an additional read cache for the NVMe array. You can get 256GB of memory for $550, or buy some used memory on eBay and get 1TB for $2800.
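Rough capacity and RAM-cache math for that proposal (prices as quoted above, slot counts taken as given):

```python
# Rough capacity and RAM-cache cost math for the EPYC + Hyper card proposal above.
# Slot count and prices are the ones quoted in the post.

m2_slots = 20       # across the bundled card plus 5 additional Hyper cards
drive_tb = 2
print(f"{m2_slots}x {drive_tb}TB M.2 = {m2_slots * drive_tb} TB raw")   # 40 TB

ram_options = {
    "256 GB new":       (256, 550),
    "1 TB used (eBay)": (1024, 2800),
}
for name, (gb, price) in ram_options.items():
    print(f"{name}: ${price} (~${price / gb:.2f}/GB)")
```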

No need to buy Threadripper. Any EPYC Rome, for example, will do this for a fraction of the cost. But this is very much not a power-efficient AM4 home server anymore, and it requires buying a lot of new stuff. It still doesn't solve the backplane issue either.
From a cost perspective, I don't see this being better if 40-64TB is what you want; you have to buy a new server to be able to get cheaper drives. Scalability is obviously better, though.

Cheapest SATA 4TB SSD drive according to PC Part Picker: Leven JS600, $175
Cheapest NVME 4TB SSD drive according to PC Part Picker: Crucial P3, $200 (or TeamGroup MP34 at same price)

It's not "way more expensive", though admittedly 14% more is still more expensive. M.2 drives are getting cheaper and cheaper, but I do agree 5x 8TB SATA SSDs is tempting. We are at the limits of what the SATA interface can cope with, though… :slightly_smiling_face:
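The per-TB math behind that 14% figure (prices as listed above):

```python
# Per-TB comparison of the two 4TB drives listed above.

sata_price, nvme_price, tb = 175, 200, 4

print(f"SATA (Leven JS600): ${sata_price / tb:.2f}/TB")    # $43.75/TB
print(f"NVMe (Crucial P3):  ${nvme_price / tb:.2f}/TB")    # $50.00/TB
print(f"NVMe premium: {(nvme_price / sata_price - 1) * 100:.0f}%")   # ~14%
```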

2 Likes

Except there is no real market for 8TB SATA SSDs. We have a lot of options at 4TB; the 8TB bracket only has 1-3 models, all of them bottom-of-the-barrel SSD quality. So the compromises just continue. But if the worst flash pool the world has to offer is fine, then that's at least an option. Or go with 12/16 bays of 4TB drives and much better options in every possible way. An HBA plus IcyDock gets you 16 bays on even $50 boards, so there is no need for increased capacity in the SATA department; SATA ports are damn cheap compared to NVMe. I'd rather get proper 4TB drives for SATA and leave the higher-capacity options to NVMe.