EPYC build - when to get SAS controller on MOBO?

Hey Kids,

When would you get a MOBO with a built-in SAS Controller? Two examples of Super Micro MOBOs with and without:

H12SSL-CT - with controller

H12SSL-NT - without controller

Ubuntu host OS with VMs.

Captain obvious here, “if you plan to be using it”.

Most folks are doing software RAID these days, usually ZFS or btrfs on smaller systems (<5 or <10 disks), or they do scale-out storage with JBODs and something like Ceph. The typical exception is on-prem bare-metal Windows Server deployments, and Microsoft 365 subscriptions are taking over that segment.
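
For context, here's roughly what that software-RAID route looks like when scripted on the Ubuntu host. This is a minimal sketch assuming zfsutils-linux is installed; the pool name, layout, and device IDs are placeholders, not a recommendation for your exact setup:

```python
# Hypothetical sketch: creating a small ZFS pool from Python instead of using a HW RAID card.
# Assumes Ubuntu with zfsutils-linux installed; the by-id paths below are placeholders.
import subprocess

DISKS = [
    "/dev/disk/by-id/ata-EXAMPLE_DRIVE_1",  # placeholder device IDs
    "/dev/disk/by-id/ata-EXAMPLE_DRIVE_2",
    "/dev/disk/by-id/ata-EXAMPLE_DRIVE_3",
    "/dev/disk/by-id/ata-EXAMPLE_DRIVE_4",
]

def create_pool(name: str, layout: str, disks: list[str]) -> None:
    """Create a ZFS pool (layout e.g. 'raidz1' or 'raidz2') with 4K-sector alignment."""
    cmd = ["zpool", "create", "-o", "ashift=12", name, layout, *disks]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    create_pool("tank", "raidz1", DISKS)
```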

Chances are if you don’t need it on day 1, you probably won’t need it later.

Do you have pics of the outfit? I’ve got some visuals in mind but they may differ from the actual thing! :joy:

Ayright… so the NT version of the board is peachy then.

So, a question then @risk - what kind of throughput can you get on a Supermicro Slimline SAS x8 (LA) to 8x SATA cable? Assume you did a striped RAID using fast HDDs. Does the speed scale linearly across all 8 drives on the one cable?

You should be able to saturate the disks.

The cable has tons of conductors inside… (Wikipedia says 74 pins). That’s roughly 9 pins per disk, which is potentially a ton of bandwidth over a 30 cm distance.

SATA 3 is only 6 Gbps, and most mechanical drives can barely do 300 MB/s sequential anyway (about half of what the link can actually carry). One lane of PCIe 4.0 is 16 GT/s (~2.7x the SATA3 line rate). If you plan to fill it with SATA SSDs, I’d expect you’d be saturating SATA3 successfully at full bandwidth.
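
Quick sanity check on those numbers, treating them as raw line rates (SATA3's 8b/10b encoding leaves roughly 600 MB/s usable, which is where the "about half" comes from):

```python
# Back-of-the-envelope bandwidth comparison; raw line rates, overhead approximated.
SATA3_GBPS = 6.0          # SATA 3 line rate
PCIE4_LANE_GTS = 16.0     # PCIe 4.0, 16 GT/s per lane
HDD_SEQ_MBPS = 300        # optimistic sequential rate for a fast mechanical drive

sata3_usable_mbps = SATA3_GBPS * 1000 / 8 * 0.8   # 8b/10b encoding -> ~600 MB/s
print(f"PCIe 4.0 lane vs SATA3 line rate: {PCIE4_LANE_GTS / SATA3_GBPS:.1f}x")   # ~2.7x
print(f"fast HDD vs SATA3 ceiling: {HDD_SEQ_MBPS / sata3_usable_mbps:.0%}")      # ~50%
```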

So, looking at the NT board documentation… each bank of 8 SATA ports goes straight into the SoC and can be reconfigured as 2x NVMe PCIe 4.0 x4 devices instead.

I guess if you’re going for mechanical drives, it might be wasteful to use up so many lanes on disks.

Ok - I’ll have a close look at the block diagram. Maybe 8 Optane P5800X drives? :money_mouth_face:

I see what you mean…

You could go with 8 HDDs on SATA 8-15 if you wanted to keep the M.2 slots, and avoid the SATA 0-7 connector that way, as I read it? Or just skip M.2 and go Optane in a PCIe slot? teehee… it is a good idea…

For mechanical drives, 8 disks at 300 MB/s = 2.4 GB/s, which is roughly 1.2 PCIe 4.0 lanes worth of bandwidth (a PCIe 4.0 lane carries about 2 GB/s usable). They’re using 8 lanes on the NT board (or 16 lanes for 16 disks / 4.8 GB/s, roughly 2.4 lanes worth of bandwidth).
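
Same math as a tiny script, assuming ~300 MB/s per drive and ~2 GB/s of usable bandwidth per PCIe 4.0 lane:

```python
# Lane-budget sketch: how many PCIe 4.0 lanes worth of bandwidth N streaming HDDs need.
PCIE4_LANE_GBS = 16.0 * (128 / 130) / 8   # ~1.97 GB/s usable per PCIe 4.0 lane

def lanes_needed(num_disks: int, mb_per_s: float = 300.0) -> float:
    aggregate_gbs = num_disks * mb_per_s / 1000.0
    return aggregate_gbs / PCIE4_LANE_GBS

for n in (8, 16):
    print(f"{n} HDDs -> {n * 0.3:.1f} GB/s ~= {lanes_needed(n):.1f} PCIe 4.0 lanes")
# 8 HDDs -> 2.4 GB/s ~= 1.2 lanes; 16 HDDs -> 4.8 GB/s ~= 2.4 lanes
```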

On the CT board, they’re using 8 lanes for a PCIe 3.0 SAS controller (roughly 64 Gbps of host-side bandwidth), but SAS lets you use expanders, so you can theoretically attach an enormous number of disks to a single port by daisy-chaining them.
In practice, you’d run out of physical space in a box/rack before that.
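
To put the expander point in numbers, here's a rough estimate of when the HBA's host link (not the SAS side) becomes the bottleneck, assuming a PCIe 3.0 x8 uplink and ~300 MB/s per drive; both figures are assumptions, not measurements:

```python
# Rough oversubscription estimate for HDDs hung off expanders behind a PCIe 3.0 x8 HBA.
HBA_HOST_GBS = 8 * 8.0 * (128 / 130) / 8   # PCIe 3.0 x8 -> ~7.9 GB/s to the host
HDD_GBS = 0.3                              # ~300 MB/s per mechanical drive (assumption)

def disks_before_bottleneck(host_gbs: float = HBA_HOST_GBS, disk_gbs: float = HDD_GBS) -> int:
    """How many HDDs can stream sequentially before the HBA's host link saturates."""
    return int(host_gbs // disk_gbs)

print(disks_before_bottleneck())   # ~26 drives at full sequential tilt; beyond that they share
```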

Also, if you’re investing a ton in mechanical drives for a file server, and you care about the aggregate bandwidth of each controller box, you typically start thinking about persistent memory pretty soon: whether you can put NVDIMMs in some RAM slots and configure them as a large write buffer to batch up writes, minimizing HDD seek time per byte written and thus maximizing throughput.
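
Not pmem-specific, but here's a toy sketch of the batching idea. A real setup would put the buffer on an NVDIMM/pmem region so it survives power loss; this version just uses RAM, so it only illustrates the sequential-flush behavior, not the durability:

```python
# Toy write-batching illustration: absorb small writes in a fast buffer, then flush
# them to disk as one large append, so the HDD sees mostly sequential I/O.
import os

class BatchedWriter:
    def __init__(self, path: str, flush_threshold: int = 64 * 1024 * 1024):
        self.path = path
        self.flush_threshold = flush_threshold   # flush in ~64 MiB batches
        self.buffer: list[bytes] = []
        self.buffered = 0

    def write(self, data: bytes) -> None:
        self.buffer.append(data)
        self.buffered += len(data)
        if self.buffered >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        with open(self.path, "ab") as f:   # one big append instead of many small seeks
            f.write(b"".join(self.buffer))
            f.flush()
            os.fsync(f.fileno())
        self.buffer.clear()
        self.buffered = 0
```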

Typically for general-purpose VMs you don’t need that much storage, and you’d want to use NVMe anyway for a performant system… and since mechanical disks are 16, 18, 20 TB now, a couple of them go a long way for your typical cold bulk storage.

You only need SAS and RAID controllers and that kind of thing if you’re running Windows on bare metal, because Windows sucks at software RAID.

If you’re building a ~100,000-disk Ceph cluster (10-20 racks), maybe you’d consider the CT board because the onboard SAS controller might be cheaper to get bundled in, but you’d definitely be adding more of your own controllers to maximize rack density and the ROI on the space and the motherboard/CPU cost per byte worth of storage.
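
The cost-per-byte angle, with completely made-up prices just to show the shape of the comparison (the board premium, HBA price, drive size, and drives-per-node are all assumptions):

```python
# Toy controller-cost-per-TB comparison: bundled onboard SAS vs an add-in HBA.
# Every number here is a placeholder, not a real quote.
DISK_TB = 18
DISKS_PER_NODE = 60          # assumption: one dense JBOD per storage node
CT_BOARD_PREMIUM = 80.0      # assumed price delta of the -CT over the -NT board
ADD_IN_HBA_COST = 250.0      # assumed price of one extra SAS HBA

def controller_usd_per_tb(controller_cost: float) -> float:
    return controller_cost / (DISKS_PER_NODE * DISK_TB)

print(f"bundled controller: ${controller_usd_per_tb(CT_BOARD_PREMIUM):.3f}/TB")
print(f"add-in HBA:         ${controller_usd_per_tb(ADD_IN_HBA_COST):.3f}/TB")
```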
