I am currently building a server for a small business and I have several questions about how HBAs work, specifically the Broadcom 9400-16i.
The server itself will host a couple of Windows Server VMs with some databases.
The drives I am using with the server are the following:
8 * Samsung PM1643a SAS SSDs
4 * 10 TB IronWolf Pro HDDs
2 * Kingston DC600M 480 GB SATA SSDs as boot drives (Proxmox)
So the HBA card connects to a PCIe 3.0 slot wired for x8, which should equate to about 8 GB/s of throughput (roughly 1 GB/s per lane * 8). It has 4 internal SFF-8643 mini-SAS HD ports, each of which can break out to 4 drives. I ordered the card but I still haven't received it yet. Assuming these drives will be set up as ZFS pools, my questions are:
Should I keep each drive type on its own HBA port(s) (ports 1 & 2: the 8 SAS SSDs, port 3: the 4 SATA HDDs, port 4: the 2 SATA SSDs)? If I do, will my total throughput be bottlenecked? Am I correct to assume that I will get 2 GB/s per port, or is the card capable of dynamically allocating PCIe bandwidth to the ports? (Rough per-port math is in the sketch after the layouts below.)
Or should I mix drive types across the ports in order to spread the load of each type of drive? For example:
Port 1: 2 SAS drives and 1 SATA HDD
Port 2: 2 SAS drives and 1 SATA HDD
Port 3: 2 SAS drives, 1 SATA HDD and 1 SATA SSD
Port 4: 2 SAS drives, 1 SATA HDD and 1 SATA SSD
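Here is the rough per-port math for the first (segregated) layout, using sequential numbers I pulled from the spec sheets (treat the per-drive figures as my assumptions, with each PM1643a held to roughly one 12 Gb/s SAS-3 lane):

```python
# Rough per-connector demand for the "segregated" layout.
# Per-drive GB/s figures are assumptions from spec sheets, not measurements.
drive_gb_s = {"sas_ssd": 1.2, "sata_hdd": 0.25, "sata_ssd": 0.52}
layout = {
    "port 1": {"sas_ssd": 4},
    "port 2": {"sas_ssd": 4},
    "port 3": {"sata_hdd": 4},
    "port 4": {"sata_ssd": 2},
}
for port, drives in layout.items():
    demand = sum(drive_gb_s[kind] * count for kind, count in drives.items())
    print(f"{port}: {demand:.2f} GB/s")
# Ports 1 & 2 would each want ~4.8 GB/s, which is where my
# 2 GB/s-per-port worry comes from.
```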
More broadly, if the card is connected to an 8-lane PCIe slot, how the hell can it support up to 1024 drives? If I were to put maximum load on all of my drives, I would need around 12 GB/s (1.2 * 8 + 0.52 * 2 + 0.25 * 4 ≈ 11.6 GB/s), let alone if I were using 1024 drives. Does that mean it's going to be limited to 8 GB/s?
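Here is how I arrived at that estimate (again, per-drive numbers are my own assumptions from the spec sheets):

```python
# Worst-case aggregate demand versus the slot's ~8 GB/s budget.
# PM1643a held to a single 12 Gb/s SAS-3 lane (~1.2 GB/s usable).
sas_ssd  = 8 * 1.2    # Samsung PM1643a
sata_ssd = 2 * 0.52   # Kingston DC600M boot drives
sata_hdd = 4 * 0.25   # IronWolf Pro
total = sas_ssd + sata_ssd + sata_hdd
print(f"{total:.2f} GB/s")  # ~11.6 GB/s against roughly 8 GB/s of x8 gen 3
```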
Finally, not that it matters a lot to me, but the Samsung PM1643a has a rated read speed of 2100 MB/s while it uses a 12 Gb/s SAS interface. Setting aside the HBA's own limits, how is it supposed to reach its rated speed over an interface that isn't capable of that?
This is the first time I am using SAS drives and HBAs; I am only familiar with plain SATA drives.
Link speed is not indicative of actual performance. Link speed is just a limitation you need to watch out for.
A gigatransfer is not always a gigabyte. That's architecture/platform dependent. It's just as hardware-variant as measuring a file size in blocks.
If you are expecting to perform close to the link speed, then you need a faster link.
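As a rough illustration of the gigatransfer-vs-gigabyte point (assuming PCIe gen 3's 128b/130b encoding and ignoring protocol overhead, which shaves off a bit more):

```python
# Usable bandwidth per PCIe lane: transfer rate x encoding efficiency / 8 bits.
# Packet headers and flow control are ignored, so real numbers land lower.
encoding = {"gen1": 8 / 10, "gen2": 8 / 10, "gen3": 128 / 130}
rate_gt_s = {"gen1": 2.5, "gen2": 5.0, "gen3": 8.0}  # gigatransfers per second

for gen, rate in rate_gt_s.items():
    lane_gb_s = rate * encoding[gen] / 8  # one transfer moves one bit per lane
    print(f"{gen}: {lane_gb_s:.3f} GB/s per lane")
# gen3 works out to ~0.985 GB/s per lane, so an x8 slot tops out just under 8 GB/s.
```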
This thread is now about how much headroom is too little.
My 10 Gbps network constantly sits around 7 Gbps. I am considering upgrading to 40 Gbps for that reason. Am I just being paranoid? I like my current RJ45 Cat 8 routing, but it might be smarter for me to switch to fiber and get 100 gig.
Does 100G over RJ45 even exist yet?
First of all, good on you for picking SAS SSDs over NVMe SSDs; the low prices and hype around NVMe lead many people to consider only NVMe, at the cost of manageability, expandability and reliability.
This is how I would configure it; however, modern HBAs should be able to negotiate speed per lane, not per 4-lane "port". The last time I remember entire ports negotiating down to the slowest device was SCSI, but it's possible some of the early SAS cards had similar behavior a couple of decades ago.
The card is capable of dynamically allocating bandwidth to the ports that need it; there is no hard cap of 2 GB/s per 4-lane grouping of storage connectors.
That 1024-drive figure would only be possible using SAS expanders with SAS drives (some SAS expanders support SATA drives, but don't count that as a given).
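To put a purely illustrative number on the oversubscription: the card can address that many devices, but it cannot feed them all at full speed at once.

```python
# Hypothetical worst case: 1024 drives behind expanders, all busy at the same
# time, sharing the x8 PCIe gen 3 uplink. Numbers are illustrative only.
pcie_budget_gb_s = 8 * 0.985   # ~7.9 GB/s usable on an x8 gen 3 slot
drives = 1024
print(f"{pcie_budget_gb_s / drives * 1000:.1f} MB/s per drive")  # ~7.7 MB/s each
```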
That 2100 MB/s speed is the speed of the SAS drive in dual-port mode; basically it takes two SAS-3 lanes and teams them up to reach a hypothetical 24 Gb/s. Most SAS drives have 2 lanes on them, but the most typical configuration is a dual-port failover mode where you have redundant links to the same drive.
Additionally, it's hard to find information on what supports dual-port failover versus actual dual-port aggregation. Best to assume that it doesn't unless the hardware is on the newer side.
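Back-of-the-envelope for that dual-port point (assuming SAS-3's 8b/10b encoding):

```python
# One 12 Gb/s SAS-3 lane carries ~1.2 GB/s of payload after 8b/10b encoding.
lane_gb_s = 12 * (8 / 10) / 8
single_port = lane_gb_s       # failover wiring: only one port active at a time
dual_port = 2 * lane_gb_s     # both ports aggregated
print(f"single port ~{single_port:.2f} GB/s, dual port ~{dual_port:.2f} GB/s")
# The PM1643a's 2100 MB/s rating only fits inside the ~2.4 GB/s dual-port case.
```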