Got a great deal on them + a SAS3-213A backplane, and while that's a SAS3 backplane I think SATA SSDs would be fine even at SAS2 speeds, right?
Looking at the LSI 9300-16i, but people have said it gets pretty toasty as it's just an 8i + an expander,
so I was now also considering the 9305-16i.
Currently using a Skylake system - a 6700 on an ASUS Z170-WS - so it has plenty of PCIe lanes, but all at Gen 3 speeds due to its PLX chip:
“4 x PCI Express 3.0/2.0 x16 slots (single at x16 , dual at x16/x16 mode, triple at x16/x8/x8 mode or quad x8/x8/x8/x8 mode)
1 x PCI Express 3.0/2.0 x4 slot (max. at x4 mode, compatible with PCIe x1, x2 and x4 devices)”
Current layout looks to be:
8x - GPU for HW transcoding
8x - PCIe SSD 1
8x - HBA (would 8 lanes of PCIe 3.0 be okay for the HBA? quick math at the end of this post)
8x - PCIe SSD 2
There's also the x4 slot for either ASUS's Thunderbolt add-in card or possibly a higher-speed NIC.
I could also move to AM4, since I have a 3600 lying around from a previous build, if, say, moving to a Gen 4 HBA would work better, and then use M.2-to-PCIe adapters for the PCIe SSDs?
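Quick back-of-the-envelope math I tried for the HBA slot - assuming all 16 ports end up populated with SATA SSDs doing ~550 MB/s each, which is probably the worst case:

```python
# Rough PCIe vs SATA bandwidth sanity check (theoretical maxima;
# real-world throughput lands a bit lower).
PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969}   # GB/s per lane after 128b/130b encoding
SATA3_GBPS = 0.55                            # ~550 MB/s per saturated SATA SSD
DRIVES = 16                                  # assuming all 16 HBA ports populated

for gen, lanes in [(3, 8), (3, 16), (4, 4), (4, 8)]:
    slot = PCIE_GBPS_PER_LANE[gen] * lanes
    print(f"PCIe {gen}.0 x{lanes}: ~{slot:.1f} GB/s "
          f"vs {DRIVES} SATA SSDs flat out: ~{DRIVES * SATA3_GBPS:.1f} GB/s")
```

So Gen 3 x8 (~7.9 GB/s) only becomes a ceiling if every drive streams sequentially at full tilt at the same time, and SAS2's 6 Gb/s per link is the same line rate as SATA III anyway, so individual SATA SSDs wouldn't be bottlenecked either.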
If you are worried about the HBA’s temperature, you can run a fan pointed directly at it.
As for PCIe lanes… I'm not sure you actually get the number of lanes you're expecting from that CPU.
The mobo has lots of slots, and you can set something like x16/x8/x8 in the BIOS… but if the CPU only has 16 lanes active, then real-world usage would be x8/x4/x4…
But that's still probably fine for most uses and would still let you do a bunch of stuff.
I wouldn't sweat it; just get that card if it fits the budget.
4 x PCI Express 3.0/2.0 x16 slots (single at x16 , dual at x16/x16 mode, triple at x16/x8/x8 mode or quad x8/x8/x8/x8 mode)
1 x PCI Express 3.0/2.0 x4 slot (max. at x4 mode, compatible with PCIe x1, x2 and x4 devices)
So it would go to x8/x8/x8/x8 - that wouldn't affect the PCIe SSDs, but would PCIe 3.0 x8 affect the HBA significantly? I think the GPU, given that it's just for transcoding, should be okay too?
Also, for AM4 I believe the closest is again ASUS's WS line, the X570-ACE, giving:
x8/x8/x8 across the three PCIe x16 slots (at Gen 4 speeds)
and then x4 and x2 for the M.2 slots
Do you have workloads that’d notice the difference between ~7.5 and ~7.7 GB/s? If not, 3x8 is unlikely to be a meaningful constraint.
4x20 from the CPU, yes. Taichis and a few others switch PEG x8+x8, leaving CPU 4x4 for PCIe SSD 1 and chipset lanes for PCIe SSD 2.
Unlike ASMedia’s Promontories, which seem to have only x4 PHYs, X570’s reuse of Matisse IO die components appears to give it a downstream x16 PHY. The other x16 PHY presumably provides the x4 uplink and 4x SATA but I’m not sure about the other eight lanes. Perhaps AMD dropped them in the TSMC N12 to GlobalFoundries 14 nm translation.
Not quite. X570 is a 4x4 uplink, so no way the chipset slot can deliver 4x8 bandwidth. 3x8 is in principle possible if X570 supports speed translation but, unless you can find tests (I pulled a zero) showing that actually happens, I wouldn’t count on more than 3x4.
Asus doesn’t seem to confirm in the manual but, based on lane availability, I’d expect the U.2 to be chipset x4.
Yeah, be careful with chipset and high-bandwidth devices. At least if you value latency when using multiple devices at the same time. I get horrible network latency with my on-board 10GbE NIC if M.2 is stealing all the bandwidth. NIC + SATA + USB + crap together is all fine, but NVMe bandwidth is more than enough to choke that chipset ↔ CPU uplink.
NVMe hooked up via chipset (with Core and Ryzen alike) is just bad design. Avoid if possible.
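If anyone wants to see it for themselves, here's a rough Python sketch of the kind of test I mean - the device path and ping target are placeholders (point them at a chipset-attached NVMe and a host sitting behind the chipset-attached NIC), and it uses plain buffered reads, so treat the numbers as an illustration rather than a benchmark:

```python
# Crude "does NVMe load wreck my NIC latency" check (Linux, run as root).
import os
import re
import subprocess
import threading
import time

DEVICE = "/dev/nvme1n1"     # placeholder: a chipset-attached NVMe
PING_HOST = "192.168.1.1"   # placeholder: a host behind the chipset-attached NIC
CHUNK = 8 * 1024 * 1024     # 8 MiB sequential reads

stop = threading.Event()

def hammer_reads() -> None:
    """Stream large sequential reads off the NVMe to load the chipset uplink."""
    fd = os.open(DEVICE, os.O_RDONLY)
    offset, done = 0, 0
    start = time.monotonic()
    try:
        while not stop.is_set():
            data = os.pread(fd, CHUNK, offset)
            if len(data) < CHUNK:
                offset = 0          # wrap around at the end of the device
                continue
            offset += CHUNK
            done += CHUNK
    finally:
        os.close(fd)
        print(f"background reads: ~{done / (time.monotonic() - start) / 1e9:.2f} GB/s")

def ping_rtt(count: int = 20) -> str:
    """Return ping's min/avg/max/mdev RTT summary line."""
    out = subprocess.run(["ping", "-c", str(count), "-i", "0.2", PING_HOST],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"rtt .*", out)
    return match.group(0) if match else out

print("idle:  ", ping_rtt())
worker = threading.Thread(target=hammer_reads)
worker.start()
time.sleep(2)                       # let the reads ramp up
print("loaded:", ping_rtt())
stop.set()
worker.join()
```

ping is crude, but comparing the idle and loaded lines is enough to show the uplink contention; fio plus a proper latency histogram would be the nicer version.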
Here is a video review of power consumption across all the 8-lane SAS HBA generations.
It also points out the bandwidth and IOPS performance achieved with these configurations.
Ultimately, you will have to make a compromise between performance and cost - both capex (card cost) and opex (power consumption, but that’s a minor consideration based on the review).
Yeah, the implicit reliance on single-drive workloads' inability to saturate the uplink isn't great. But the point of a chipset is to fan out more IO than the uplink can handle, and plenty of workloads can't saturate a SATA III SSD or even a hard drive, so… ¯\_(ツ)_/¯
Depending on how the chipset-attached devices will be used, DMI 4x8 is worth thinking about. But Intel Z-series chipsets lack an x8 downstream PHY, so I'm unsure it'd be helpful here.
I agree. On my server, I noticed weird behavior “sometimes”… it was hard to track down because it was very random and not reproducible. Turned out my aggressive ZFS scrub policy made the NVMe read at max bandwidth, which killed latency for everything else on the chipset. SATA with HDDs wasn't noticeable, 'cause slow-ass HDDs anyway, but latency on my X550 10Gbit went to hell and I could actually feel the drop in performance for my iSCSI LUNs, enough to start digging.
For normal desktop everyday usage, this is negligible in actual practice. But we L1T Nerds tend to have some more unusual workloads where these things can matter. And we often like to use desktop platforms instead of server boards for cost reasons, so people squeeze everything out of the lanes they've got.
I sort of view chipset M.2s being 4x2 or 3x4, along with BIOS ability to set 4x4s to 3x4s, as features for this reason. Inelegant and inefficient throttles though.
Something I've noticed with Promontory 21 is chipset 4x4s top out at ~6.5 GB/s, with the same drives benching ~7.2 GB/s in CPU M.2s. Not sure whether that's some of the uplink being reserved to mitigate the problems you've mentioned, but I've thought of trying to rig up some tests.
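Something like this QD1 sketch is what I had in mind - O_DIRECT sequential reads straight off the raw devices, so fio at higher queue depth would post bigger absolute numbers, but the CPU-slot vs chipset-slot delta should still show up. Device names are placeholders; Linux, run as root, and it assumes both devices are at least 32 GiB:

```python
# QD1 O_DIRECT sequential read bench: CPU-attached M.2 vs chipset-attached M.2.
import mmap
import os
import time

CHUNK = 8 * 1024 * 1024              # 8 MiB per read (multiple of the LBA size)
TOTAL = 32 * 1024 * 1024 * 1024      # read the first 32 GiB of each device

def bench(device: str) -> float:
    buf = mmap.mmap(-1, CHUNK)       # anonymous mapping = page-aligned, as O_DIRECT needs
    fd = os.open(device, os.O_RDONLY | os.O_DIRECT)
    try:
        start = time.monotonic()
        offset = 0
        while offset < TOTAL:
            os.preadv(fd, [buf], offset)   # direct read, bypasses the page cache
            offset += CHUNK
        return TOTAL / (time.monotonic() - start) / 1e9
    finally:
        os.close(fd)
        buf.close()

for dev in ("/dev/nvme0n1", "/dev/nvme2n1"):   # placeholders: CPU M.2 slot vs chipset M.2 slot
    print(f"{dev}: ~{bench(dev):.2f} GB/s sequential read at QD1")
```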
That's my approach as well. And we've seen more and more products with either 3x4 or 4x2. It's nice to have 7 GB/s per drive, but do I really need it? Is 2 or 4 GB/s per drive plenty for my needs? I can say yes to that… I'm more the IOPS and latency guy. And the non-tech-savvy people around me couldn't tell the difference anyway.
I do still think that advertising 5x Gen 4 and 1x Gen 5 NVMe, each with 4 lanes, on a board is a bullshit marketing argument that gets boards bought for the wrong reasons.
Back to topic…yeah, 9300 series is a staple. Get some “ZFS TRUENAS UNLOCKED” card of your choosing…SATA for days. And they’re all old Gen3 anyway (unless you buy 9500/9600 expensive Tri-mode stuff). Question is…how important are sequential read/write speeds? And if that’s mainly a server…you’re probably bottlenecked by network or software stack anyway.
How much sequential read/write do you actually need, or are IOPS much more important?
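For a sense of where the network ceiling sits relative to SATA SSDs (theoretical line rates, before protocol overhead, and again assuming ~550 MB/s per drive):

```python
# Rough network-vs-SATA sanity check: how many saturated SATA SSDs
# it takes to fill a given NIC (line rates, ignoring protocol overhead).
SATA3_GBPS = 0.55   # ~550 MB/s per saturated SATA SSD
NICS = {"1GbE": 0.125, "2.5GbE": 0.3125, "10GbE": 1.25, "25GbE": 3.125}

for name, gbps in NICS.items():
    print(f"{name}: ~{gbps:.2f} GB/s ≈ {gbps / SATA3_GBPS:.1f} saturated SATA SSDs")
```

So over 10GbE, a bit over two SATA SSDs' worth of sequential throughput already fills the pipe.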
Another option would be a couple of ASM1166s on an X570 board with eight SATA ports. No x8 HBA or x8+x8 switching needed and, with a dGPU that has an M.2 socket, possibly three CPU-attached NVMes with minimal mechanical fuss.
Doesn’t look like Maxsun ever made a product from their B580 prototype with two M.2s, unfortunately.