I’ve got a nice Dell Precision 3660 workstation I’d like to use as a home server. The only major downside is that its two PCIe 4.0 x4 slots are closed-ended, so I can’t put x8 or x16 cards in them.
I wanted to add an LSI 9500-8i HBA for a SAS enclosure in the 5.25" bay, but that’s a PCIe 4.0 x8 card. No dice, as I don’t have the equipment or skill to modify the motherboard.
Are there any PCIe 3.0 or PCIe 4.0 x4 HBAs?
The last time I checked into this, affordable PCIe 4.0 HBAs weren’t even on the radar for home built stuff, so I’m hoping maybe the situation has changed.
(Yes, the ideal solution would be to mod the board to have open slots. I’m not capable of that and don’t know anyone in the Dallas, TX area who offers a board customization service.)
EDIT: Would it be easier to get a PCIe 4.0 x4 OCuLink card and wire up an external (or internal) enclosure? I just remembered that might be an option.
I’ve just carefully cut the end open with a $4 pair of cutters like these:
but TheArtOfServer used a knife.
But with a knife that may slip, I would put firm cardboard down around the socket, in case the blade skates off and stabs the motherboard.
Even a cereal packet folded in half, maybe in half again, just as a safety net if you use the knife method.
But the back end of the socket has no electronics; it’s just the pins running along the insides.
If you look real close, the end, before it terminates, has a hollow square, so it kind of has built-in guides: slip an angled cutter in one side, angled towards the back, then the other side, then trim the stalk that’s left over.
Thanks. I’ll take a look at that. Honestly, though, I have arthritis and my hands shake so bad that I’m incredibly wary of making the attempt.
But it sounds eminently doable with good light and a steady hand. All the YouTube videos I looked at used a Dremel, which was not encouraging.
I also appreciate the info about the slot itself. Honestly, modding the slot sounds superior to any other way of doing this that I’ve thought of (a PCIe OCuLink card to an internal 4x NVMe/U.2 enclosure would work, but … money.)
I honestly hadn’t considered risers that change the physical slot, but only because most of them put the slot itself off-center from the exterior slot on the case. That said, I know there are quality adapters that won’t burst into flames.
But this would be my least favored option just because it’s awkward and also so easy to accidentally buy junk.
I would rather use a soldering iron instead of a Dremel if I wanted to do it slowly. There really is not much material to remove,
with plenty of padding around so no globs of plastic go the wrong way.
But a Dremel spins fast and may slip.
A soldering iron heats and melts unintended areas.
Cutters, once eventually worked into position, hardly need any pressure.
But then again, if I had messed up any boards, I would have been out less than $300 each, and that’s not the end of the world, considering a 3-minute mod vs. days finding a company who won’t support it.
Obviously your mileage may vary, and others are welcome to pipe up and say “Nay Thar Laddy, Nevar Cut, Always Adapter” or some such…
Well, I don’t own a dremel, a soldering iron, or the knives he was using.
The idea of spending a bunch of money to buy tools I don’t know how to use to attempt a modification that might brick a board I can’t afford to replace is … not economically appealing.
EDIT: @Trooper_ish, how would you attempt the soldering iron way if you did it? Specifically, how would you keep the melted plastic from going everywhere? What temperature on the iron?
(Toxic plastic fumes also seem like a potential issue, but I would imagine this takes less than a couple of minutes if you do it right. There’s just not that much plastic there…)
The ASM1166 is 3x2 (PCIe 3.0 x2 electrical), so it’s readily available on x4 cards. Inexpensive too, but it won’t help any with SAS. Otherwise I think you might have to go all the way back to an LSI 9211-4i or similar.
I don’t know of SAS options, though that’s not saying a whole lot. An OCuLink-to-NVMe enclosure is fairly straightforward but expensive.
While there are plenty of junk flexible risers out there, a rigid one like this ought to work just fine. I have yet to come across one that didn’t.
If you are concerned about it sitting too high, you may be able to find one that is exactly the right height such that a half-height PCIe HBA will sit nicely on top of it and screw into the port.
Though, in the past, for cards that don’t need rear access (no ports on the bracket), I have used risers like this in combination with motherboard standoffs and a couple of small washers on top of the PCI slot where the screw goes in (to get the spacing right) to firmly screw full-height cards into place, provided the case has space to do this.
I actually have one of these sitting around unused.
Assuming I went with enterprise SATA SSDs, would the LSI 9211-4i still be a better option in spite of its age? It has a max bandwidth of 2 GB/s, which seems reasonable for 4 SATA SSDs at ~0.5 GB/s each.
The 10GTek SATA controller runs at full PCIe 3.0 x4 (4 GB/s), but would that really make a difference for 4x SATA SSD disks? I’m seriously asking: 4x SATA SSDs seems right at the edge of what the LSI 9211-4i can do, not counting overhead.
On the other hand, it would definitely make a difference if I got one of the 6x SATA 5.25" enclosure bays. I’d be limited to the thinner SATA enterprise SSDs, but those are rock solid. SAS is not a requirement for home server stuff.
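Quick back-of-the-envelope for both the 4-drive and 6-drive cases (the ~0.5 GB/s per drive and the raw link figures are my assumptions, not measurements):

```python
# Aggregate-demand check: 4 or 6 SATA SSDs behind one controller.
# ~0.5 GB/s per drive (sequential) and the raw link figures are assumed.
drive_gbs = 0.5                                # GB/s per SATA SSD
links = {"LSI 9211-4i (PCIe 2.0 x4)": 2.0,     # raw GB/s, pre-overhead
         "PCIe 3.0 x4 controller": 4.0}

for drives in (4, 6):
    demand = drives * drive_gbs
    for name, raw in links.items():
        print(f"{drives} drives ({demand:.1f} GB/s) on {name}: "
              f"{raw - demand:+.1f} GB/s headroom before overhead")
```

Four drives land exactly at the 9211-4i’s raw limit with zero headroom, and six drives oversubscribe it by a full 1 GB/s, which is what makes me hesitate.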
Is this perhaps an erroneous conflation of an ASM1166’s 3x2 link (~1.7 GB/s) with an x4 card’s mechanical form factor? 10GTek doesn’t make controllers. And their SATA-only stuff is apparently so important to them that it’s not even on their website.
Practical overhead means 2x4 and 3x4 links are going to be more like 1.7 and 3.5 GB/s, not 2 and 4 GB/s. Beyond that you have to look at your workloads’ abilities, the drive and RAID topologies of interest, and the amount of data being moved to assess impact.
In general, there’s not much besides scrubs and multithreaded copies to NVMe that’ll saturate 1.7 GB/s for long enough to matter. But a lot of us here are storage enthusiasts who focus on details that are typically well into diminishing returns.
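If you want to see where those ballparks come from, here’s a rough sketch. The line-encoding part is exact; the extra protocol overhead (TLP headers, flow control, ACKs) is an assumed 10–15%, since the real figure varies with payload size:

```python
# Effective PCIe throughput: exact line-encoding math plus an assumed
# 10-15% protocol overhead (TLP headers, flow control, ACKs).

ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130}  # line-code efficiency
GTS = {"2.0": 5.0, "3.0": 8.0}                # GT/s per lane

def raw_gbs(gen: str, lanes: int) -> float:
    """GB/s after line encoding, before protocol overhead."""
    return GTS[gen] * ENCODING[gen] * lanes / 8

for gen, lanes in [("2.0", 4), ("3.0", 2), ("3.0", 4)]:
    raw = raw_gbs(gen, lanes)
    print(f"PCIe {gen} x{lanes}: {raw:.2f} GB/s raw, "
          f"~{raw * 0.85:.1f}-{raw * 0.90:.1f} GB/s usable")
```

A 2x4 and a 3x2 link both come out around 1.7–1.8 GB/s usable, and a 3x4 link around 3.3–3.5 GB/s.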
I think it mostly comes down to this: the 9211 saves buying hardware and has SAS2, while the 1166 saves power and gives you a couple more ports.
If you’re concerned about simultaneous full rate, a default config would be an ASM1166 in each x4 slot with three drives on each. Startup may be a bit slow, though: a 1166 presents 32 logical ports even when no port multipliers are attached.
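Quick check that three drives per card fits within a 3x2 link (the per-drive figure is an assumed ~0.55 GB/s, about what SATA 6Gb/s tops out at):

```python
# Do 3 SATA SSDs fit one ASM1166's PCIe 3.0 x2 link? Both figures
# are ballpark assumptions: ~0.55 GB/s per drive, ~1.7 GB/s usable.
per_drive, usable, drives = 0.55, 1.7, 3
demand = per_drive * drives
print(f"{demand:.2f} GB/s demand vs ~{usable} GB/s usable -> "
      f"{'fits' if demand <= usable else 'oversubscribed'}")
```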
Ah. I missed that it’s a PCIe 3.0 x2 electrical card. Thanks for pointing that out.
Yeah, I think I’ll stick with the 9211, especially since I’ve already got it and the cables I need.
I’m not concerned with trying to get full speed from all 4 drives at once. Realistically, as you noted, that’s rarely if ever a real bottleneck in most storage workloads.