Which would bottleneck more?

I’ve run out of large slots on a new build, so I’m forced to stick an x1-to-x16 riser cable in.

Plugged into it will be either a low-end GPU for the host system to use (with the main GPU hopefully passed through), or a SAS/SATA HBA running four mechanical hard drives.

Another distant possibility would be Looking Glass? I’m not good at compiling things or troubleshooting “code” … there was a Fedora how-to posted years ago on this forum, but the poster was blasted for being off topic and insecure, and hasn’t followed up, it seems.

An x1 would work just fine for the GPU as a display out, if you have the PCIe lanes to run it (assuming the other slots on your board are already full and that’s why you’re doing this).
The HBA with the HDDs should work too, but with the GPU only driving a display there’s no real possibility of throttling or bottlenecking.

A full hardware list and a description of your goals might help. Maybe you’re missing an option.

The board has three physical x16 slots, wired as x8/x8 to the CPU and x4 through the chipset. There’s also an M.2 slot wired as x4 to the CPU, which is occupied by a fast NVMe drive.

The board has two physical x1 slots wired through the chipset.

The HBA is some generic board I harvested from an old server, the 10G card is an Intel X520, and the intended pass-through video card is a GTX 960. The intended host card is a very old Radeon; I don’t know the model.

The host will run some VMs and be a storage server, running ZFS across six rust drives via the HBA. The machine will also be a CAD workstation for electronics engineering and light software development (through the virtual machines). It will have a SATA SSD for the Windows VM; the Linux VM will probably run on the NVMe along with roughly 40 GB for the host; a tiny SATA SSD for the ZIL; and one open bay for future use. The SSDs are all connected directly to the CPU, along with two Blu-ray drives.

So PCIe 3.0 x1 is good for 985 MB/s. If you use it for an HBA, it’s totally fine for four mechanical drives in any config. Should be fine for a GPU too. Personally I would pick the HBA to put in the x1, since with mechanical drives it can’t use as much bandwidth anyway.

really just flip a coin.

Sorry, I believe they’re all PCIe 2.0 off the chipset (AMD X470 “Promontory”), so that’s what, 450 MB/s-ish?

Math-wise, the HBA looks like it’d be bottlenecked worse; each drive is gonna push 150 MB/s sustained, and more than that on a cache hit.
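
Putting rough numbers on it (ballpark only; the ~500 MB/s per PCIe 2.0 lane after 8b/10b encoding and the 150 MB/s sustained per drive are just the figures quoted above, not measurements):

```python
# Rough sanity check for the x1 slot; ballpark figures, not measurements.

PCIE_2_X1 = 500    # MB/s per PCIe 2.0 lane after 8b/10b encoding, before protocol overhead
PCIE_3_X1 = 985    # MB/s per PCIe 3.0 lane (128b/130b encoding)

DRIVES = 4
PER_DRIVE = 150    # MB/s sustained per mechanical drive

hba_demand = DRIVES * PER_DRIVE   # ~600 MB/s if all four drives stream at once

for name, link in (("PCIe 2.0 x1", PCIE_2_X1), ("PCIe 3.0 x1", PCIE_3_X1)):
    capped = min(link, hba_demand)
    print(f"{name}: link ~{link} MB/s, HBA wants ~{hba_demand} MB/s "
          f"-> ~{capped / DRIVES:.0f} MB/s per drive")
```

So on a 2.0 x1 the HBA only gets squeezed when all four drives are doing sequential work at once, while a display-only GPU barely touches the link either way.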

The GPU is going to run a basic window manager like Xfce.

Can you run the X520 in the x4 slot? Because that one is PCIe 2.0 anyway.

Wait, I got it wrong, forget it. xD

Yes, that’s what I’ll try first…

Nvidia GPU in the first 3.0 slot, HBA in the second 3.0 slot, NIC in the 2.0 x4 slot, and the second GPU in an x1 slot via the x1-to-x16 riser cable.


That makes the most sense to me as well. :+1:

You should even be able to run a dual 10Gbit NIC in that slot if my math is right.
Just an idea. :stuck_out_tongue:


This is what you should run with, yes :slight_smile:

Your math is right; it’s almost exactly matching the bandwidth PCIe 2.0 x4 provides, though it’ll probably be a tiny bit less due to overhead and anything else the chipset may be servicing at the time.
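
For anyone checking the numbers, a quick sketch (ballpark; it ignores Ethernet framing and whatever else shares the chipset):

```python
# Dual-port 10GbE in a PCIe 2.0 x4 slot, per direction; ballpark only.

raw_gt_per_lane = 5.0                  # PCIe 2.0 signalling rate, GT/s per lane
lanes = 4
raw_gbit = raw_gt_per_lane * lanes     # 20 Gbit/s raw, same as 2 x 10GbE line rate
effective_gbit = raw_gbit * 8 / 10     # ~16 Gbit/s of payload after 8b/10b encoding

nic_gbit = 2 * 10                      # dual 10GbE at line rate

print(f"slot: {raw_gbit:.0f} Gbit/s raw / {effective_gbit:.0f} Gbit/s effective, "
      f"NIC wants up to {nic_gbit} Gbit/s")
```

One port always runs at full rate; only both ports saturated in the same direction at the same time would bump into the slot.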

Thanks for the help all!

This project just got delayed; it turns out the chassis I was building into, an Enermax Chakra, takes drive rails, and I can’t find enough of them to install the drives.

I have an older Antec chassis that will work; I’ll need to gut it and move everything over.

If you are not using Looking Glass and the host is just a server, then the GPU is the obvious choice for the x1 slot.
Since it is shared through the chipset anyway, it should not matter, as the bottleneck will be there and not at the slot.
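
Rough tally of what ends up behind the chipset with that layout, assuming the X470’s uplink to the CPU is PCIe 3.0 x4 (~3.9 GB/s); the per-device numbers are ballpark guesses:

```python
# What shares the X470's uplink in the planned layout; ballpark only.
# Assumes a PCIe 3.0 x4 chipset-to-CPU link (~3940 MB/s).

UPLINK = 4 * 985   # MB/s

behind_chipset = {
    "NIC in the 2.0 x4 slot":       2000,  # capped by its slot, not the wire
    "host GPU, display only, x1":    100,  # rough guess; desktop traffic is tiny
}

total = sum(behind_chipset.values())
print(f"worst case behind the chipset: ~{total} MB/s of a ~{UPLINK} MB/s uplink")
```

Plenty of headroom left, so a display-only card in the x1 isn’t going to starve anything.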