I’ve run out of large slots on a new build, so I’m forced to stick an x1-to-x16 riser cable in.
Plugged into it will be either a low-end GPU for the host system to use (main GPU hopefully passed through), or a SAS/SATA HBA running four mechanical hard drives.
Another distant possibility would be Looking Glass? I’m not good at compiling things or troubleshooting “code” … there was a Fedora how-to posted years ago on this forum, but the poster was blasted for being off topic and insecure, and hasn’t followed up, it seems.
x1 would work just fine for the GPU as a display out, if you have the PCIe lanes to run it (assuming the other slots on your board are all full already and that's why you are doing this).
The HBA with the HDDs should also work, and unlike the GPU there's little to no risk of throttling or bottlenecking.
Board has three physical x16 slots, wired as x8/x8 to the CPU and x4 through the chipset. There’s also an M.2 slot wired as x4 to the CPU, which is occupied by a fast NVMe drive.
Board has two physical x1 slots wired through the chipset.
The HBA is some generic board I harvested from an old server, the 10G card is an Intel X520, the intended pass-through video card is a GTX 960, and the intended host card is a very old Radeon, don't know the model.
The host will run some VMs and act as a storage server, running ZFS across six rust drives via the HBA. The machine will also be a CAD workstation for electronics engineering and light software development (through the virtual machines). It will have a SATA SSD for the Windows VM; the Linux VM will probably run on the NVMe drive along with about 40 GB for the host. There's also a tiny SATA SSD for the ZIL and one open bay for future use. The SSDs are all connected directly to the CPU, along with two Blu-ray drives.
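Roughly what I have in mind for the pool, hypothetically; raidz2 is just one option and the device names are made up, not my actual disks:

```shell
# Sketch only - raidz2 and the by-id names are placeholders,
# substitute your own /dev/disk/by-id paths and preferred layout.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
  /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
  /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6

# Attach the tiny SATA SSD as a separate log (SLOG) device for the ZIL.
zpool add tank log /dev/disk/by-id/ata-SMALL_SSD
```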
So PCIe 3.0 x1 is good for 985 MB/s. If you use it for an HBA, it's totally fine for four mechanical drives in any config. It should be fine for a GPU too. Personally I would put the HBA in the x1, since with mechanical drives it can't use that much bandwidth anyway.
Your math is right; it almost exactly matches the bandwidth PCIe 2.0 x2 provides, though it'll probably be a touch less in practice due to protocol overhead and anything else the chipset may be servicing at the time.
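To spell out the arithmetic (this is just line rate times encoding efficiency, ignoring packet/protocol overhead):

```python
# Back-of-envelope PCIe effective bandwidth: line rate x lanes x
# encoding efficiency, ignoring protocol overhead.
def pcie_bandwidth_mb_s(gt_per_s, lanes, enc_eff):
    """One-direction effective bandwidth in MB/s."""
    # 1 GT/s = 1e9 transfers/s, 1 bit per transfer per lane; /8 for bytes.
    return gt_per_s * 1e9 * lanes * enc_eff / 8 / 1e6

gen3_x1 = pcie_bandwidth_mb_s(8.0, 1, 128 / 130)  # PCIe 3.0, 128b/130b
gen2_x2 = pcie_bandwidth_mb_s(5.0, 2, 8 / 10)     # PCIe 2.0, 8b/10b

print(round(gen3_x1))  # 985
print(round(gen2_x2))  # 1000
```

Four mechanical drives at ~200-250 MB/s sequential each sit right around that ceiling at best, and well under it for any real mixed workload.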
If you are not using Looking Glass and the host is a server, then the GPU is the obvious choice for the x1 slot.
Since it is shared through the chipset anyway, it should not matter; the bottleneck will be there rather than at the slot.