Ran out of PCIe ports, X399, PCI passthrough

Hello all
My aim is to avoid using the host PC directly for things, and instead do my browsing and gaming in VMs.
To that end I'm running a Windows VM, with the plan to add a Linux VM with PCI passthrough alongside my current ~10 other VMs. Even virtualized, my Windows PC is faster than my old 2x X5690 Xeons.

I have the following cards installed in a Gigabyte X399 Designare EX:
- VM1 GPU
- VM1 USB
- Host GPU (set as primary GPU in BIOS)
- VM2 GPU
- Host storage HBA

I'm trying to pass through a second USB card using a Delock 62788, which breaks out an NVMe slot via an SFF-8643 connector and cable.
However, when the card is present in the 62788 there is no display output. My fans behave normally, i.e. they stop, with only the pump running at idle.
I was not smart enough to think of looking at the diagnostic LED, but I did find that my auto-starting VMs had unclean shutdowns logged after I held in the power switch…
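For anyone retracing this, the unclean-shutdown check is roughly the following, a sketch assuming a libvirt-based host (service and tool names will differ on other hypervisors):

```bash
# Guest states after the forced power-off (assuming libvirt/virsh)
virsh list --all

# Unclean-shutdown entries in the host journal around the hang;
# the unit may be libvirtd or virtqemud depending on the distro
journalctl -u libvirtd --since yesterday | grep -i shut
```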

One idea I have for not browsing the web on the host is to run Firefox over SSH from another VM, but I'm not sure whether that is as safe and secure as I hope. It does play back YouTube though, which is a plus.
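For anyone curious, the SSH idea is plain X11 forwarding. A minimal sketch, assuming the browser VM is reachable as `browservm` (a made-up hostname) and its sshd has `X11Forwarding yes`:

```bash
# Untrusted forwarding (-X): the remote client talks to the local X
# server through the SECURITY extension, which limits what it can do
ssh -X user@browservm firefox

# Trusted forwarding (-Y): faster and more compatible, but the remote
# Firefox gets full access to the local X server, keystrokes included
ssh -Y user@browservm firefox
```

That last point is the catch for the "safe and secure" question: over trusted forwarding a compromised browser VM can snoop the local X session, and note that some distros (e.g. Debian) silently promote `-X` to trusted via `ForwardX11Trusted yes` in ssh_config.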

So I'm after ideas. Could I try splitting the first of the x8 slots into two x4 slots and get two different IOMMU groups?
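For reference, a quick way to see how the groups actually split after any BIOS change is the usual sysfs walk, run on the host (nothing here is board-specific):

```bash
#!/bin/bash
# Print each IOMMU group and the devices in it, to tell whether
# the bifurcated halves would end up in separate groups
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```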

Do you really need all those VMs on the one box with their own GPUs? Why not just build another box if you have that much need? You could also consider a GPU that allows multiple hosts to share it.


Did you run out of PCIe lanes or something? Maybe the NVMe slot converter is broken? Other than that I'm not really sure, sorry, good luck though. :frowning:

Need? Nope. It's only two VMs with GPUs: one shall be Linux and one is Windows.

I live in a hyper-converged world, so why not.
I've a few test VMs, my mail and name server, a Security Onion, the terminal server I do tax and accounting on, file servers, SQL servers, and some DNS resolvers that run DNS sinkholes.
I don't want to do something on my host that breaks these.

Cost for the performance is one reason not to get a pro GPU to share.

I've used all of the slots for actual cards, although one card only uses x4 of an x8 slot. I have three NVMe slots free, though I want to use two of those for storage.

I guess I could have a hardware fault somewhere, but I rather suspect the BIOS is just freaked out by what was connected.

Thanks

I found on these forums that the x8 slots don't support bifurcation but the x16 slots do: Gigabyte x399 pci-e bifurcation support?
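A related sanity check: lspci will show what a slot actually negotiated. The `41:00.0` address below is made up, substitute your card's address from `lspci`:

```bash
# LnkCap = the maximum link the device supports,
# LnkSta = the width/speed the slot actually trained at (e.g. x4 vs x8)
sudo lspci -vv -s 41:00.0 | grep -E 'LnkCap:|LnkSta:'
```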

Hi @JEF_UK … do you have the details of your setup posted anywhere? I'm trying to get two GPUs passed through with the same motherboard as you, but the 2nd GPU escapes my grasp. What BIOS version are you on, what distro did you use, etc.? Thanks!