[solved] bind nvme device to vfio-pci

Hello everyone,

The solution was a missing BIOS setting O.o
But apparently you can pass through a normal NVMe drive.
If you bind it to vfio-pci it won't show up in Linux (which may be the desired behavior).

My modprobe.d vfio.conf looks like:

#Radeon RX 7900 XT/7900 XTX
alias pci:v00001002d0000744Csv00001DA2sd0000E471bc03sc00i00 vfio-pci
#Radeon RX 7900 XT/7900 XTX Audio device
alias pci:v00001002d0000AB30sv00001002sd0000AB30bc04sc03i00 vfio-pci
#Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963
alias pci:v0000144Dd0000A804sv0000144Dsd0000A801bc01sc08i02 vfio-pci

#ids=GPU,GPUAudio,SSD,USB
options vfio-pci ids=1002:744c,1002:ab30,144d:a804
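
To check whether the bind actually took, lspci shows the driver in use (a quick sketch using the Samsung IDs from the options line above; swap in your own vendor:device pair):

# show the kernel driver currently bound to the NVMe controller
lspci -nnk -d 144d:a804
# "Kernel driver in use: vfio-pci" = bound for passthrough
# "Kernel driver in use: nvme"     = the host still owns it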

with kind regards

Couldn’t you simply assign it to a VM without vfio and start it up? Just make sure its filesystem isn’t mounted. Most PCIe devices are pretty tolerant if they get reassigned to a VM - with the exception of GPUs.
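For example, on a Proxmox-type setup that's a single qm call (the VM ID and PCI address below are just placeholders):

# hand the whole PCI device to VM 100 - binding happens automatically at VM start
qm set 100 --hostpci0 0000:01:00.0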

I think the device gets snatched by the host before vfio starts up. There are ways to prevent this: you could configure the initramfs / kernel image to load vfio first, or you could blacklist / disable NVMe support - but the latter is only recommended if you don't want to use any NVMe storage on the host.
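A middle ground (just a sketch, not tested on your exact setup) is a softdep in modprobe.d, which makes the nvme driver wait until vfio-pci has been loaded, without blacklisting nvme completely:

# /etc/modprobe.d/vfio.conf
# load vfio-pci before nvme, so it can claim the passthrough SSD first
softdep nvme pre: vfio-pci

If nvme gets loaded from the initramfs, the initramfs has to be regenerated so the vfio modules end up in there too.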

The Arch wiki has a section (3.5) about loading vfio early; it might apply to other systems as well:
https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
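
On Arch that boils down to listing the vfio modules first in the initramfs config (roughly the following; other distros use /etc/initramfs-tools/modules or a dracut config instead):

# /etc/mkinitcpio.conf - vfio modules before any GPU/storage driver,
# then regenerate the initramfs with: mkinitcpio -P
MODULES=(vfio_pci vfio vfio_iommu_type1)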

What system are you using?


Ohh, so the whole vfio assignment is only necessary for GPUs and you don't need it for USB controllers etc.?
That is good to know ^^

Yeah, I don't bother with vfio at all, but I am running a hypervisor. I don't even use vfio for GPUs, as I simply blacklisted all the dGPU drivers. But vfio is the cleaner approach, because it prevents the host from loading any other driver for the device when the card becomes available again (for example after you shut down the VM).
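The blacklist itself is just another modprobe.d file, roughly like this (module names depend on the card: nouveau/nvidia for NVIDIA, amdgpu/radeon for AMD):

# /etc/modprobe.d/blacklist-dgpu.conf
# keep the host from ever loading a driver for the dGPU
blacklist nouveau
blacklist nvidia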

My problem got solved after I noticed I had done many steps before the first one :'D
After I enabled SVM (yes, I already had IOMMU etc. set up xD), my SSD also got bound to vfio.
In another thread someone mentioned it is probably not necessary to have the SSD bound to vfio-pci, but it is still useful so it no longer shows up in the host system :smiley:
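
For anyone else hitting this: once SVM/IOMMU is enabled you can sanity-check the groups with the usual sysfs walk (basically the script from the Arch wiki page linked earlier):

#!/bin/bash
# print every PCI device together with its IOMMU group
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "    "
        lspci -nns "${d##*/}"
    done
done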

Which hypervisor are you using?

Proxmox; it's basically Debian, ZFS, KVM and QEMU with a nice web interface.

Ohh,
I use Proxmox for my homelab with a Home Assistant VM, a TrueNAS VM and some LXCs.

Do you use Proxmox as your desktop?

Yes, I do use Proxmox on the desktop. In the past I had various issues with GPU passthrough and performance on various desktop Linux distros and became fed up with the continued troubleshooting of the host and having to fix features that broke after updates and impacted virtualization.

Proxmox worked for me out of the box. It works like a standard PC: switching it on, it auto-starts the VMs I use; press the power switch and it shuts down. I have a second PC stuffed with enterprise HDDs which I use as ZFS storage. Because electricity isn't cheap where I live, it's great that I don't have to jump through hoops to start or shut down a local machine.
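The autostart part is just the per-VM onboot flag (a sketch; VM ID and boot order are placeholders):

# start VM 100 automatically when the host boots, first in the boot order
qm set 100 --onboot 1 --startup order=1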

That sounds like a nice solution (:
I saw folks on Reddit doing this with Unraid.

Before the AM5 upgrade I thought about merging my home server and my desktop,
so that I'd build one bigger home server with a Linux thin client and a Windows/gaming VM.

But I use two 4K 120 Hz HDR displays and a third Full HD display.
The cable management is a nightmare since one of the 4K displays is my living room TV.

Also, troubleshooting the host when something happens is hard if I only have a smartphone on hand :confused:

That's why I refrained from using just one box for all my applications :confused: But for the future the idea is not completely off the table :smiley:

Also, finding a motherboard with enough PCIe lanes for a 7900 XTX, an LSI controller and a GPU for Linux, as well as a 4-port NIC, is a nightmare :'D

Guess that would mean Epyc, and I don't wanna know what that would cost. I guess it's cheaper to run two systems :stuck_out_tongue:

I don't think you need Threadripper for that. A machine with an iGPU and the proper PCIe configuration and layout would be fine. And all AM5 processors have an iGPU nowadays, at least to light up the screens.

But the DDR5 boards are expensive from both camps. I have no problem "wasting" 140 euros on a wrong purchase, but I'll be crying if I spend 350-500 only to find out that it has borked IOMMU groups.

My problem is the additional USB controller and a planned 10 Gb NIC that I have to put "somewhere", so most of the "only two x16 PCIe slots" consumer boards are not suitable.

I am eyeing the ASUS Pro WS W680-ACE.

I have two machines because most likely I toasted my GPU (3080 Ti) in the Fractal Design R4 last year - it was just not well ventilated, and the aging 750 W power supply was probably the second catalyst, or the main cause of its death, having to supply two GPUs, a bunch of SATA HDDs and SSDs and the USB peripherals at the same time. I am rocking a Seasonic 1000 W PSU and a Torrent case nowadays. The second PC houses my backups and my data graveyard, and I moved my spinning rust there.

I need at least an x4 electrical slot for my NICs, an x8 electrical for my LSI controller and an x16 electrical for my gaming GPU.
I did not find any consumer part with that many PCIe lanes :frowning:

And no mainboard is able to fit all three cards (the GPU is 3.7 slots wide). And even then my Linux host won't have display out. So it's either Threadripper/Epyc with IPMI and more PCIe lanes, or separate systems xD (at least for me).

Regarding the IOMMU groups: I ordered an X670 Aorus Master after my X370 Hero VI had a broken primary GPU switch for its whole lifetime.

IOMMU Groups are ok and hopefully will get better as the platform matures :smiley:

Having spinning drives in a separate box is nice. A friend of mine has that too; he only powers it on every other week to pull backups. Sadly the server is running 24/7 with ~100 W power draw :confused:

But I can't afford an all-SSD NAS solution with HDD cold storage (at least not after this AM5 / RDNA3 upgrade I just made xD)

The MSI X670E ACE is IMO the best AM5 board for VFIO. The issue with this board is that it only has DisplayPort over USB 3.2 Alt Mode, and I don't know if that works with my LG 48" and Linux at 4K 120 Hz.

Yeah, sorry, I have had such a long list of problems with MSI in the past that I would rather stop using PCs than use MSI :'D

Yes, everyone has their own bad experiences; all manufacturers have a bad series.
