Anyone got this working on a Gigabyte AB350-Gaming 3?
B350 boards are generally a bit tricky for this because of PCIe lane layout limitations (the lanes can’t be split). So unless that board can route the four NVMe lanes to a PCIe slot, this is going to be really hard.
I cannot seem to get this working. Please forgive me as I am new to VFIO.
My Windows SSD is on /dev/sdb, with the EFI boot partition on /dev/sdb2. I figure I should point virt-manager at /dev/sdb as a whole. Do I need to give libvirt/QEMU permissions on /dev/sdb?
I am using a Threadripper with an ASUS ROG Zenith motherboard, so I realize I could have a plethora of issues. I saw the Threadripper patch thread and the Zenith-specific thread. I applied the patch (I tried 4.16 but had issues, so I went back to 4.15 on Fedora 27).
I am also using an RX Vega 56 and a Vega 64, which have the same vendor and device IDs. I am using the ‘pass both to vfio-pci, but catch one of them by bus ID’ strategy, which seems to work: I have two monitors, one connected to each card, and while both used to display Fedora, now only one does.
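For anyone searching later, the usual way to implement that ‘catch one by bus ID’ strategy is a small early-boot script that writes `driver_override` for just the guest card’s bus address before vfio-pci binds. A minimal sketch, with placeholder PCI addresses (check yours with `lspci`):

```shell
#!/bin/sh
# Sketch: bind only the guest card (and its HDMI audio function) to vfio-pci
# by PCI bus address, since both Vegas share the same vendor:device ID.
# 0000:0a:00.0 / 0000:0a:00.1 are placeholder addresses from lspci.
override_to_vfio() {
    root="${2:-/sys/bus/pci/devices}"   # sysfs root; overridable for testing
    if [ -e "$root/$1" ]; then
        echo vfio-pci > "$root/$1/driver_override"
    fi
}
override_to_vfio 0000:0a:00.0
override_to_vfio 0000:0a:00.1
modprobe -i vfio-pci 2>/dev/null || true   # then let vfio-pci claim them
```

Run from the initramfs (or before the amdgpu module loads), only the overridden functions end up on vfio-pci; the host card binds amdgpu as normal.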
Unlike in the Zenith Specific thread, I cannot even get the windows VM to post.
Any thoughts from someone who may have encountered similar?
Only if you’re not running it as root. If you’re using libvirt, it will probably sort out the permissions on its own. (At least, it did when I used it.)
Have you tried just detaching /dev/vdb, to see if it’s a problem with the disk permissions? You should be able to see the OVMF POST screen if you do that.
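If you do want to check the device permissions by hand, something like this shows who can open the raw node (`/dev/sdb` is just the example path from above; skip the setfacl line if libvirt is managing the disk for you):

```shell
# Who owns the block device node, and with what mode? (example path)
[ -e /dev/sdb ] && stat -c '%U:%G %a' /dev/sdb || echo "/dev/sdb not present"
# Running QEMU by hand instead of through libvirt? Grant yourself access:
#   sudo setfacl -m "u:$USER:rw" /dev/sdb
```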
I’ve encountered similar and I don’t really know what’s caused it. I usually just spend a few minutes recajiggering everything and it works.
Have you verified that the GPU you’re passing through is actually getting vfio-pci attached, and that it’s the only thing in its IOMMU group? (I’m just trying to catch the general stuff, because I don’t have TR on hand; I just know a fair bit about passthrough.)
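For anyone following along, both things are quick to check from a shell: `lspci -nnk` shows the bound driver, and a loop over sysfs dumps the IOMMU groups. A sketch (the PCI address is a placeholder; the sysfs root is parameterized only so the helper is easy to exercise):

```shell
# Which driver has the card? ("Kernel driver in use:" should say vfio-pci)
#   lspci -nnk -s 0a:00.0        # 0a:00.0 is a placeholder address
# Dump every IOMMU group so you can see what shares the GPU's group:
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for d in "$root"/*/devices/*; do
        [ -e "$d" ] || continue
        rel="${d#"$root"/}"
        printf 'group %s: %s\n' "${rel%%/*}" "${d##*/}"
    done
}
list_iommu_groups
```

If anything besides the GPU and its audio function shows up in the same group, you either need an ACS-capable slot or the group isn’t isolated enough to pass through safely.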
Any idea which X370 board to get for passthrough? I’m about to redo my Ryzen system and want to know some known-working boards.
ANY help someone could provide would be so appreciated… Maybe someone here has seen this before. I set up a system for a buddy: a Threadripper 1950X on an ASRock X399 Taichi (sTR4), with Fedora installed on a 256 GB NVMe drive and Windows 10 on a 500 GB NVMe drive. I have the latest kernel, and I had the KVM guest working flawlessly with GPU passthrough. I sat here and used the Win10 VM a multitude of times to make sure it worked. He took the computer home and it stopped working.
This is the error that pops up when it’s run… (Error text below image)
ANY help I could get on this would be incredibly appreciated… I’m giving myself headaches trying to research this error, and getting nowhere… THANK YOU!!!
Error starting domain: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainCreate)
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 125, in tmpcb
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 82, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1505, in startup
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1062, in create
    if ret == -1: raise libvirtError('virDomainCreate() failed', dom=self)
libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainCreate)
Here’s the config file for the kvm…
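Not a root-cause fix, but per the error above the lock is held by a libvirt worker stuck in `remoteDispatchDomainCreate`, and the usual way to clear that state is restarting the daemon and making sure no half-started QEMU process is left behind. A sketch, assuming a systemd host:

```shell
# 1/ Restart the daemon that owns the stuck worker thread:
#      sudo systemctl restart libvirtd
# 2/ Check whether a half-started QEMU process is still holding the domain:
pgrep -a qemu-system || echo "no qemu process found"
# 3/ Then re-check the domain state:
#      virsh list --all
```

If the domain hangs the same way on every start, the timeout itself is a symptom of something else (often a passthrough device that never resets), so the libvirtd log is the next place to look.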
I finally got it working! (kind of) Thank you!
I have an Ubuntu VM working with a Vega 64 passed through to it, with the Vega 56 on the host. Your last comment about vfio-pci really helped; it turned out that was the problem. Thanks!
Glad I could help.
Getting vfio-pci attached is tricky sometimes.
Can I do this passthrough with an i7-6700K and a desktop GTX 1070? I just switched to Qubes. Would love some help.
You probably can, but I have no experience with Qubes, so I won’t be able to help you there specifically.
You can with Qubes, but the Qubes people’s philosophy is that this kind of thing is not secure, so you “shouldn’t”.
@john54 did you succeed?
Has anyone recently succeeded in doing the same on Ubuntu 18.04 Server?
I would like to try it on my test server, a Ryzen 1800X on an ASRock X370 Pro Gaming: first the GPU part, then the SSD part later.
What I can see out of the box, without any changes:
1/ The IOMMU seems to be active by default, since I can see the groupings without adding any lines to GRUB. So is there any point in adding those lines in that case?
2/ The VFIO drivers are also already loaded by default, so again, is there any point in adding the lines?
3/ If one needs to install dracut, some of the default tools get removed. Is it possible or necessary to do the same thing as on Fedora with the initramfs, or is that something Debian/Ubuntu handles automatically?
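For 1/ and 2/, both are easy to verify from a shell, and for 3/ Ubuntu’s native tool is initramfs-tools (`update-initramfs`), so there should be no need to pull in dracut at all. A sketch of the checks:

```shell
# 1/ Is the IOMMU actually active? Groups only appear in sysfs when it is:
n=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)
echo "iommu groups: $n"              # 0 means it is off (or unsupported)
# 2/ Are the vfio modules loaded?
m=$(grep -c '^vfio' /proc/modules 2>/dev/null)
echo "vfio modules loaded: ${m:-0}"
# 3/ Ubuntu uses initramfs-tools rather than dracut; after editing
#    /etc/initramfs-tools/modules, rebuild with:
#      sudo update-initramfs -u
```

If the groups are already visible, the GRUB lines are redundant for enabling the IOMMU itself, though you may still want them for options like `iommu=pt`.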
Has Qubes implemented GPU passthrough yet? I thought their Xen host wasn’t capable of that yet.
There are patches to do it but they are frowned upon for ideological reasons.
ah. Hadn’t really looked into it because of the first shooing-away I got.
I am tempted to fund a fork. They aren’t wrong that it’s a security risk, but the risk will be mostly mitigable on TR2, and soon on TR/Epyc with a patch I’m working on.
I’d be interested in seeing that; Qubes does a lot right, but their insular community does have some weird hang-ups.
My big thing is that Fedora kernels build the EHCI and XHCI drivers into the kernel, so vfio-pci can’t load early enough to grab USB controllers; it has to be done with pci-stub, which is built in. The only way I see this being resolved is if vfio-pci gets built into the kernel too, or by running a fully vanilla kernel on Fedora.
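The pci-stub workaround described above can be sketched like this: claim the controller with the built-in pci-stub at boot, then hand it to vfio-pci once modules are available. The `1022:145c` ID and `0000:0b:00.3` address are placeholders; take yours from `lspci -nn` (the sysfs root is parameterized only to make the helper testable):

```shell
# Kernel command line addition (pci-stub is built in, so it wins the race
# against the built-in XHCI driver):
#   pci-stub.ids=1022:145c
# Later, from userspace, hand the controller over to vfio-pci:
rebind_to_vfio() {
    dev="$1"; root="${2:-/sys/bus/pci}"   # root overridable for testing
    [ -e "$root/devices/$dev" ] || return 0
    echo "$dev"   > "$root/drivers/pci-stub/unbind"
    echo vfio-pci > "$root/devices/$dev/driver_override"
    echo "$dev"   > "$root/drivers_probe"
}
rebind_to_vfio 0000:0b:00.3   # placeholder address from lspci
```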
I don’t have a problem with it. I just pass it through anyway, and when libvirt starts the VM it rebinds the driver, connects the controller, and I’m off to the races.
USB 3 actually does behave differently from USB 2 in my experience; it depends entirely on the controller, since no one can seem to agree on an implementation.