Does anything display at all on the card to be passed through?
No, nothing displays on the V64. Removing the GPU from the VM's device list lets the VM boot normally.
I added a Win10 boot ISO to see if that would boot; nothing shows on the GPU. Adding a SPICE device for some sort of output shows that the ISO bluescreens while trying to boot. When trying to boot the already-installed Win10 drive, the spinning-dots circle turns twice and then freezes.
Also, the secondary card is in the last slot of my Asus Prime X370-Pro. There are three x16-sized slots: one x16, one x8, and I think the last is x4. I really need to look up whether that last slot behaves the same, but right now I want to try getting the first slot working.
Do remember that the last slot is PCIe 2.0 x4, so from a bandwidth perspective that's PCIe 3.0 x2. Some cards refuse to boot with that little bandwidth.
Why not just use the x8/x8 configuration, though? Current-gen GPUs can't saturate x8 anyway, so they won't be throttled by it.
That slot also shares bandwidth with both x1 slots, so if either of those is populated, you will only get two PCIe 2.0 lanes, equal to one PCIe 3.0 lane in bandwidth.
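The claim that PCIe 2.0 x4 roughly equals PCIe 3.0 x2 checks out on paper. A back-of-envelope calculation, using per-lane usable bandwidth after encoding overhead:

```shell
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
gen2_x4=$((4 * 500))   # MB/s for four gen-2 lanes
gen3_x2=$((2 * 985))   # MB/s for two gen-3 lanes
echo "PCIe 2.0 x4 = ${gen2_x4} MB/s; PCIe 3.0 x2 = ${gen3_x2} MB/s"
```

So the two come out within a couple of percent of each other, which is why the rule of thumb holds.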
That slot is being used by "VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series]" for the Ubuntu install. I do plan on getting a beefier card for Ubuntu, and all this thrashing around would be for naught, but I would like to figure out how to do it anyway. The card is purely for display output, which it provides fine in that slot; that's all I need from it.
I am not using any other PCIe cards.
I want to figure out how to get it to work because I will be building a computer for my parents that will run two VMs instead of one. The Ubuntu install will be headless, and they will only ever see the Win10 VMs. I will most likely get two RX 560s for this. Right now I can pass through anything except whatever is in the first slot.
Decided to update to kernel 4.17.9 from 4.17.8 to see if anything changes, and I see this during the install:
W: Possible missing firmware /lib/firmware/amdgpu/vega12_gpu_info.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_asd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_sos.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_rlc.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_mec2.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_mec.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_me.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_pfp.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_ce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_sdma1.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_sdma.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_uvd.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_vce.bin for module amdgpu
W: Possible missing firmware /lib/firmware/amdgpu/vega12_smc.bin for module amdgpu
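For what it's worth, those vega12 blobs belong to a different chip than the Vega 64 (which uses the vega10 firmware), so the warnings are likely harmless here; they usually just mean the installed linux-firmware package predates the newly added files. A quick way to see which of the warned-about blobs are actually absent (the list is taken straight from the warnings above):

```shell
# check_vega12_fw DIR: print which of the vega12 blobs named in the apt
# warnings are missing from DIR (normally /lib/firmware/amdgpu).
check_vega12_fw() {
  for f in gpu_info asd sos rlc mec2 mec me pfp ce sdma1 sdma uvd vce smc; do
    [ -e "$1/vega12_${f}.bin" ] || echo "missing: vega12_${f}.bin"
  done
}
check_vega12_fw /lib/firmware/amdgpu
```

If any turn up missing and you actually need them, updating the linux-firmware package (or copying the files from the upstream linux-firmware repo) and rebuilding the initramfs should clear the warnings.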
OK, interesting note: I removed the V64 and added the SPICE GPU to boot the Win10 VM. I set the VM to boot into safe mode with networking and basic graphics, shut down, and re-added the V64. The VM then boots into safe mode and shows the V64 in Device Manager. So I'm wondering if this is an issue with the Win10 drivers instead of KVM? If that's the case, why wouldn't it display anything on the V64?
OK, so uninstalling the AMD Radeon drivers in Win10 allows the VM to boot with the V64 attached, but now I am getting this in Device Manager:
Windows has stopped this device because it has reported problems. (Code 43)
Isn't this something that only affects Nvidia cards?
Trying to reinstall the V64 drivers causes the VM to hard-lock.
Quick question. Is the guide any different if I am using an RX 560 for Linux and a GTX 780 Ti for Windows? I’m just not sure about the part of the guide where you are modifying module files and initramfs.
No, the only difference is that you must use the IDs displayed for your 780 Ti for the passthrough.
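Those IDs are the bracketed [vendor:device] pair at the end of the card's `lspci -nn` line. A small sketch of pulling that pair out (the sample line below is illustrative; run `lspci -nn` yourself for the real one):

```shell
# extract_id LINE: pull the [vendor:device] pair out of an `lspci -nn`
# line; that pair is what goes into the passthrough (vfio) id list.
extract_id() {
  printf '%s\n' "$1" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]'
}

# Sample line for a 780 Ti (your revision/ID may differ):
extract_id "01:00.0 VGA compatible controller: NVIDIA Corporation GK110B [GeForce GTX 780 Ti] [10de:100a] (rev a1)"
```

If you are also passing through the card's HDMI audio function, grab its ID the same way; it shows up as a separate `lspci -nn` line.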
OK, cool. The parts where it mentioned amdgpu threw me off and I wasn't sure.
Kinda related follow-up question: I assume it doesn't really matter which slots I use? On my Crosshair VII I was thinking about putting the GTX 780 Ti in the top slot and the RX 560 in the middle slot. That way, if I get a second M.2 drive, I can run both graphics cards and both SSDs off the CPU, with the RX 560 running at x4, since I probably won't need the bandwidth if all I am doing is browsing the internet in Linux.
Alright, I'm trying to go through this and I'm at the step where you enter the device IDs. I want to double-check that the guide is EXACTLY the same for an Nvidia GPU passthrough aside from changing the device IDs. The guide above mentions amdgpu when editing the initramfs and the module files. Is this supposed to be amdgpu or nouveau?
In theory it shouldn't matter which slot you use, as long as it runs the highest PCIe version available (PCIe 3.0 vs. PCIe 2.0, for example) and has enough lanes to keep the GPU from bottlenecking.
You will want to use nouveau instead of amdgpu if you are passing through the 780 Ti. If you are using the first slot, you might run into the same issues I'm currently having getting it to work. I'm sure there are better *nix guys here who can explain why this happens, but so far I have not been able to figure it out myself.
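To make the nouveau case concrete, the module-file/initramfs step usually boils down to something like the sketch below. This assumes Ubuntu-style paths, and the two IDs are placeholders for a 780 Ti plus its HDMI audio function; substitute the pairs from your own `lspci -nn` output.

```shell
# Sketch only: tell vfio-pci to claim the guest card's IDs, make vfio-pci
# load before nouveau so nouveau never grabs the card, then rebuild the
# initramfs so this takes effect at early boot.
cat <<'EOF' | sudo tee /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:100a,10de:0e1a
softdep nouveau pre: vfio-pci
EOF
sudo update-initramfs -u
```

After a reboot, `lspci -k` should show "Kernel driver in use: vfio-pci" for the guest card if it worked.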
I ended up just swapping the cards and hoping for the best.
I will probably ask in another thread about the specific PCIe configuration on my Crosshair VII Hero. I think the M.2 configuration is different from what I expected, causing the 780 Ti to run at x4, which would bottleneck it.
Thanks for the tutorial. I've successfully done the passthrough with one RX 480, but I was wondering if it is possible to pass through two GPUs to two different VMs.
I tried one RX 480 and one GTX 660, but I can't get any output from the VM with the 660 while the other VM is offline. As far as I've tested, the RX 480 works fine alone, but I can't start a second VM with the second card.
I tried to pass through an old Nvidia 450. I managed to get to the point where Windows would know it's there, but once I installed the driver it would BSOD on every boot, and I had to use SPICE to even get any kind of video out of it.
I was testing it on Elementary OS though, which turned out to have pretty old versions of everything, which may be the issue.
So I will attempt it on Ubuntu 18.04 next and see how it goes.
Also, Wendell gave me some rights, so I can update the original tutorial with my findings. Some steps could use a bit more "how to" (for newbies like me); I had to do some side-googling as well. Or I may start a completely new thread, we'll see.
You need to hide the hypervisor for that (just search the forum for Code 43).
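For reference, a sketch of the usual libvirt-side workaround; the domain name and vendor_id string below are placeholders, not something from the original guide:

```shell
# Code 43 workaround sketch: run `virsh edit <domain>` and, inside the
# <features> element of the domain XML, add the following so the Nvidia
# driver can't tell it's running under a hypervisor:
#
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
#   <hyperv>
#     <vendor_id state='on' value='whatever1234'/>
#   </hyperv>
#
# (vendor_id accepts up to 12 characters; the value itself doesn't matter.)
```

With both set, the guest driver generally stops flagging the card with Code 43.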
Yeah, I know, but I had to update QEMU and such so it would even let me save that config; otherwise it complained about it not being compatible and wouldn't let me save. Then, I guess once I had it working, Windows started BSODing, and if I remember correctly I rebooted the host and it broke QEMU completely, so… yeah.
Hello, first time posting… I apologize for any perceived abundance of ignorance
I do not have any AMD hardware in my laptop and, well… I'm just curious if there is an Intel/Nvidia-specific version of the following portion(s) of the guide. Or is "amdgpu" sort of universal to generic Linux configs?
(I will list my specs and what ls-iommu.sh returned below the quote)
Laptop: Acer Aspire V15 Nitro - Black Edition
CPU: Intel Core i7-4710HQ
GPU: NVIDIA GeForce GTX 860M (2GB GDDR5, GM107)
Display: 15.6”, Full HD (1920 x 1080)
HDD/SSD: 128GB SSD + 1TB HDD
Original: Windows 10
Current: Ubuntu 18.10 (as of 2018-07-31)
IOMMU Group 1 00:01.0 PCI bridge : Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
IOMMU Group 1 01:00.0 3D controller : NVIDIA Corporation GM107M [GeForce GTX 860M] [10de:1392] (rev a2)
IOMMU Group 2 00:02.0 VGA compatible controller : Intel Corporation 4th Gen Core Processor Integrated Graphics Controller [8086:0416] (rev 06)
IOMMU Group 3 00:03.0 Audio device : Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller [8086:0c0c] (rev 06)
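For context, ls-iommu.sh scripts are usually just a small loop over sysfs. A minimal sketch of that idea (not necessarily the exact script from the guide):

```shell
# ls_iommu DIR: print "IOMMU Group N <pci address>" for every device
# under DIR (normally /sys/kernel/iommu_groups). On a real system you
# would feed each printed address to `lspci -nns` to get the device
# name and its [vendor:device] ID, as in the listing above.
ls_iommu() {
  for g in "$1"/*; do
    [ -d "$g/devices" ] || continue
    for d in "$g"/devices/*; do
      echo "IOMMU Group ${g##*/} ${d##*/}"
    done
  done
}
ls_iommu /sys/kernel/iommu_groups
```

If the function prints nothing, IOMMU is likely disabled in firmware or missing the `intel_iommu=on` / `amd_iommu=on` kernel parameter.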
amdgpu is the kernel module of the driver for current AMD GPUs.
nvidia would be the equivalent for the proprietary Nvidia driver.
nouveau would be the equivalent for the open-source Nvidia driver, and finally
radeon is the module for older AMD/ATI cards.
Use whichever one your card is using. You can find out with:
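The quoted reply ends mid-sentence; one common way to check, assuming the command that was cut off was `lspci -k`:

```shell
# gpu_driver: filter `lspci -k` output down to the display controllers
# and the "Kernel driver in use" line, which names the bound module
# (amdgpu, radeon, nouveau, nvidia, or vfio-pci once bound for passthrough).
gpu_driver() {
  grep -A 3 -E 'VGA compatible controller|3D controller' \
    | grep -E 'controller|Kernel driver in use'
}

# On a live system:
#   lspci -k | gpu_driver
```

The module named there is the one to reference in the blacklist/initramfs steps of the guide.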
Laptops in general do not work well, or at all, with graphics passthrough. See the info in this article: https://gist.github.com/Misairu-G/616f7b2756c488148b7309addc940b28
Just my experience: the Nvidia proprietary drivers break passthrough for their cards. Every time I tried driver 390 and passed the card through, it would BSOD the VM. Try to just use the nouveau drivers; don't install the Nvidia proprietary ones.
Will try all this on Ubuntu 18.04 next. Elementary seems to have older packages for some things, which may be the issue here.