Has anyone figured out how to use a dual-gpu card for Windows virtualization yet? It'd be interesting to see a R9 295x2 or a Radeon Pro Duo be able to do that.
I will be glad to figure it out if I get my hands on one haha.
I'm currently dual booting Debian Stretch and Windows 7. How easy is it to do the GPU passthrough in Debian as compared to in Fedora?
I run a Ryzen 5 1600X on an ASUS Crosshair VI Hero, with an XFX R9 280X and an old XFX Radeon HD 5770, on Ubuntu 16.04.2. Last weekend I did the BIOS update to version 13.04, including AGESA 126.96.36.199.
So far I have only run Wendell's test script for the IOMMU grouping. I will play around with PCIe passthrough in the near future. Oh... here are the interesting results of the test script:
IOMMU Group 0 00:01.0 ...
IOMMU Group 11 03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02)
IOMMU Group 11 03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02)
...
IOMMU Group 11 21:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:1343]
IOMMU Group 11 23:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 12 29:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT [Radeon HD 7970/8970 OEM / R9 280X] [1002:6798]
IOMMU Group 12 29:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti XT HDMI Audio [Radeon HD 7970 Series] [1002:aaa0]
IOMMU Group 13 2a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper XT [Radeon HD 5770] [1002:68b8]
IOMMU Group 13 2a:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series] [1002:aa58]
...
IOMMU Group 7 2b:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device [1022:1456]
IOMMU Group 7 2b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:145c]
...
IOMMU Group 8 2c:00.0 Non-Essential Instrumentation: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
IOMMU Group 8 2c:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 8 2c:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Device [1022:1457]
IOMMU Group 9 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 9 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
I didn't change anything so far; the grouping is at its defaults. The two graphics cards are each in a separate group, which makes sense I think. I also have three USB controllers in total, split across two groups: the Ryzen SoC hub and the X370 hub, maybe? There are also two SATA controllers in separate groups; I think one comes from the Ryzen SoC and the second from the chipset.
I will play around a little to figure out which port is connected to which controller.
PS: Sorry for my bad english
I had some time to play around with the passthrough. Unfortunately I have to say that the ASUS Crosshair VI Hero with its latest BIOS seems to be a little buggy...
I tested three (mainline) kernels from the Ubuntu team:
* 4.12.0-041200 RC7
Kernel 4.11.0-041100 was tested because the VirtualBox 5.0.40 DKMS driver module doesn't support the 4.12.* kernels.
Before I started, I checked that IOMMU was enabled in the BIOS and set it from Auto to Enabled.
Then I added Wendell's kernel parameters as additional boot parameters in the GRUB defaults file and rebuilt the GRUB config with update-grub.
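For anyone following along, the change is a one-liner in /etc/default/grub plus a config rebuild. A minimal sketch, assuming the parameters from the guide were amd_iommu=on iommu=pt (check your copy of the article for the exact set):

```shell
# /etc/default/grub -- append the IOMMU parameters to the existing line,
# keeping whatever options were already there:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt"

# Regenerate the GRUB config (on Ubuntu, update-grub wraps grub-mkconfig):
sudo update-grub
```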
I tested VirtualBox first, because I use it very often and it was already installed on my system.
I attached my old Radeon HD 5770 to a new VM via the VirtualBox command line:
VBoxManage modifyvm "PCIexpress Passthrough" --pciattach 2a:[email protected]:00.0
There are some criteria that must be fulfilled to use PCI passthrough with VirtualBox, which you can read about here: VirtualBox Manual - 9.6. PCI passthrough
Powering on the VM resulted in an instant crash of my entire system, like an overclocking failure: black screen, flashing BIOS debug LED and Q-Code 8, plus the fan on my HD 5770 starting and stopping.
I tried the two 4.12 kernels next with qemu/kvm and libvirt/virt-manager. I'm a little lazy... I used the virt-manager GUI to set up my VM, and passing the Radeon HD 5770 through to it was very easy that way. But starting the VM had the same result as VirtualBox: instant black screen, flashing BIOS debug LEDs... and so on...
Next I went into the BIOS and set the RAM speed back from 3200 MHz to 2133 MHz to see if it made a difference. Done... saved... and now my system went really nuts! The Crosshair won't POST anymore!
The BIOS boot debug LEDs indicated that the board failed to verify the graphics card(s)! I pressed the Safe Boot button - no change, no boot. BIOS reset - no change, no POST!
Finally I removed both graphics cards from the board and started without any graphics... result: POST! I powered off my system, reinstalled my R9 280X... powered on... result: POST!
That's another bad experience I've had with my Crosshair VI Hero! In general I'm not happy with this board and I regret my decision to buy it instead of the ASRock X370 Taichi! But that's another topic.
If someone has a good idea of what I could try... you're welcome!
Do this, but on the same system rather than from a server.
You never touch the Windows VM. It auto-boots, logs in, and starts Steam. Then you just stream the game directly to the Linux side. Since they're on the same machine, as long as the networking is set up right, there would be essentially zero lag besides video processing time, because the traffic isn't going over ethernet/wifi, just the virtualized networking on the system.
@wendell regarding the Gaming 5. It's currently the board I'm thinking about getting and maybe doing the passthrough thing. If I remember correctly (can't check since YT doesn't work here and there's no MP3) you mentioned passing through the SATA controller as a whole. Was that only referring to the Taichi or also the Gaming 5? As I see it the Gaming 5 doesn't have a separate SATA controller, just the X370. Passing it through would mean I can't use it in the host obviously, which is fine if I'd just be using one drive for windows and wouldn't need a data-drive. It just so happens to be that games are getting ridiculously huge these days and stuffing everything on an SSD doesn't work as well.
So, kind of a bummer. Any way around that? Of course I could use the data-drive exclusively in Windows, but what if I need some data on Linux too? Is there any way to solve this problem? Installing games on the NAS would be an option I guess, but... yeah, not gonna do that I think.
Is the only solution just using VFIO and hoping for better performance in the future? How big is the performance difference even?
I flashed the AGESA update for my mobo last week.
It crashed. Waiting on the RMA now.
Hey, I have a question about GPU locations. If we assume that all the gaming and other demanding GPU work will be done in the VM then the host OS can likely get away with a relatively weak GPU. Has anyone tried installing the host GPU in the bottom slot on something like the Taichi?
The reason to do that is because if you populate both of the upper slots then both GPUs will be running at 8x because of the lack of PCIe lanes. While the bottom slot is only 4x electrically (and is shared with other stuff), that should still be enough for a relatively undemanding host OS and it would let the more powerful GPU for the VM run at the full 16x.
But I'm not sure if this is even really possible so I wanted to check.
Sure, go for it. Though do keep in mind that anything short of a 1080Ti will not even saturate a PCIe 3.0 x8 slot, so it's less of an issue than it may seem.
Hello, loving the guides and videos, especially the pfSense router series.
I’m currently going mano-a-mano with passthrough using the Ryzen article guide and I’m butting up against one issue:
vfio-pci is not being used at all for the secondary card… (I have a GTX 560 Ti and an RX 480), so whenever I reboot I still see the secondary screen, and lspci shows it is still using its AMD driver and not vfio-pci.
I have done the following steps:
- Installed Fedora 26 and updated.
- Installed @virtualization tools.
- Modified /etc/default/grub with the line
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora_battlestation/root rd.lvm.lv=fedora_battlestation/swap rhgb quiet iommu=1 amd_iommu=on rd.driver.pre=vfio-pci"
- Created /etc/modprobe.d/vfio.conf with the line to use my XFX 480:
options vfio-pci ids=1002:67df,1002:aaf0
- Ran dracut command
- Ran grub2-mkconfig command
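For context, the last two steps on Fedora would typically look like the following; paths are the usual Fedora defaults, so verify them on your install (EFI systems keep grub.cfg elsewhere):

```shell
# Rebuild the initramfs for the running kernel so vfio-pci can load early:
sudo dracut -f --kver $(uname -r)

# Regenerate the GRUB2 config (BIOS path shown; EFI installs use
# /boot/efi/EFI/fedora/grub.cfg instead):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```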
The one clue I may have found is a line in ‘journalctl -b’ that states
dracut-pre-udev: modprobe: FATAL: Module vfio-pci not found in directory /lib/modules/4.11.11-300.fc26.x86_64
It certainly looks odd to me since ‘modinfo vfio-pci’ succeeds and ‘modprobe vfio-pci’ returns nothing (as it should be).
Is there something horribly obvious I’m missing? Searching around for my dracut error message and variations on it have not yielded much.
Try updating/creating a vfio.conf in /etc/dracut.conf.d with add_drivers+=" vfio vfio_iommu_type1 vfio_pci " and THEN running dracut -f --kver
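Spelled out, that would be something like this (use straight quotes, not the curly ones the forum likes to insert; the --kver argument shown is just the running kernel, adjust if you boot a different one):

```shell
# /etc/dracut.conf.d/vfio.conf
# Force the vfio modules into the initramfs so they bind before the GPU driver:
add_drivers+=" vfio vfio_iommu_type1 vfio_pci "
```

Then rebuild the initramfs:

```shell
sudo dracut -f --kver $(uname -r)
```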
Another user reported this, and I think maybe my stuff does this for me automagically these days?
That worked, the 480 now shows vfio! Alas, now my Gnome session crashes when logging in…
Nothing like the present to get some learning on vm-ing it up with the console
I ran into problems with an ASRock AB350M Pro4 board. There are a PCIe 2.0 x16 and a PCIe 3.0 x16 slot on board, and my video cards are an FX 580 + GTX 1080. At first I ran into the IOMMU group problem, because the PCIe 3.0 x16 slot is not in an isolated group. However, I found out that the PCIe 2.0 slot is in an isolated group, so I swapped the two video cards (GTX 1080 in the PCIe 2.0 x16 slot, FX 580 in the PCIe 3.0 x16 slot). Now the GTX 1080 is in an isolated group, but the problem has turned into this: the system always uses the GTX 1080 as the primary video card, and vfio won't bind to it. Does anyone have thoughts on that? Thanks.
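Not the original poster, but one approach that reportedly helps when the passthrough card keeps getting grabbed as the primary GPU is to make vfio-pci claim it before the regular driver loads. A sketch, assuming the usual GTX 1080 GPU/audio IDs (confirm yours with lspci -nn first):

```shell
# /etc/modprobe.d/vfio.conf
# 10de:1b80 / 10de:10f0 are typical GTX 1080 GPU and HDMI-audio IDs --
# verify with: lspci -nn | grep -i nvidia
options vfio-pci ids=10de:1b80,10de:10f0

# Make sure vfio-pci loads before either NVIDIA driver can claim the card:
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
```

You may also need video=efifb:off on the kernel command line so the host console doesn't keep the card's framebuffer, and you have to rebuild the initramfs afterwards for the modprobe config to take effect at boot.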
I built out a KVM configuration for my 1700x rig, but the performance has been pretty bad. After getting everything up and running in accordance with the guide, I was getting terrible stuttering, both in interactions with the OS and when running 3d games.
I disabled NPT after reading about the bug with AMD chips and that helped with the constant stuttering, but performance was just not what I had hoped.
Even with settings turned down on Borderlands 2, I was still only getting 40-50 fps, with frequent dips far below even that.
I am running:
AsRock X370 Taichi
Geforce 980ti as main card, 710 as secondary
I built my deployment on top of Manjaro, and performance within the Linux OS is great if I turn off VFIO. Has anyone else run into this? How do we squeeze the most power out of this sort of configuration with NPT disabled? What levels of performance are being seen, and considered acceptable?
Right now the Xen hypervisor is working much better, but core pinning helps a lot. I/O is the next major bottleneck.
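For anyone wanting to try the core pinning on the KVM/libvirt side: it can be done with virsh instead of hand-editing the domain XML. A sketch, assuming a hypothetical domain named "win10" with 4 vCPUs pinned to host cores 2-5 (adjust to your VM name and CPU topology; on Ryzen, keeping the vCPUs within one CCX tends to help):

```shell
# Pin guest vCPUs 0-3 to host cores 2-5 and persist it in the domain XML:
virsh vcpupin win10 0 2 --config
virsh vcpupin win10 1 3 --config
virsh vcpupin win10 2 4 --config
virsh vcpupin win10 3 5 --config

# Verify the current pinning:
virsh vcpupin win10
```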
I used the pinning suggestions outlined in your article, but it still seemed pretty abysmal.
I tried building Xen, but the hitch there is that Xen can't mask itself from the guest OS the way KVM does. There is a driver patcher out there that supposedly corrects this, but even with it, I could never get the Code 43 errors to disappear within Windows.
It seems like AMD + Nvidia is a terrible combination to have right now.
Hi! I have never used KVM virtualization before, only VMware, Xen and Hyper-V. I was intrigued by Linus's "7 Gamers, 1 CPU" idea of having one machine with Unraid taking care of storage while playing games on Windows VMs, and I was wondering about doing something similar but with a ZFS-based OS handling the storage. I was thinking of using Oracle Linux with ZFS and KVM for it. Since I was planning on buying an AMD R7 1800X and that same ASRock X370 Taichi (the cheapest board available to me with on-board WiFi), I stumbled across this guide while researching. Do you think I should wait and see, or go for it? What OS should I try this on? How does ZFS perform on Fedora?
ZFS is great on Fedora.
I am really looking forward to the promised performance comparison to bare metal. I know there’s a ton of other topics to focus on right now, but my fingers are crossed, hoping for some benchmarks.