Setting up for a VM

OK, this has probably been asked a million times, how do I set up a VM?

I play Windows-only games (BF1, R6:S, and soon BFV, etc.) and I want to move to Linux/Ubuntu as my only OS. I have a Ryzen 5 2600X and an RX 580, both water cooled, on an X370 ROG Gaming-F mobo with a 512GB NVMe drive and two 3TB HDDs. What more do I need to do to get a Windows 10 VM working for my Windows-only games like BFV?

If you are doing passthrough then you need two GPUs: one for the host and one for the guest.

Guides here:



https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF


When you say host and guest, I am guessing this means that Ubuntu will take, say, a GT 710 and the RX 580 will go to Windows? Also, I am very much a noob at this Ubuntu stuff.

Yes, the host is your base operating system, and the guest is the virtual machine.

Perfect, I am getting the hang of the slang. So because I am using Ryzen I need a second discrete GPU; can it be something cheap like a GT 710, with my RX 580 as the guest GPU?


A GT 710 would work great as the host GPU.

Sweet. And from what I understand, the new Ryzen 2000 series CPUs have IOMMU grouping fixed, allowing the GPUs to land in separate groups. Is this accurate?

It more depends on the chipset and specific motherboard model. Each motherboard model has to be checked individually, although some chipsets tend to be better than others.

The following script is from this page on the Arch wiki, and it will show you your IOMMU groups (if they are enabled):

Ensuring that the groups are valid

The following script should allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.

    #!/bin/bash
    shopt -s nullglob
    # Walk every device registered in an IOMMU group
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        # Extract the group number from the sysfs path
        n=${d#*/iommu_groups/*}; n=${n%%/*}
        printf 'IOMMU Group %s ' "$n"
        # Show the device name with its vendor:device IDs
        lspci -nns "${d##*/}"
    done
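As a quick sanity check before running the whole script, you can just count the IOMMU groups the kernel exposes (a minimal sketch; the sysfs path is standard, but the directory only gets populated once IOMMU is enabled):

```shell
# Count IOMMU groups exposed by the kernel.
# Prints 0 if IOMMU is disabled or unsupported
# (the directory will be missing or empty in that case).
ls /sys/kernel/iommu_groups/ 2>/dev/null | wc -l
```

If this prints 0, the longer script above will also print nothing, and you need to fix your UEFI/kernel settings first.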

I have the ASUS ROG X370 Gaming-F, so when I put the host GPU in, I put it in, say, my lower PCIe slot and leave my more powerful guest card in the top slot. I then run this script in a terminal and hope to see different group numbers for each GPU?

Correct.

You have to enable AMD-Vi in the UEFI and add amd_iommu=on to the kernel arguments in the GRUB config.

I think I have already enabled AMD's version of CPU virtualisation, but I'm not sure about the grub config.

The linked guides further up the thread have lots of useful info, including how to enable IOMMU in grub, and so does this page: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

To enable IOMMU in grub on Ubuntu, edit /etc/default/grub and add amd_iommu=on between the quotes on the GRUB_CMDLINE_LINUX_DEFAULT=" " line. Save the file, then run update-grub and reboot.
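For example, the finished line might look like this (a sketch; "quiet splash" is Ubuntu's usual default and your existing options may differ):

```shell
# /etc/default/grub -- add amd_iommu=on inside the existing quotes:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on"
sudo update-grub
sudo reboot
# After rebooting, confirm the flag is active
# (prints "amd_iommu=on" if it took effect):
grep -o 'amd_iommu=on' /proc/cmdline
```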

Thanks guys, when I get the time in the next week or so I will give it a go.

OK guys, little setback: I need a bigger PSU, got one on the way. While I wait, Ubuntu or Arch Linux? I have a few days to learn, by the way; I just really want to get away from the Windows shite.

I would recommend Ubuntu or Fedora if you are a beginner. But this is only my opinion.

The benefits of Arch for this specific purpose is that it’s all very well documented in the Arch Wiki.

Also, it’s easier to install the ACS-patched kernel via the AUR if your mobo doesn’t have good IOMMU isolation.

I would recommend Antergos (Arch based) over Arch for most use cases. It has most of the benefits of Arch, but is much easier to install. And, in my experience, is less prone to broken packages.

I currently have Ubuntu. From what I could find, most if not all of the guides are for Arch, which is why I ask. I am a beginner, though one that has been on and off for the past year thanks to games. Thanks guys, big help, this is the best forum.

I have a 512GB Samsung 960 EVO NVMe and a 120GB WD Green SATA SSD. Which OS do I put on which?

Any hits to performance? Say, more than 10%? I plan on doing this VM setup this weekend.

Followed the guide to a tee; my Ubuntu desktop is using the AMD card while the Windows VM is using the cheap-ass 710 that was supposed to be my host card. How do I reverse this?
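A common cause of this (a sketch, assuming the vfio-pci.ids binding method from the Arch guide linked above): the IDs of the card you want to *pass through* must be the ones bound to vfio-pci, and the UEFI's primary-display setting decides which card the host boots on. The IDs below are placeholders, not your actual values:

```shell
# Find the guest card's vendor:device IDs (the RX 580 plus its HDMI audio function):
#   lspci -nn | grep -Ei "vga|audio"
# Bind only THOSE IDs to vfio-pci in /etc/default/grub
# (placeholder IDs shown -- substitute your own from lspci):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on vfio-pci.ids=xxxx:xxxx,xxxx:xxxx"
# Then run update-grub and reboot. Also check the UEFI for a
# "primary display" / initial-display-output option so the host
# initializes on the GT 710's slot instead of the RX 580's.
```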