
New to both Linux *and* VMs - basic questions

Hi! First off I'd like to show some appreciation for this forum, as I admit to reading up on stuff here for a while before finally joining today ^^ I only started educating myself on Linux in Dec 2019 (still doing it!) and I've tried several distros, both Debian and Arch based, and settled on Arch, with either Manjaro or Endeavour plus the Deepin DE as the Linux of choice for my next rig overhaul, yay

My last rig is from 2009: an AMD X3 710 / 8 GB on an 880G chipset with the HD 4250 IGP, running W7-64 Ultimate. Over the years I've doubled the RAM from 4 to 8 GB and swapped a mechanical drive for an SSD. I admit I was really tempted to nab a 1090T on the cheap on the used market, but with Ryzen out I'm happy I waited it out haha

Anyway, I would like my next rig to run Manjaro or Endeavour as the host and W10 as a guest VM. No gaming, or only really light gaming (old titles), would be done on the W10 guest. I've researched a bit and understand that a VM running inside a full-blown OS (an Arch-based Linux in my case) is known as a Type 2 hypervisor, am I correct? Can Type 2 HVs pass through a GPU and other hardware to a Windows guest effectively? Is it very hard for Linux/VM newbies like myself?

I've tried VirtualBox in Linux Mint 19.3 on my current machine, but the 880G chipset has no IOMMU, so no GPU passthrough was possible. Even so, the W10 guest ran rather OK (no gaming). My questions are:

#1 With a Ryzen non-APU, would it be cool if I slot in an 8400GS for the host and pass through an HD 5450 to the W10 guest? Remember, no gaming, and I already have these 2 cards on hand; I also like that both cards are passively cooled and silent ^^ My research also suggests that for passed-through GPUs, AMD is currently easier than Nvidia? Which card ought to go in a PCIe x4 slot if that's the only slot available for a 2nd GPU on a mobo? Planning an mATX build.

#2 How is audio going to work? In my previous experiment it was shared, but how does it work when a GPU is passed through? I'm planning to use a DAC with USB/optical/coax inputs. Would shared audio work, or do I have to pass through, say, coax/optical for the guest while using USB audio for the host?

#3 How is storage going to work? My plan is to install both Linux and the W10 guest on a 256 GB NVMe drive plus 1-2 mechanical drives for storage. Currently I have only one 2 TB mechanical, but maybe I could add another 2-4 TB drive if performance is much better with SATA passthrough for the guest? Can we pass through SATA port(s), or do we have to slot in an add-on card and pass that card to the guest? And can the host access guest storage while both are running?

#4 Is there a way/function to autostart the guest VM at a specific time, and shut it down or put it to sleep as well? Do guest VMs use resources while they are in sleep mode?

#5 Would everything work without a KVM switch, since I am using a display with multiple/switchable inputs? Am I correct to say keyboard/mouse passthrough is done mostly by those gaming on a Windows guest, for better accuracy/response time? In my previous experiment I did not pass through the keyboard/mouse and I got stuff done fine on the W10 guest, but I thought it best to clarify here.

Thanks to anyone who could help me out with the above ya! Cheers ^^

You may want to search for the Ultimate VFIO guide on the forum, but to quickly answer your questions:

#1: Yes

#2: You can use shared audio, or you can pass through specific hardware. If you pass through the GPU and it has an audio device, that device belongs to the guest. You can also pass through individual USB devices, so handing the DAC off to MS Windows is a thing.

#3: Storage works as simply or as complicated as you want. If you do not pass through an HDD, then your MS Windows install is more than likely going to exist as a container file on the host's file system. If you have an existing MS Windows install, you can just point the VM to boot from that partition. It is highly recommended to dedicate a drive (read: not a partition on the same drive as the host) to the VM guest for IO reasons and ease of maintenance.
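If you end up on libvirt/QEMU, handing a whole drive to the guest is basically one command. Treat this as a sketch: the VM name "win10" and the by-id path are placeholders for your own setup.

    # Attach an entire disk to the guest; use the stable /dev/disk/by-id path,
    # not /dev/sdX, since letter assignments can change between boots.
    virsh attach-disk win10 \
        /dev/disk/by-id/ata-YOUR_DISK_ID vdb \
        --targetbus virtio --persistent
    # virtio is fastest once the guest has virtio drivers installed;
    # use --targetbus sata (target sdb) if you want to avoid virtio.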

#4: Yes. Depending on what you are using as a VM hypervisor solution, it may let you set that up directly. Otherwise, learn how to use cron and/or startup scripts to do this.
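For example, with libvirt/QEMU a cron-based schedule could look roughly like this (the VM name "win10" is a placeholder; run it from root's crontab or a user allowed to manage qemu:///system):

    # crontab entries: start the guest at 08:00 on weekdays,
    # send it a graceful ACPI shutdown at 18:00.
    0 8  * * 1-5  virsh --connect qemu:///system start win10
    0 18 * * 1-5  virsh --connect qemu:///system shutdown win10
    # For simply starting the guest whenever the host boots,
    # libvirt also has: virsh autostart win10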

#5: Yes. Depending on the VM hypervisor solution, it can take care of that. Worst-case scenario, you can use SPICE to share the devices seamlessly between host applications and guest applications.

As mentioned, there are a lot of threads on this, but look for the Ultimate VFIO guide. It is geared towards using Looking Glass, but the basic setup will get you where you need to be.


Hi Mastic_Warrior! OK, about storage: what is the usual practice on the latest/current hardware? I ask because my old hardware has an SSD as its fastest drive and all this NVMe stuff is uncharted waters for me haha! Suppose I reuse my old SSD - do people install the Linux host on the NVMe or the SSD, and which one gets the Windows guest, or does it not matter?

Honestly, it does not matter.

So here is what I do at home and at work. I only have SSDs; I am too poor for NVMe-capable hardware /s

I have one SSD with the host OS. For temporary VMs, I just create a QCOW file and point the VM to that. The QCOW lives on the HOST drive.
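For reference, creating such a file is a one-liner with qemu-img (the path and size here are just example values):

    # Create a 100G sparse qcow2 image; it only consumes real space
    # as the guest writes data.
    qemu-img create -f qcow2 /var/lib/libvirt/images/win10.qcow2 100G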

If the VM will be a long-term VM, then I actually use mdadm and LVM to manage a pool of storage across the other SSDs in my systems, and then carve out a chunk of space from there for the VM to live in.
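Roughly, that setup looks like the following; /dev/sdb and /dev/sdc, the pool name, and the sizes are placeholders, and RAID1 is just an example level:

    # Mirror two spare SSDs, turn the array into an LVM pool,
    # then carve out a logical volume per VM.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0
    vgcreate vmpool /dev/md0
    lvcreate -L 120G -n win10 vmpool
    # The VM then uses /dev/vmpool/win10 as its disk.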

So in your case, you can play around with the VM by putting it in a cow or qcow file, and once you feel you have it set up the way you want, you can write that qcow out to a physical disk later. Running your VM from a file instead of a real partition is slower (marginally, when we are talking SSDs), but it is a safe way to manage VM space and space usage without messing with partitioning if you are not strong on that subject. Hence why it is a good idea to have another HDD/SSD to use for VMs, so you do not risk blowing away the host OS's partition(s) by accident.
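When you get to the "write it to a physical disk later" step, qemu-img can do the conversion. Paths here are placeholders, and be very sure about the target device, because it gets overwritten:

    # Convert the qcow2 image to raw and write it straight onto a dedicated drive.
    # /dev/sdX is a placeholder -- triple-check it is the right disk first.
    qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/win10.qcow2 /dev/sdX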


I only purchase gaming mobos now that have the ALC1220 built in. I can pass that through to a gaming Win7 VM (even on X570 boards) and get 8-channel audio working in games (or 8-channel 192 kHz from a Blu-ray disc, etc.).
The Asus Xonar U7 or similar USB card will also work for gaming in passthrough setups, either with dedicated USB controller passthrough (recommended) or by just using virt-manager to assign the USB device to the VM.
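If you go the libvirt route for a single USB device from the CLI instead of virt-manager, it boils down to a small hostdev definition. This is only a sketch: the vendor/product IDs below are placeholders you would replace with whatever lsusb reports for your DAC.

    lsusb   # note the "ID xxxx:yyyy" column for your device

    cat > dac-usb.xml <<'EOF'
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1234'/>   <!-- placeholder vendor ID -->
        <product id='0x5678'/>  <!-- placeholder product ID -->
      </source>
    </hostdev>
    EOF
    virsh attach-device win10 dac-usb.xml --persistent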

For Windows VMs, from what I have seen the virtio storage driver has crashes due to race conditions, so you must use a fake SATA or SCSI controller. I use that with the RAW image format. You should first install Windows in the VM using standard IDE mode, then attach virtio-win-0.1.160.iso (or newer) to the Windows VM as a CD-ROM and install the drivers. Once you attach a disk on a SATA port and Windows installs the drivers for it, you can switch the Windows C drive over to SATA after shutting down the VM. You can also easily mount VM raw images on the host as virtual drives.
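Mounting a raw image on the host is a loop-device job; do it only while the guest is shut down, and treat the paths here as placeholders:

    # Map the image to a loop device, exposing its partitions (-P), then mount one.
    sudo losetup --find --show -P win10.img   # prints the device, e.g. /dev/loop0
    sudo mount /dev/loop0p2 /mnt              # p2 = whichever partition holds C:
                                              # (needs ntfs-3g or the ntfs3 driver)
    # When finished:
    sudo umount /mnt
    sudo losetup -d /dev/loop0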

Using the virtual-disk method, you can put the Windows install image on any sort of special Linux filesystem that you want, including mdadm software RAID5, RAID0, NVMe RAID0 (lol), etc.

You can also pass modern NVMe drives through into Windows 7. Once I hooked up a PLX card and saw 4 drives directly in Win7 (lol). This gives fast gaming performance, but a Linux guest with virtio on host NVMe RAID would beat it.

For non-gaming VM usage you can install the SPICE guest additions in the Windows VM. https://www.spice-space.org/download.html
Then you will have a shared clipboard and the guest screen can auto-resize to match the window. In the Windows VM you can add multiple QXL displays for multi-monitor setups; I use this at work to give my Windows VM dual monitors. Each monitor window can be any size or moved to any monitor I want. You need to launch remote-viewer for this, and you can use a script to launch the VM and remote-viewer together. As the name implies, you can also use remote-viewer for a VM on another PC.
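The launch script can be tiny; something like this should work on a qemu:///system setup, with "win10" as a placeholder VM name:

    #!/bin/bash
    # Start the guest if it is not already running, then open its SPICE display.
    VM=win10
    virsh --connect qemu:///system start "$VM" 2>/dev/null
    sleep 3   # give the guest a moment to bring up its display
    remote-viewer "$(virsh --connect qemu:///system domdisplay "$VM")" &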

Also look up the virsh commands in general for starting/stopping/suspending VMs. If you don't do passthrough you can basically "pause" a VM at any time, reboot your PC, and then unpause the VM later, right where you left it. If you use passthrough this could be trouble.
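The pause-across-a-reboot trick is virsh managedsave; plain suspend/resume only pauses the guest in RAM. Quick sketch, with "win10" as a placeholder:

    virsh suspend win10        # pause in RAM (state is lost if the host reboots)
    virsh resume win10
    virsh managedsave win10    # save guest state to disk; survives a host reboot
    virsh start win10          # automatically restores the saved state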

Some older devices don't do passthrough so well. That old AMD card will probably have IOMMU issues, AND it will probably have compatibility problems on a Ryzen mobo, forcing you to drop it to a lower PCIe speed. I recommend a low-cost Polaris GPU like the RX 460/550/560 for the guest GPU.

That old-ass Nvidia 8400GS card will also probably have compatibility issues on a modern Linux/Ryzen system.

You should start by dumping the guest GPU's vBIOS. You might have to boot with CSM mode on first.
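Dumping the ROM through sysfs usually works once the card is not driving a display. Run this as root; the PCI address below is a placeholder you would get from lspci:

    # Find the guest GPU's address first, e.g.: lspci -nn | grep -i vga
    cd /sys/bus/pci/devices/0000:0X:00.0/   # 0000:0X:00.0 is a placeholder
    echo 1 > rom                            # enable reading the ROM
    cat rom > /tmp/guest-gpu.rom            # the dumped vBIOS
    echo 0 > rom                            # disable it again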

Make sure to properly install OVMF before getting started; don't even install the VM until you do. On Ryzen, if you go that route, you need to set several BIOS options for IOMMU. Make sure to use the qemu/kvm system connection and not a user session, otherwise passthrough stuff won't work (at least not easily).
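Once IOMMU is enabled, the usual sanity check is to list the groups and make sure the guest GPU (and its HDMI audio function) is cleanly separated from anything the host still needs. This is the standard snippet that floats around the VFIO guides:

    #!/bin/bash
    # List every IOMMU group and the PCI devices inside it.
    shopt -s nullglob
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU Group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done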

For Ryzen you should purchase only X370/X470/X570 boards, so you can use the dual x8 slots for separate IOMMU groupings. Most other mobos will limit how you can plug in GPUs. You can boot off a host GPU in a x1 slot, leaving the faster slots free for gaming. You can cut open a x1 slot with a Dremel to mount a GPU in it; see that Linus video on the correct way to do it, versus the "destroy your motherboard" way. You only get one try on each slot, unless you can solder well.
The USB controllers on Ryzen mobos are excellent for passthrough.
Usually on ASRock Ryzen mobos the 2 USB ports directly next to the audio jacks are in a separate IOMMU group, so this is good for VM usage.

For a Linux guest you can now use virgl to run OpenGL inside the guest (games, CAD, whatever). Set the SPICE display's listen type to "none", enable OpenGL, select the virtio video model with 3D acceleration, and boom, done. Windows support might happen this year. Vulkan support might also happen this year.
