So I want to run a Windows 11 VM on a Linux distro of some sort, and I know a few of the ways I could do it. What I need is some advice, suggestions, and most importantly, answers to my questions!
Q1. Type 1 or type 2 hypervisor approach? For context, latency is the most important thing here, along with as much performance as possible and reliable stability. This is mainly for gaming and editing; the more security-sensitive stuff stays on Linux.
Q2. I need my VM connected to the internet, partly because I want to live stream from the virtual machine itself. (Or can I perhaps capture the VM in OBS on the host machine somehow?)
Better yet, can I stop, or at least reduce, Windows spyware from the terminal?
Q3. Would a Debian- or Arch-based system work best for this approach? (Correct me if I’ve misnamed the distinction between distros here.)
Q4. Anything else I should consider here? Any useful thoughts and suggestions are welcome!
So I’m looking into this and it seems cool, but I’m wondering something. If I dedicate my 4090 to pass through to the VM, it will run games, and I can use OBS via the plugin, which is all well and good… But the question is: will the 4090’s encoder still be available to the host OS? Is that what the plugin does, or does the plugin simply make the VM visible as a capture source?
Or am I completely misunderstanding this, and this is the magical solution?
Run QEMU and virtualize the OS. I prefer to install Windows bare-metal on an SSD, then boot into Debian and hand the entire drive to the VM to optimize performance. If you want resiliency, use a qcow2 virtual drive file instead, so you can make full backups in a few minutes with the VM powered off.
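To make the qcow2 suggestion concrete, here is a minimal sketch of creating an image and backing it up with the VM off. The paths and snapshot names are just examples; `qemu-img` ships with QEMU.

```shell
# Create a 256 GiB qcow2 image (grows on demand, starts near-empty):
qemu-img create -f qcow2 /vm/win11.qcow2 256G

# With the VM powered off, a full backup is just a file copy
# (reflink makes it near-instant on btrfs/XFS):
cp --reflink=auto /vm/win11.qcow2 /backups/win11-$(date +%F).qcow2

# Or keep internal snapshots inside the image itself:
qemu-img snapshot -c pre-update /vm/win11.qcow2   # create
qemu-img snapshot -l /vm/win11.qcow2              # list
```

Internal snapshots are convenient but live inside the single image file, so a separate copy is still the safer backup.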
Uhh, can you go into that more, perhaps? I understand half of that. So with QEMU, that’s a type 1 hypervisor system, right? Like, it’s different from using KVM and VirtualBox, or am I wrong?
Also, after looking into the matter more, I’m thinking ZKVM may be the way to go, because it seems more secure and performant for my needs, which again are gaming and editing.
KVM is an acceleration mode for the application “qemu”. Libvirt is a common frontend for all sorts of VM applications.
I can’t speak to VirtualBox or ZKVM, as I’ve never used those. But libvirt with QEMU is what I use for my gaming VMs, and it seems to be the most popular.
QEMU’s performance is exactly what you would expect from the hardware; no overhead lag that I can perceive.
QEMU is officially supported by Looking Glass; mileage may vary with other hypervisor applications.
Personally, I have no use case for any hypervisor besides QEMU, gaming VM or not.
I usually just VFIO the storage controller and let the Windows installer in the VM install to it from the get-go.
Although I’ll likely soon change to block-layer passthrough; backing up my Windows VM in the current configuration is a huge pain.
Wish Windows supported btrfs, so I could just btrfs-send the entire root filesystem over the network to the mass-storage server. Works fantastically on my Linux VMs.
vfio-pci is the kernel module that is bound to a device in order to pass it through to a VM. To pass my AMD graphics card to my VM, I force the card not to load the “amdgpu” module, and instead force it to bind to vfio-pci.
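A common way to force that binding is a modprobe config. This is a sketch only: the vendor:device IDs below are placeholders, and you would look up your own card’s IDs first.

```shell
# Find the GPU's vendor:device IDs (e.g. "1002:73bf" for an AMD card):
lspci -nn | grep -iE 'vga|audio'

# /etc/modprobe.d/vfio.conf — substitute your own IDs:
#   options vfio-pci ids=1002:73bf,1002:ab28
#   softdep amdgpu pre: vfio-pci
# The softdep line makes vfio-pci win the race against the GPU driver.

# Then rebuild the initramfs so this applies at boot (Debian-family):
update-initramfs -u
```

On Arch-based systems the equivalent step is regenerating the initramfs with `mkinitcpio -P`.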
If the device obeys the PCIe standard, libvirt is smart enough to automatically bring in vfio-pci and unload the normal driver on the fly. It can also automatically bring back the normal driver and unload vfio-pci when the VM shuts down.
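That on-the-fly rebinding can also be driven by hand with virsh, which is a useful sanity check before trusting libvirt to do it automatically. The PCI address here is an example; use the one from `virsh nodedev-list --cap pci` for your device.

```shell
# Detach the device from its host driver and bind it to vfio-pci
# (libvirt does this itself for <hostdev managed='yes'> devices):
virsh nodedev-detach pci_0000_01_00_0

# ... start the VM, use the device ...

# After VM shutdown, hand the device back to its normal host driver:
virsh nodedev-reattach pci_0000_01_00_0
```

If `nodedev-reattach` hangs or the device comes back broken, that is usually the reset-bug behavior described below.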
The vast majority of (AMD/ATI) graphics cards do not obey the PCIe standard. If I had the money, I would sue them, because it says “PCIe” on the box. It’s actually quite an infuriating rabbit hole.
Wish the US had some government body to ensure companies can’t lie on the packaging about what standards things are built to.
(When was the last time the L1T forums had a good ol’ fashioned AMD reset bug rant? Was it the one I did last year?)
Nvidia is usually pretty good about this, but non-compliant cards still exist.
My NVMe supports PCIe correctly; I just force vfio-pci manually, as I never want the host to attempt to use it normally.
Actually, while you’re here: what about audio devices? I’ve got an audio interface. Would I just pass that through? I know it’s kind of good to have a second keyboard and mouse for the host.
Better yet, when I close the VM, will my audio devices return to the host OS?
I don’t use hardware sound, but for Windows VM audio, I use the audio transport built into Looking Glass.
For Linux VMs, I use PipeWire’s RTP/UDP transport.
Not sure how well that works for accelerated audio, but my software sound with a cheap USB DAC (16-bit, 48 kHz, 2-channel) with realtime scheduling works flawlessly. (Anybody know how to get Windows to do realtime?)
Effectively, the VM audio output behaves like just another application (QEMU) playing audio to the host’s sound system. There are about a hundred other ways to accomplish this. VFIO-ing a whole sound card sounds like overkill to me, but there’s nothing stopping you from doing that.
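One concrete way to get that “QEMU as just another audio application” behavior is QEMU’s audiodev backends. This is a sketch of the relevant flags only, not a full VM definition; the `pipewire` backend assumes a reasonably recent QEMU (older versions would use the `pa` PulseAudio backend instead).

```shell
# Emulated HDA sound card whose output goes to the host's PipeWire,
# where it shows up as a normal application stream:
qemu-system-x86_64 \
  -audiodev pipewire,id=snd0 \
  -device ich9-intel-hda \
  -device hda-output,audiodev=snd0 \
  # ... rest of the VM options ...
```

In libvirt, the equivalent is a `<sound>` device plus an `<audio>` element in the domain XML rather than raw flags.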
Assuming they support the PCIe standard, yes, libvirt will give control back to the host on VM shutdown.
There is an (experimental) btrfs driver for Windows, and a recent L1T thread discussing how to install/use it.
However, what’s wrong with using btrfs on the Linux host, installing the Windows VM onto a flat file for its disk, and using btrfs to manage the snapshots/backups?
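A rough sketch of that host-side approach, with example paths. The one non-obvious step is disabling copy-on-write on the image file before any data is written to it, since CoW fragments VM disk images badly.

```shell
# Create the flat disk image with CoW disabled (chattr +C must be set
# while the file is still empty):
touch /vm/win11.img
chattr +C /vm/win11.img
truncate -s 256G /vm/win11.img

# With the VM powered off, take a read-only snapshot of the subvolume
# that holds the image:
btrfs subvolume snapshot -r /vm /vm/.snapshots/win11-$(date +%F)

# Send the snapshot to a backup host over the network:
btrfs send /vm/.snapshots/win11-$(date +%F) | ssh backup btrfs receive /srv/backups
```

Subsequent backups can use `btrfs send -p <previous-snapshot>` to transfer only the changes, which is what makes this fast.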
Hey, sorry to revive an old thread, but while I understand the difference between a type 1 and type 2 hypervisor, and the benefits of each, I’m still unsure how it interacts with my monitor. So with a type 2, it comes up as an application that I can just drag around with my mouse, like a windowed game… but how does it work again for a type 1? When I launch the VM, does the host just kind of hibernate, unaffected, while the VM is running? Or does it also appear as an application-like interface?
1 September 2024
I could be wrong, but KVM VMs are type 2, with the host kernel still being the final stop before actual hardware.
Somebody please correct me if I’m wrong.
No, the host does not hibernate when you launch a VM with QEMU. The host simply loses access to the resources you’ve dedicated to the VM. You CAN allow the VM and host to share cores, memory, and even PCIe devices at once, but doing so is either a really good idea or a very bad idea, depending on the circumstances, and should be considered case by case.
My VM uses CPU cores 3–6, so no host application can utilize those cores until the VM is terminated.
It’s a big rabbit hole, deciding whether and how to optimize by allowing specific resources to be shared.
Schedutil is my enemy; FIFO pinning is the great savior.
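For anyone wanting to try the pinning described above, here is a hedged sketch using the same cores 3–6. The domain name, vCPU count, and thread ID are placeholders for your own setup.

```shell
# Pin the domain's four vCPUs to host cores 3-6, both live and in the
# persistent config:
for i in 0 1 2 3; do
  virsh vcpupin win11 "$i" "$((i + 3))" --live --config
done

# "FIFO pinning": give a QEMU vCPU thread realtime SCHED_FIFO priority so
# ordinary host tasks (and schedutil's frequency dithering) can't preempt it.
# Find the vCPU thread IDs with `virsh qemu-monitor-command` or `ps -eLf`:
chrt --fifo -p 1 <qemu-vcpu-tid>
```

In libvirt this can also be expressed declaratively with `<cputune>`/`<vcpupin>` and `<vcpusched scheduler='fifo'>` in the domain XML, which survives VM restarts.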
NUMA is a headache.
QEMU is the virtualization emulator. KVM is a kernel module that turns the Linux kernel into a hypervisor. They’re two separate components that can be used together. I have used VirtualBox on my Windows machine at work, and KVM is a much more performant hypervisor.
KVM converts the Linux kernel into a type 1 (bare-metal) hypervisor. This is in contrast to VMware and VirtualBox, which are both type 2 hypervisors. Microsoft Hyper-V is also a type 1 hypervisor, though not a very good one, to my understanding.
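A quick way to verify that the KVM side of this is actually available on a given machine (commands are standard, but outputs obviously depend on your hardware):

```shell
# >0 means the CPU advertises VT-x (vmx) or AMD-V (svm):
grep -E -c '(vmx|svm)' /proc/cpuinfo

# The kvm module is loaded and usable if this device node exists:
ls -l /dev/kvm

# QEMU only gets its near-native performance with KVM acceleration enabled:
qemu-system-x86_64 -accel kvm -m 4G ...
```

If `/dev/kvm` is missing, virtualization extensions are usually just disabled in the firmware setup.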