So, I’ve been looking into configuring a system where Linux is the host OS, but when the machine boots, as far as the user is concerned, Windows is what’s running. That would mean GPU and USB passthrough, hiding the GRUB menu unless a key is pressed during boot, and somehow making the Linux desktop “remote only.” The machine would also have two IP addresses: one for the Linux host and one for the Windows guest.
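For the hidden-menu part, a minimal sketch of the stock GRUB options (the paths shown are the Fedora/GRUB2 variants; Arch uses `grub-mkconfig -o /boot/grub/grub.cfg` instead):

```shell
# /etc/default/grub -- hide the menu unless a key is pressed at boot
GRUB_TIMEOUT_STYLE=hidden   # don't draw the menu at all by default
GRUB_TIMEOUT=3              # window in which Esc (UEFI) / Shift (BIOS) reveals it
# then regenerate the config:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```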
I feel like this is very doable, but it seems like a lot of specific customizations. In terms of which distribution to use, I’d be split between Arch (small footprint and the latest updates for security) and Fedora (stable and usable without knowing much about the command line, but with more software that wouldn’t be used, and it needs a release upgrade yearly).
The point of doing this would be to allow for system snapshots and easier maintenance of machines. Cloning a Windows installation without issues is harder than just moving a snapshot of the OS. Right now, I recreate the partition table manually in a terminal, then clone each partition using dd. This works in that I never see an “Activate Windows” watermark, but moving a snapshot would be less tedious.
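That manual clone can be scripted; a hedged sketch, demonstrated here on throwaway files so nothing real gets overwritten (the same `dd`/`cmp` pair works on real `/dev/sdXN` partitions, and `sfdisk -d /dev/sdX > table.dump` followed by `sfdisk /dev/sdY < table.dump` replaces recreating the partition table by hand):

```shell
# Demo on ordinary files standing in for partitions
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 status=none  # stand-in for the source partition
dd if="$src" of="$dst" bs=1M status=none                # the actual clone step
cmp -s "$src" "$dst" && echo "byte-identical clone"
rm -f "$src" "$dst"
```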
So given all this information, my question is: Does something like this already exist? Is there a Linux distribution specifically for this or geared towards headless virtualization?
I’m actually tempted to use Fedora Server, because it has a very nice web GUI that’s enabled by default for monitoring the system.
This isn’t perfectly ideal, since a GUI for managing snapshots and checking for strange network behavior would be best, but I don’t expect to get everything I want all at once.
It’s doable, though maybe not fully headless since you’re still going to need X, but nothing is stopping you from autorunning a script at boot which fires up the VM in a dedicated X session.
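A sketch of that boot-time setup, assuming libvirt manages the VM and virt-viewer is installed; the unit name, user `vmuser`, and VM name `win10` are all placeholders:

```shell
# /etc/systemd/system/vm-session.service (hypothetical unit)
# [Unit]
# Description=Dedicated X session that launches the Windows VM fullscreen
# After=libvirtd.service
# [Service]
# User=vmuser
# ExecStart=/usr/bin/startx /usr/local/bin/vm-session.sh -- :1
# [Install]
# WantedBy=multi-user.target

# /usr/local/bin/vm-session.sh -- runs inside the new X session
virsh --connect qemu:///system start win10   # boot the guest
exec virt-viewer --full-screen win10         # show it until the guest shuts down
```

(With full GPU passthrough the guest drives its own display, so the virt-viewer step would only apply to a non-passthrough setup.)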
If you’re concerned about security I’d suggest you go with Fedora, or RHEL/CentOS if you want better stability and LTS (there are also minimal installs available), since they can easily be set up to auto-install security patches; plus SELinux confines KVM to its own domain by default, something you probably won’t be setting up in Arch.
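For the auto-install part, a minimal sketch on Fedora/RHEL using the `dnf-automatic` package (the options shown are its stock settings, not anything custom):

```shell
# Install the automatic-updates tool
sudo dnf install dnf-automatic
# In /etc/dnf/automatic.conf set:
#   upgrade_type = security   # only pull security errata
#   apply_updates = yes       # actually install, not just download
sudo systemctl enable --now dnf-automatic.timer
```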
What I’d like to see is PCIe devices showing up in a menu on the web interface. I think that’s in their goals for the coming releases though, so I’ll just have to keep an eye out.
When this happens, they’re going to completely cut unRAID out of the market.
Actually, this is from their 5.0 release notes. I’m not sure what they mean by PCI address visibility. I haven’t seen my PCIe compatible devices show up on the devices section of VM configuration yet… It’s possible this is just in preparation for full one-click.
GUI improvements
USB and Host PCI address visibility
improved bulk and filtering options
gnif, over in another thread, is doing some really awesome stuff on the AMD side of KVM to improve usability. It also appears that we’re going to get a solution to the VM audio bugs as well.
Wow, that audio thread is a miracle. HDA audio in a Windows guest has been a nightmare for ages now. Really hope he can get the proper changes pushed upstream.
Not without a second GPU. GPUs don’t do “hot swappable” in the consumer space.
I’m using Proxmox now to install a Windows VM on a remote machine (it is sweeeet), but it looks to be made specifically for managing multiple VMs across any number of hosts, rather than as a Linux distro to use on its own.
Like, you wouldn’t back out of Windows to Proxmox. You would maybe switch from a Windows VM to a Linux Container to do other stuff.
The host system would need a GPU, but it might be able to use the built-in graphics so that it is effectively invisible.
This works especially well on servers where the graphics chip on the motherboard can initialize for the host while still allowing full access to the GPUs.
Linus used unRAID for the XGamers1CPU videos, in which he used both on-board server graphics and a discrete low-powered card (seen in 2G1C).
After that it is just a matter of assigning the GPUs for PCIe passthrough in KVM. unRAID is designed to let you do this simply, but it shouldn’t be too hard to figure out using just KVM, and that may give you a cleaner host machine for greater “invisibility”.
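A minimal sketch of the plain-KVM route, binding a GPU to `vfio-pci` so the host never touches it (the vendor:device IDs `10de:128b,10de:0e0f` are placeholders; find yours with `lspci -nn`):

```shell
# Enable the IOMMU on the kernel command line (Intel shown; AMD uses amd_iommu=on)
# In /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
# Claim the GPU (and its HDMI audio function) for vfio-pci before the host driver does:
echo "options vfio-pci ids=10de:128b,10de:0e0f" | sudo tee /etc/modprobe.d/vfio.conf
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora path; Arch uses grub-mkconfig
```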
I really like this idea, but I’m not quite sure what it’s protecting a user against (ring -2 exploits maybe?). Hopefully it turns out to be a fun project
I found that I could only see USB Devices, not ports.
So with all ports empty, passing them through to the guest from the host was a pain:
plug -> lsusb -t -> unplug -> repeat. I also found that the type of device decided which “port” it got: a keyboard+mouse gave me a different bus/port than a USB drive in the same physical USB port.
Confusing as can be. I’d rather just say “passthrough all the USB ports”, but I’d settle for “Pass through the physical ones”.
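For what it’s worth, QEMU can forward a physical port rather than a specific device, and Proxmox exposes the same thing in its VM config; a sketch (the bus/port numbers and VM ID are placeholders taken from `lsusb -t` output):

```shell
# Raw QEMU: forward whatever is plugged into bus 1, physical port 2
#   -device usb-host,hostbus=1,hostport=2
# Proxmox equivalent, in /etc/pve/qemu-server/<vmid>.conf:
#   usb0: host=1-2
lsusb -t   # shows the bus/port tree so you can map physical ports
```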
So, I’ve gotten pretty far. My Windows machine is installed in my test Proxmox setup. It has network connectivity. It can see my GPU in Device Manager.
However, my GPU is a GT 710. It definitely has a VBIOS that uses EFI. Nvidia doesn’t like virtualization, and I’m getting a Code 43 in Windows.
I’m not sure what I’m doing wrong. I’ve set the CPU type to host, but I’m not sure what else I’d need to change for the GPU to work and let me install the drivers.
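Code 43 is usually Nvidia’s driver detecting the hypervisor, so the common workaround is to hide KVM from the guest; a sketch for Proxmox (the VM ID 100 is a placeholder):

```shell
# In /etc/pve/qemu-server/100.conf:
#   cpu: host,hidden=1                 # masks the KVM CPU leaf from the guest
#   args: -cpu host,kvm=off,hv_vendor_id=whatever   # vendor id string is arbitrary
# The libvirt equivalent is <kvm><hidden state='on'/></kvm> plus a custom
# Hyper-V vendor_id in the domain XML <features> section.
```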