So, I’ve been looking into configuring a system where Linux is the host OS, but when the system boots, as far as the user is concerned, Windows is what’s running. This would mean GPU and USB passthrough, hiding the GRUB menu unless a key is pressed during boot, and somehow giving Linux a “remote only” desktop, as well as the machine having two IP addresses: one for the Linux host and one for the Windows guest.
I feel like this is very doable, but it seems like a lot of specific customization. In terms of which distribution to use, I’d be split between Arch (small footprint and the latest updates for security) and Fedora (stable and usable without knowing much about the command line, but with more software that wouldn’t be used, and it would need to be upgraded yearly).
The point of doing this would be to allow for system snapshots and ease of maintenance on machines. Cloning a Windows installation without issues is harder than just moving a snapshot of the OS. Right now, I recreate the partition table manually in a terminal, then clone each partition using `dd`. This works, in that I never see an “Activate Windows” watermark, but moving a snapshot would be less tedious.
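For reference, the manual clone described above looks roughly like this (a sketch; `/dev/sda` as the source disk and `/dev/sdb` as the target are assumptions, and both disks must be unmounted):

```shell
# Copy the partition table from the source disk to the target
# (an sfdisk dump handles both MBR and GPT layouts).
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Clone each partition block-for-block; repeat once per partition.
dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress conv=fsync
dd if=/dev/sda2 of=/dev/sdb2 bs=4M status=progress conv=fsync
```

Because the partition GUIDs and contents come across byte-for-byte, Windows activation generally survives the move.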
So given all this information, my question is: Does something like this already exist? Is there a Linux distribution specifically for this or geared towards headless virtualization?
I’m actually tempted to use Fedora Server because it has a very nice GUI that’s set up by default for monitoring the system.
This isn’t perfectly ideal, as a GUI for managing snapshots and checking strange network behavior would be best, but I don’t expect to get everything I want all at once.
It’s doable, maybe not fully headless since you’re still going to need X, but nothing is stopping you from autorunning a script at boot that fires up the VM in a dedicated X session.
If you’re concerned about security, I’d suggest you go with Fedora, or RHEL/CentOS if you want better stability and LTS support (there are also minimal installs available), since they can easily be set up to auto-install security patches. Plus, SELinux confines KVM to its own domain by default, something you prolly won’t be setting up in Arch.
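On Fedora/RHEL, the auto-install of security patches mentioned above is usually handled by `dnf-automatic`. A sketch (the sed edits assume the stock config file layout):

```shell
sudo dnf install -y dnf-automatic

# Limit automatic runs to security updates, and actually apply them
# instead of just downloading.
sudo sed -i 's/^upgrade_type.*/upgrade_type = security/' /etc/dnf/automatic.conf
sudo sed -i 's/^apply_updates.*/apply_updates = yes/' /etc/dnf/automatic.conf

# Enable the systemd timer that triggers the periodic run.
sudo systemctl enable --now dnf-automatic.timer
```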
This is why GPU passthrough would be used. Then I wouldn’t need X at all unless I’m administering the system.
Brainfart. Yeah, true. Sounds like a headless host would be pretty simple to set up then; all you’d need is to autorun a script at boot.
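With libvirt/KVM, that autorun is basically one flag rather than a custom script (assuming the guest is a libvirt domain; the name `win10` is a placeholder):

```shell
# Have libvirtd start the guest automatically when the host boots.
virsh autostart win10

# Verify: the domain info should report "Autostart: enable".
virsh dominfo win10 | grep Autostart
```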
On that note, here’s 600 pages on the subject. Beats arch wiki imo lol.
Proxmox will do exactly what you are looking for:
If that’s not the most perfect thing. Thanks.
It’s definitely not the most perfect, but it’s nearly there.
What I’d like to see is PCIe devices showing up in a menu on the web interface. I think that’s in their goals for the coming releases though, so I’ll just have to keep an eye out.
When this happens, they’re going to completely cut unRAID out of the market.
Yes, between this and Fedora having “one button passthrough” on their roadmap, I’m excited for the future of passthrough on Linux.
Actually, this is from their 5.0 release notes. I’m not sure what they mean by PCI address visibility. I haven’t seen my PCIe compatible devices show up on the devices section of VM configuration yet… It’s possible this is just in preparation for full one-click.
- GUI improvements
- USB and Host PCI address visibility
- improved bulk and filtering options
gnif, over in another thread, is doing some really awesome stuff on the AMD side of KVM to improve usability. It also appears that we’re going to get a solution to the VM audio bugs as well.
Wow, that audio thread is a miracle. HDA audio in a Windows guest has been a nightmare for ages now. Really hope he can get the proper changes pushed upstream.
Looks like his fix, while working, doesn’t match quality standards, so it’s probably going to need some work to get it to 100%.
It will get there eventually, but until then, it’s child’s play to make a package. I’m thinking about doing a COPR package for this fork.
Would this allow me to perform hardware passthrough on a single GPU by running Linux headless?
If so, is there a way where I can simply turn off the VM and go back to using Linux regularly?
And would this work also on an iGPU from Intel or AMD and not just a dGPU?
So many questions now.
Not without a second GPU. GPUs don’t do “hot swappable” in the consumer space.
I’m using Proxmox now to install a Windows VM on a remote machine (it is sweeeet), but it looks to be made specifically for managing multiple VMs across any number of hosts, rather than as a Linux distro to use on its own.
Like, you wouldn’t back out of Windows to Proxmox. You would maybe switch from a Windows VM to a Linux Container to do other stuff.
Oh…but does Proxmox itself run headless (Without a GPU) while the GPU can go to the Virtual Machine?
Installation is the following:
- Get ISO
- Put on USB/CD
- Put media in machine to be installed on.
- Boot to said ISO.
- Set password, and IP address.
- Set host file system type.
- Click Install.
Then you just go to the IP you set, and you can manage the machine from there. From the host’s perspective, you only get a terminal.
The host system would need a GPU; however, it might be able to use the built-in GPU so that it is effectively invisible.
This works especially well on servers where the graphics chip on the motherboard can initialize for the host while still allowing full access to the GPUs.
Linus used unRAID for the XGamers1CPU videos, in which he used both on-board server graphics and a discrete low-powered card (seen in 2G1C).
After that, it is just a matter of assigning the GPUs for PCIe passthrough in KVM. unRAID is designed to let you do this simply, but it shouldn’t be too hard to figure out using just KVM, and that may allow you a cleaner host machine for greater “invisibility”.
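Assigning a GPU for passthrough with plain KVM usually means binding it to `vfio-pci` at boot so the host driver never claims it. A sketch (the IDs `10de:128b,10de:0e0f` are placeholders; substitute the vendor:device pairs that `lspci -nn` reports for your card and its audio function):

```shell
# Find the GPU and its HDMI audio function, with vendor:device IDs.
lspci -nn | grep -i nvidia

# Tell vfio-pci to claim those IDs before the host driver loads.
echo "options vfio-pci ids=10de:128b,10de:0e0f" | \
    sudo tee /etc/modprobe.d/vfio.conf

# Load vfio-pci early, then rebuild the initramfs.
echo "vfio-pci" | sudo tee /etc/modules-load.d/vfio-pci.conf
sudo dracut -f   # or update-initramfs -u on Debian-based hosts

# IOMMU must also be enabled (intel_iommu=on or amd_iommu=on on the
# kernel command line), or the device can't be handed to a guest.
```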
I really like this idea, but I’m not quite sure what it’s protecting a user against (ring -2 exploits maybe?). Hopefully it turns out to be a fun project.
Proxmox does not require a GPU. After install you connect to the host IP, graphics are irrelevant.
What about USB ports?
I found that I could only see USB devices, not ports.
So if all ports are empty, passing them through to the guest from the host was a pain:
lsusb -t -> unplug -> repeat. I also found that the type of device decided which “port” it got: a keyboard + mouse gave me a different bus/port than a USB drive in the same physical USB port.
Confusing as can be. I’d rather just say “pass through all the USB ports”, but I’d settle for “pass through the physical ones”.
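For what it’s worth, Proxmox can pass through a USB *port* (rather than a specific device) from the CLI, using the bus-port path that `lsusb -t` reports; the web UI mostly surfaces devices. A sketch (VM ID `100` and port `1-2` are placeholders):

```shell
# Map physical bus/port numbers to whatever is plugged in right now.
lsusb -t

# Pass whatever is (or later gets) plugged into bus 1, port 2
# through to VM 100.
qm set 100 -usb0 host=1-2
```

Because the mapping is by port, you can then swap devices in that physical socket without reconfiguring the VM.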
So, I’ve gotten pretty far. My Windows machine is installed in my test Proxmox setup. It has network connectivity. It can see my GPU in Device Manager.
However, my GPU is a GT 710. It definitely has a VBIOS that uses EFI. Nvidia doesn’t like virtualization, and I’m getting a Code 43 in Windows.
I’m not sure what I’m doing wrong. I’ve set the CPU to `host`, so I’m not sure what else I’d need to set to make the GPU behave and let me install the drivers.
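The usual cause of Code 43 on consumer NVIDIA cards is the driver detecting that it’s running under a hypervisor. Hiding KVM from the guest often clears it; in Proxmox that’s a one-line VM config change (a sketch, assuming VM ID `100`):

```shell
# Hide the KVM hypervisor signature from the guest so the NVIDIA
# driver doesn't bail out with Code 43.
qm set 100 -cpu host,hidden=1

# On libvirt, the equivalent is <kvm><hidden state='on'/></kvm>
# plus a custom <vendor_id> under <hyperv> in the domain XML.
```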