Increasing VM Performance: Attempting to Achieve Workstation Responsiveness

Disclaimer: I’m pretty entry-level at virtualization, but I’ll try my best to understand.

Increasing VM Performance

I’m looking for sources on how to set up VMs so that I can mimic a workstation experience. I’m told it’s possible. My attempts in the past have left me with VMs that are too laggy to realistically get any work done.

I’ve Googled and found some results, but they were super confusing. I need a source that covers the process start to finish, hopefully in terms a newb can understand.

My Setup

Proxmox on a Dell R710. I’ve virtualized FreeNAS with hardware pass-through drives. I also have a Pi-hole server and some Fedora servers.

I really want to set up a virtualized Windows 10 so that my mom can remote in and run Windows-only programs (she has a Mac). But if the VM is suuuper laggy, then she won’t use it.

Thanks, Ethernet_Warrior


List of Resources for Future Readers
https://pve.proxmox.com/wiki/Windows_10_guest_best_practices
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.173-2/
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

Well, without knowing what virtualization software you’re using, I think I’d start by disabling all the services you don’t use in the guest VM.

Windows 10 is bursting with useless and undesirable services, so there are a lot of gains to be had just from that. Maybe the lion’s share.

I’m running Proxmox as a hypervisor.

I’m not experienced with that, specifically. Hopefully someone else on this forum knows the best way to run that within the host.

Look for ways to run it with the right privileges: minimal rights on the host, and efficient access to hardware that doesn’t introduce additional pass-through risks. I’ve had loads of spooky problems with that using other hypervisors.

Also, this is probably obvious, but also disable features in the Windows 10 UI using its awful interface, and script that if you think it’s worth it.

Proxmox - Debian, KVM/QEMU, and a fancy UI.


The closest that you’ll get to native is running VirtIO (paravirtualized) drivers wherever possible. They let the guest know that it’s not running on bare metal and give it a much faster path to talk to the host, since the hypervisor no longer has to lie about being real hardware and do a ton of translation. A great example of this is networking, which normally supports VirtIO without issues. The exception is passing through an Nvidia GPU: many of the “fixes” needed to make that work require turning the lying back on and losing performance. Otherwise, VirtIO all the way.
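To make that concrete, here’s roughly what a VirtIO-enabled Windows VM looks like in a Proxmox VM config. This is a sketch: the VM ID 100 and the storage name `local-lvm` are placeholders, and the Windows guest needs the virtio-win driver ISO (linked in the resources at the top of the thread) installed before it can see the VirtIO disk and NIC at all.

```
# /etc/pve/qemu-server/100.conf (VM ID and storage names are hypothetical)
# VirtIO SCSI controller for the disk, VirtIO for the NIC
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-0,discard=on,size=64G
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```

During Windows installation you load the VirtIO storage driver from the attached virtio-win ISO, otherwise the installer won’t find the disk.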

EDIT: Here’s a great page on how to get a GPU passed through to a guest on Arch Linux. It’s focused on gaming, so there are a lot of performance tips in it. Of course, make sure you check the Proxmox docs for how to apply what you read, but generally it’s a great source of info; your interface is just different on Proxmox. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
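For reference, the host-side prep that page walks through translates to Proxmox roughly like this (an Intel CPU is assumed here; on AMD the kernel flag is `amd_iommu=on`). This is only a sketch of the usual first steps, not a complete passthrough guide:

```
# /etc/default/grub on the Proxmox host -- enable the IOMMU (Intel assumed)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
```

After that, run `update-grub`, reboot, and check `dmesg | grep -e DMAR -e IOMMU` to confirm the IOMMU actually came up before going any further.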


Remote access: that’s your bottleneck. Bare metal or virtual is going to feel the same through RDP or SPICE.


Thanks for the feedback.

So far the consensus is I can’t achieve bare metal.

What about thin clients? I’m given to understand the host being displayed on the client is usually a VM. From my experience using one, it’s pretty smooth, although it wasn’t able to support video acceleration. That doesn’t seem much different from remote.

I think you may have baited some folks with the topic title. There’s a group of people expecting bare metal performance out of their VMs while gaming, and there’s another group who care about storage/network performance.

Thin clients are basically RDP, or, ages ago in another universe, X11 terminals. Of course it’s not the same thing as bare metal in functionality, let alone in how responsive a desktop feels, or for any kind of light gaming or high-res YouTube.

If all that’s needed is the ability to run Office or other domain-specific, non-graphically-intensive software, SPICE/RDP is probably fine and you probably don’t need to bother with any kind of passthrough.
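If SPICE is the route, Proxmox makes it fairly painless. As a sketch (VM ID 100 is hypothetical), switching the VM’s display adapter to the SPICE-friendly QXL device is a one-line change; the Windows guest then needs the SPICE guest tools and QXL driver installed to get a smooth desktop:

```
# /etc/pve/qemu-server/100.conf -- hypothetical VM ID
vga: qxl
```

The CLI equivalent is `qm set 100 --vga qxl`, and the Proxmox web UI can then launch a SPICE session directly via virt-viewer.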

If you want more, it gets tricky. (How’s Looking Glass looking today when it comes to remote stuff?)


Noted, I’ll change the title.

Personally, I have an old HP Z600 running Ubuntu 19.10 with VirtualBox; I run VMs on it and connect to them using AnyDesk. I’m not sure that would work for you, but what it does for me is this: I can run Windows 10, Ubuntu, and Mac 10.14 all at once (no acceleration) and connect to any of them from anywhere using my phone as a hotspot. If I’m on the local network then it’s quite fast - I can watch videos without a problem, though of course there is some latency between clicks and action.

That said, it’s super nice for poor old me, as I have a shitty Atom Z8350 laptop that cost me less than $200, but with it I can connect to any of those VMs and run Windows or even Mac OSX, with a touch screen and backed by 8 cores/16 gigs, pretty well. It’s fantastic for doing work at the cafe without needing to bring anything with any real power (read: cost & weight) with me. Not sure that’s what you’re looking for, but as I said, it works really well for me.
P.S. - something I had thought about doing was to make an Ubuntu or Arch build with ONLY AnyDesk on it, which opens on boot (include the bare minimum to run AD), so that when I turn on one of these thin clients I can just immediately connect to one of those VMs, since it works well enough for me that way. I haven’t done it yet but am thinking about trying it out. I’m sure someone will tell me why it’s a terrible idea, but I may just try it anyway…


What do you use for storage? I’m thinking my problem might stem from poor disk I/O.

I’ve got a pair of super cheap 120GB Kingston SSDs, with my VMs saved on a separate pair of 480GB SSDs from some no-name brand, which are a few years old now. The pair of 120s are in RAID 0 running Linux, and the 480s are in RAID 0 as well, running Windows 10. I save the VBox disks on the Windows drive array since it has more room, my thought also being that PERHAPS being on a different set of disks might give them a little more leeway in terms of I/O and therefore performance - but frankly I’ve never tested it, and I doubt there’d be any difference if I had the VMs on the Kingston array anyway.

All four are going through a PCIe 2.0 x4 SATA card with a RAID controller - not a good one with a battery, I’m talking a $20-from-Newegg thing with some cheap 4-port Marvell SATA chip. I’m simply using it to get closer to full SATA 3 speeds, since the HP Z600 is so old it only has SATA 2 ports on its mobo. Honestly, if you’re using ANY SSDs for the VM disk, then I would guess that isn’t your issue, though I suppose it could be.

Somewhere I downloaded a QEMU/KVM guest additions disk with drivers on it to use with guest VMs, and my understanding is that if you’re running any Windows guest you should use those - but honestly, since I use VBox anyway with its own guest additions, I don’t know it from experience, just something I saw once. Are you using that? Might be worth looking into. Sorry I don’t have a ton of QEMU/KVM experience; I got fed up with error 43 on a shit Nvidia card, gave up on it, and went back to VBox (where I also couldn’t do passthrough, but was more familiar).
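If you want to rule disk I/O in or out, a quick sequential write test inside the guest (or against the datastore on the host) gives a rough baseline. A hedged sketch: `dd` only measures sequential throughput, not the random I/O that makes a desktop feel snappy (use `fio` for that), and the file name here is arbitrary.

```shell
# Rough sequential write test: write 256 MiB and force it to disk.
# conv=fdatasync makes dd flush before reporting, so the speed reflects
# the actual device rather than the page cache.
dd if=/dev/zero of=dd-testfile bs=1M count=256 conv=fdatasync

# Clean up the test file afterwards.
rm -f dd-testfile
```

Run it a couple of times and compare against the SSD’s rated speed; single-digit MB/s here would point at the storage stack rather than the VM config.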
