Project Idea: Gaming on VMs using the new Haswell-E processors

I've been lurking around on the forums for a while now and recently created an account to participate in the X99 motherboard raffle. I was wondering what kind of system I would build with it if I actually won. What the hell would I do with all those cores? Then I stumbled upon the HYDRA multi-headed virtual computer project.

I've been looking into virtualisation but haven't had the opportunity to game on a VM like this guy. I'm still rocking a Core 2 Duo, and it lacks all of the fancy virtualisation features required to pass through hardware.
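For what it's worth, checking whether a CPU has the required flags takes one grep: Intel exposes `vmx` and AMD exposes `svm` in /proc/cpuinfo. A minimal sketch (the helper function name is my own):

```shell
# Look for hardware virtualization flags in the CPU feature list.
# Intel advertises "vmx" (VT-x), AMD advertises "svm" (AMD-V).
has_hw_virt() {
  grep -qE '\b(vmx|svm)\b' "${1:-/proc/cpuinfo}"
}

if has_hw_virt; then
  echo "hardware virtualization supported"
else
  echo "no vmx/svm flag: no VT-x/AMD-V on this CPU"
fi
```

Even with VT-x, GPU passthrough additionally needs VT-d (an IOMMU), which depends on the CPU, chipset and BIOS, so a positive grep is necessary but not sufficient.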

There's also a Steam discussion where the HYDRA guy explains how he can run multiple instances of SteamOS, each with its own dedicated graphics card. It's basically a 5-man LAN party in a box. I really like the concept of turning a beefy SLI/Crossfire machine into a gaming mainframe.

I have some old video cards and peripherals lying around, so maybe I could build a proof of concept, but I think you guys have access to the resources to take this to the next level. It would also be an opportunity to explain how type 1 hypervisors work and the crazy things you can do with virtualisation.

(OFF TOPIC) Have you considered the LGA771-to-LGA775 mod? The X5470 is around $50: two extra cores, 12 MB of cache, and I'm sure it could virtualise significantly better than a regular Core 2 Duo.

... Wow, just read up on the HYDRA. Awesome stuff, hope someone can discuss this topic some more... off to go read more.

The 680i motherboard will probably still cause problems even with a CPU upgrade. I couldn't even get Hyper-V to work on it. I don't feel like investing in such an old system and would rather save up for something completely new.

Well, there are easier ways to accomplish this. I have done similar things (but for work) on VMware vSphere.

I have been running many servers on this, but there is no problem in running one strong VM, or a few of them, from one machine. It can also run on normal hardware, but I recommend having a NAS unit on a 10GbE or at least 1GbE link.

This has been discussed extensively on the forum over the last year or so.

Virtualization is an answer to many problems.

I missed that article, the title isn't very clear. I haven't had the chance to read through everything but it seems you know your stuff. I've been looking into running a lightweight Linux distro as a host for virtualisation but it's a bit intimidating. I didn't know about vSphere before I saw the HYDRA project and it seems more intuitive to set up. It's also backed by a huge commercial company so the support should be more mature.

Have you used vSphere and ESXi before and what's your experience with it?

It would be nice to have a video that covers all the information about virtualisation on the forums. Maybe you can help out with the writing/production? I also think it's a great opportunity to show the real power of all the upcoming enthusiast products and spark some ideas for new builds.

Total CPU Resources: 140 GHz. Nice! But what about GPUs?

Can you tell something about the hardware being used? What's running on these machines?

It's using IBM BladeCenter chassis filled with blade servers (10GbE, connected to the NAS and to vCenter/vSphere). They do not have dedicated GPUs, but there would be no problem dropping in a dedicated GPU server and doing the same thing.

Each BladeCenter chassis holds 14 blade servers; each has around 16 GB of RAM and 2x quad-core Xeons, with no HDDs in them.

There are numerous web servers on this one; I currently have 5 of those (2 web, 2 SQL, 1 custom). What's also great about it: say one of the blades crashes - there is practically no downtime (5-10 seconds) even if one of the VMs was using that hardware.

This works better and faster than real hardware, mostly because it uses the NAS as its storage. To fully utilise it, there is an awesome deduplication tool from VMware that keeps only one copy of identical files across multiple VMs (meaning if you install 50 Windows Server VMs, it will only keep one copy of the shared files). We have another dedicated NAS for backups, though.

Beyond that, to get a proper cloud you can buy and use InfiniBand, but in reality it's not stable... :( This setup is better.

Shit just got real.

tl;dr: Nvidia has partnered with VMware to create virtual GPU support, and with Google to offer a thin client platform.

You'll pardon my lack of enthusiasm. VMware is the most closed-source, least reliable (PSOD, aka "Purple Screen of Death"), worst-performing virtualization solution, and it is popular only because it delivers a great deal of closed-source, expensive Windows administration software. And nVidia is about the worst company ever when it comes to hardware passthrough. There are almost never any problems with hardware passthrough on ANY AMD hardware (CPUs, chipsets, mobos, GP-GPUs) - AMD has even turned the address translation, memory pool, and access functionality of the major open source hardware virtualization standards into its own standard (e.g. no more X-Fire bridge necessary; the cards communicate over the system bus just like in hardware passthrough virtualization applications) - whereas there has been nothing BUT problems with nVidia cards and hardware passthrough virtualization.

In fact, nVidia had already made a deal with RedHat in the past to leverage virtualization technology to bring them closer to what AMD and Intel can do with GP-GPU support. And when RedHat delivered, nVidia blocked the entire technology as much as they could, and demanded the integration of proprietary code into open source software to enable GP-GPU functionality on nVidia cards.

So I'm not buying that article; I believe it's just another misrepresentation of the most evil kind. AnandTech is also very pro-Intel/nVidia/MS-Windows; they are very commercial, not very objective, and have no idea of the latest (open source) technologies - they just publish benchmarks that misrepresent reality without even twitching, lolz. They're even worse to a certain extent than typical commercial divas like Linus Tech Tips or Barnacules. In fact, it's surprising that Newegg TV, which is a department of a commercial retailer, is often more objective (even if often between the lines) than AnandTech, Linus Media Group, Barnacules, etc., which are basically just animated ad pages.

We have a video coming up where we actually do 'remote gaming' with 'gpu virtualization' using the bleedingest edge stuff possible.

Skyrim was barely playable at 720p. Barely. Under ideal circumstances. The real world of shitty US internet means no. Perhaps if everyone had Google Fiber, 720p gaming might be okay, but no.

In an ideal world it will be close to as good as something like the Nvidia Shield. But the faster you want it to go, the less the client knows about what's going on with the graphics: either the client has to send more of the game to the server, or the game has to run on the server and 'stream' to the client.

I agree with zoltan that it's all fluff.


BUT. Running a Windows VM on Linux and passing the good graphics card through to Windows for gaming? That's almost ready for the masses. It's pretty good once you get it set up. Impressive, I'd say; actually, some games run better in a VM than they do on native hardware.
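For anyone wondering what "pass through the good graphics card" looks like in practice with libvirt/kvm, it comes down to one domain XML fragment. A sketch, assuming the discrete GPU sits at host PCI address 01:00.0 (yours will differ; check with lspci):

```xml
<!-- Hand the discrete GPU at host address 01:00.0 to the guest via VFIO.
     The PCI address is an assumption; find yours with lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

managed='yes' tells libvirt to detach the device from its host driver and rebind it to vfio-pci automatically; the host also needs the IOMMU enabled (intel_iommu=on or amd_iommu=on on the kernel command line).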

More on that later...


We ran a test to make a point at Gamescom, on a laptop with an Intel mobile 4702QM, an AMD dedicated mobile graphics card, and the Intel HD 4600 on the chip. CPU temps when playing CS:GO in a Windows kvm container on openSUSE with PCI passthrough: max 54°C. Temps when playing CS:GO on bare metal Windows: max 60°C. That's a 6°C difference!

Differences in fps shown in the net graph in CS:GO were inconclusive - they were pretty much identical, and there sure was no noticeable difference in performance. However, using a Zowie EC2 (which can switch USB polling frequency without software support; it was set to 500 Hz, or 2 ms, whatever you want to call it) and a PS/2 keyboard, which guarantees an identical input load on both Linux and Windows, an MLG player was asked to tell on which system (without knowing which was which) the input lag was lowest, and he chose the kvm system. That's surprising, because the X server sits between the mouse and the kvm container, whereas on bare metal Windows it does not. In both cases, CS:GO was set to raw input, no acceleration, just over 900 dpi (measured) into a sensitivity of just over 2.5 and a zoom factor of 0.8. To be honest though, I didn't notice any difference in input lag between the two systems, and I don't even know if there is a way to really measure it reliably.

When the occasion presents itself, I want to do a similar comparison on more mainstream desktop hardware and make a screen capture of it, but it'll take a while, because installing Windows with all updates on bare metal is a very time-consuming process and I don't have any bare metal Windows installs at the moment.

Getting rid of Windows for competitive gaming would not be possible because of the way money works, but running it in a kvm container on Linux instead of on bare metal would bring a couple of benefits. Not least: better control over network security and system integrity, especially after a Russian team loses... (come on Valve, you should have seen that one coming...). It would also mean a huge reduction in troubleshooting time when Windows breaks again - you can restore a snapshot of a kvm container in literally a few seconds, which solves any and all problems without even having to examine what the problem is - so there would be fewer interruptions of the competition because of Windows problems (which is a real problem).
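The few-seconds restore can be sketched with libvirt's snapshot commands; the domain name win-csgo and snapshot name clean are my own, just for illustration:

```shell
# Once, while the guest is in a known-good state:
virsh snapshot-create-as win-csgo clean "known-good tournament image"

# When Windows breaks mid-event: roll back instead of troubleshooting.
virsh snapshot-revert win-csgo clean
```

With qcow2 disks these are internal snapshots, and a snapshot taken while the guest is running includes RAM state, so the revert brings the whole machine back, not just the disk.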

I'm curious about the remote gaming video, even though I'm more interested in having a local multi-headed setup. Imagine a Steam Machine that runs 4 SteamOS instances and multiplexes them onto a 4K television for local co-op. When you're alone, you can use your excess video cards in SLI/Crossfire. Here's hoping the new X99 chipset brings some better scaling for multi-card setups.

A Windows guest on a Linux host is also something I'm looking into. Web development on Windows is a bitch but I still have games that are stuck there. Does something like a lightweight Windows VM for gaming exist without having to tweak it myself?

Do you have any write-ups of these experiments, a blog maybe I can follow?

The last paragraph resonates with me because that's how I want to set up my web development environments. It would be cool to seed machines between matches with something like Vagrant and set up a local proxy that caches all the downloads. You could store player preferences separately and just have them input their Steam credentials to launch their personalized machine. I think Windows licensing will be an issue though, maybe SteamOS will make this feasible when it's more mature.
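The seeding idea maps fairly directly onto a multi-machine Vagrantfile; a sketch, where the box name and the provisioning step are placeholders of my own:

```ruby
# Vagrantfile sketch: one VM definition per player seat, rebuilt between
# matches. The box name and inline script are assumptions.
Vagrant.configure("2") do |config|
  (1..4).each do |seat|
    config.vm.define "seat#{seat}" do |node|
      node.vm.box = "ubuntu/trusty64"
      node.vm.provision "shell", inline: "echo provisioning seat #{seat}"
    end
  end
end
```

`vagrant destroy -f && vagrant up` between matches then gives every seat a clean machine, and the caching proxy would sit in front of the box and package downloads.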

I'm probably biased towards VMware because that's what my former employer used, and it seems the most prevalent in the business world.

At the moment I'm just looking into virtualization to mess around with at home. I'm a software developer and my experience is limited. I'm also interested in learning something that I can apply in a professional environment.

Which hypervisor would you recommend for a beginner? I do have some basic Linux skills from working with web servers.

I don't think that gaming on a VM without the AMD HSA memory model is a good idea.


No, it was just to prove a point.

I run everything in kvm containers - not only Windows, but also Linux when it has proprietary code in it. It's a habit. I have a company that employs web developers, and to be honest, setting up a dedicated appliance is so easy and fast that I've never even thought about automating it. Right now, the appliance overlays are on the NAS, and everyone pulls them locally as they need them, which is rarely the case. In general, most appliances run on a dedicated virtualization server.
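Those appliance overlays can be sketched with qcow2 backing files, which give much the same "one copy of the shared bits" effect described earlier for VMware: each appliance disk stores only its deltas on top of a shared base image (file names are my own):

```shell
# One base image holds the common install.
qemu-img create -f qcow2 base-appliance.qcow2 20G

# Each appliance is a thin overlay on the shared base; only its deltas
# are written to it, so the overlays stay small enough to pull from a NAS.
qemu-img create -f qcow2 -b base-appliance.qcow2 dev-appliance.qcow2
```

The overlay inherits the base image's virtual size, and many overlays can share one read-only base.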