The real question is: do you have two GPUs, and a system that supports VT-d/AMD-Vi?
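A quick way to check both, assuming a reasonably recent kernel (the exact dmesg wording varies between Intel and AMD systems):

```shell
# Does the CPU advertise hardware virtualization?
# (vmx = Intel VT-x, svm = AMD-V)
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

# Did the kernel actually find an IOMMU (VT-d / AMD-Vi)?
# No output here usually means it's disabled in the BIOS/UEFI.
dmesg | grep -Ei 'dmar|iommu|amd-vi'
```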
The issue with running Windows in a kvm guest for gaming is that you need to dedicate a gaming-grade GPU to the Windows guest in order to get decent gaming performance.
There is a thread on the forum that explains how all of this works. If you have the right hardware, you can easily do a PCI passthrough. This lets your Windows guest access the gaming GPU directly, at the hardware level. Windows can even run faster in a kvm guest than on bare metal, especially when you don't need Windows for anything other than gaming and can strip it down to the bare minimum, blocking all of the bloatware services and background crap.
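As a sketch of what "dedicating" the GPU looks like in practice: you enable the IOMMU on the kernel command line and tell the vfio-pci driver to claim the gaming card at boot, so the host never touches it. The PCI IDs below are examples — substitute the vendor:device pairs of your own GPU and its HDMI audio function, as reported by `lspci -nn`:

```shell
# /etc/default/grub (Intel host; use amd_iommu=on on an AMD host)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio-pci.ids=10de:13c2,10de:0fbb"

# Then regenerate the grub config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
```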
Another big advantage is that you no longer need to waste perfectly good modern storage on Microsoft's filesystem failings: all of the storage space the Windows guest needs lives in an overlay file on the linux filesystem of your choice. The virtualizer is smart enough to omit the many pointless zeroes that Windows still writes (for a reason that expired 20 years ago), so you save a lot of space. The storage is dynamically allocated and can easily be snapshotted, which enormously reduces the downtime of your Windows system — and downtime is one of the biggest problems of any Microsoft product compared to open-source alternatives.
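For instance, with a qcow2 overlay (the image name and path here are just examples):

```shell
# Create a 120 GB overlay for the Windows guest; qcow2 is sparse,
# so the file only grows as Windows actually writes data:
qemu-img create -f qcow2 /var/lib/libvirt/images/win-gaming.qcow2 120G

# Snapshot before a risky Windows update, revert in seconds if it breaks:
qemu-img snapshot -c pre-update /var/lib/libvirt/images/win-gaming.qcow2
qemu-img snapshot -l /var/lib/libvirt/images/win-gaming.qcow2             # list
qemu-img snapshot -a pre-update /var/lib/libvirt/images/win-gaming.qcow2  # roll back
```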
The performance of Windows games running in a kvm guest is generally better than on a bare-metal Windows install, but there are a few exceptions, especially on systems with little available RAM. You need at least 6-8 GB of RAM to play games in a Windows kvm guest without losing performance in some titles. There are barely any 64-bit games for Windows, so there are barely any games that can address more than ~3.6 GB of RAM, but some game-related applications also want a lot of memory. The Steam client, for instance, embeds an old browser engine, and depending on how many Steam "features" you use it may claim quite a big chunk of RAM; the same goes for the dreadful "experience" software that comes with modern GPUs, and for the proprietary drivers with "advanced functionality" for gaming peripherals. On top of the Windows requirements, count about 2 GB of RAM for a general-use linux host system — more if you're going to do serious work on your linux box: going online in an lxc container for security and privacy reasons, development, CAD/CAM, non-destructive RAW photo editing, and so on.
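As a back-of-the-envelope budget using the rough figures above (all numbers are illustrative estimates, in MB):

```shell
host=2048     # general-use linux host system
game=3686     # ~3.6 GB, the 32-bit game addressing ceiling
steam=1024    # Steam client and overlays (rough guess)
extras=512    # GPU "experience" software, peripheral drivers (rough guess)
total=$((host + game + steam + extras))
echo "suggested minimum: ${total} MB (~$((total / 1024)) GB)"
```

which lands right in the 6-8 GB range mentioned above.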
Generally, on AMD systems, hardware passthrough just works. AMD systems generally run linux very well, with some notable exceptions of particular Asus motherboards that fail to fully implement the AMD specification. On Intel systems you have to be very careful, because there is a lot of hardware out there with broken or missing VT-d support. Even if your CPU supports VT-d, the motherboard may not, as Intel — contrary to AMD — does not require mobo OEMs to implement all of the chipset features. The general rule of hardware is: the more units sold of an item, the higher the likelihood that it will work. If you buy an Asus ROG board, which is a niche product, you're more likely to run into problems and less likely to get BIOS updates than with a generic business-oriented mobo from the same company. Cheaper products also sell far more units than expensive ones, so don't buy expensive: the chances of it being well supported in terms of BIOS updates and compatibility are actually smaller. Especially for kvm with PCI passthrough, which is mostly used in enterprise environments (usually to pass through dedicated NICs), it's a safe choice to go for a business-oriented board rather than a gaming-oriented one. Most gaming boards only add performance value in SLI/CrossFire setups anyway; the business-chipset boards usually offer better single-GPU performance across the board.
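One concrete way to see whether a given board's IOMMU support is real, and not just a BIOS checkbox, is to list the IOMMU groups after booting with the IOMMU enabled. Devices in the same group can only be passed through together, so on a good board the GPU sits in a small, clean group:

```shell
# Empty /sys/kernel/iommu_groups means the IOMMU isn't actually working.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"    # e.g. 01:00.0 VGA ... [10de:13c2]
    done
done
```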
An easy way to configure a PCI passthrough on kvm is to use a virtualization manager like virt-manager in linux, because it offers a really simple GUI where you can just "bind" a PCI device to a guest. When the guest is not running, you can use the GPU in that slot on the host system, or bind it to another guest. Linux doesn't mind at all that a GPU — or any other device, for that matter — is unbound from the system during operation, but of course you'll want display output for your host at all times, so you need a second GPU that the host can keep using while the gaming GPU is bound to the guest (which requires it to be unbound from the host).
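Under the hood, virt-manager's "bind" corresponds roughly to these virsh commands (the guest name and the GPU's PCI address 01:00.0 are examples):

```shell
virsh nodedev-detach pci_0000_01_00_0    # take the GPU away from the host
virsh start win-gaming                   # the guest now owns the GPU
# ... play ...
virsh shutdown win-gaming
virsh nodedev-reattach pci_0000_01_00_0  # give the GPU back to the host
```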
In practice, you connect either two displays, or one single display, to both GPUs. With a single display, you can easily switch between the host and the guest with the input-select button on the monitor. With two displays, you see both systems at the same time. If you do other things on your computer while gaming — chatting online, screencasting — you'll probably want two monitors.
A super easy way to have two GPUs is to use the iGPU on Intel Core CPUs or on AMD APUs for the host system, and a discrete GPU for the Windows guest.
It's impossible to give a "how to" that would work for every system. It would be possible to show how it's easily done on an all-AMD system without an Asus mobo, for instance, but different tweaks are necessary for nVidia cards, Intel chips, Asus boards, different BIOS suppliers, etc., so you'll have to experiment a bit to get it working on your system. Still, a plain PCI passthrough is pretty easy to implement — unlike a VGA passthrough (the same thing with just one GPU), which is advanced wizardry: it has to be configured through the CLI and requires some scripting of the bind/unbind procedures to even stand a chance of working.
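To give an idea of the kind of scripting involved, this is roughly what one bind/unbind step looks like through sysfs (the PCI address is an example, and `driver_override` needs kernel 3.16 or newer):

```shell
#!/bin/sh
GPU=0000:01:00.0    # your gaming GPU's PCI address

# Detach the GPU from whatever host driver currently owns it...
echo "$GPU" > "/sys/bus/pci/devices/$GPU/driver/unbind"
# ...and hand it to vfio-pci for the guest:
echo vfio-pci > "/sys/bus/pci/devices/$GPU/driver_override"
echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/bind
```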
With linux kernel 3.17 also came the ability to pass USB devices through over TCP (USB/IP). An external USB HDD, for instance, can be passed through to a guest not by sharing it or granting hardware-level access, but through a network socket, which makes everything much easier to manage from a security point of view.
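A sketch of how that looks with the usbip tools (the bus ID and IP address are examples, and package names vary per distro):

```shell
# On the machine that physically has the device:
modprobe usbip-host
usbipd -D                 # start the USB/IP daemon
usbip bind -b 1-1.2       # export the device (bus ID from `usbip list -l`)

# On the system that should receive it:
modprobe vhci-hcd
usbip attach -r 192.168.1.10 -b 1-1.2
```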
There are also other options besides kvm, which is the standard open-source virtualizer for linux. The most viable alternative is Xen, XenServer also being open source. In terms of performance Xen is not bad, and certainly much better (and also much more stable) than solutions like VMware, but even Xen cannot reach the same performance level as kvm. Kvm is also super easy to use because it uses direct language and terminology; you don't have to wrap your head around things like with Xen. A PCI device is just identified by its ID, and you can easily select "bind" from a context menu in virt-manager — easier is really impossible.
Again, this is an evolving technology that has grown enormously in the last couple of years, so use a bleeding-edge distro to do things like this — do not expect a smooth experience on conservative distros! The distro virtualization works best on, in my experience, is OpenSuSE: all required packages are pre-installed, and the tools are in the repos in their latest versions, with the most up-to-date patches and support for the latest kernel features. It has also always worked very well on Fedora, though I've seen some problems with kvm in Fc21 that I don't think are completely ironed out yet, but probably will be soon. It also works on Ubuntu 14.10 (without the latest kernel features for kvm, that is, because 14.10 is still on kernel 3.16), but beware that Ubuntu does have a tendency to occasionally randomly crash X when messing with GPUs, and that — in good old Ubuntu tradition — these crashes are mostly unrecoverable and require a system restart, so while setting things up and experimenting with settings, you might need more time than on a more stable distro. You might also have a go at it on Debian Sid, but then you'll have to hunt down all of the necessary packages manually. On an Arch-based distro I wouldn't recommend Manjaro, but rather Arch itself, and that will also require a lot of manual configuration. On Gentoo, likewise, I wouldn't recommend Sabayon but Gentoo itself — again, quite a bit of manual puzzling. RPM distros, being enterprise-targeted, are really the way to go for the quickest result in terms of virtualization, in my opinion.
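For reference, the usual starting packages on the distros mentioned (names as of these releases, so they may have changed since):

```shell
zypper install libvirt qemu-kvm virt-manager        # OpenSuSE
yum install @virtualization                         # Fedora
apt-get install qemu-kvm libvirt-bin virt-manager   # Ubuntu 14.10
pacman -S qemu libvirt virt-manager                 # Arch
```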