What if I want everything?

No, you need VT-d for this to work. You would also need another GPU for the Linux host to use (even something crappy like an R7 240).

What about the integrated GPU on the i7-4770K? And why wouldn't VT-x work? I'm really wishing I had kept my 8120 rig.

VT-x should work; those are different extensions. VT-d won't work, as it's not supported on that CPU. You could trade up to a 4790K, as it does support VT-d, but I got mine working with an FX-8350 and an MSI 970A-G46 (really shitty mobo) with the latest BIOS. As for using the onboard GPU, it causes issues with GPU passthrough. You could maybe patch your way around it (https://bbs.archlinux.org/viewtopic.php?id=162768 - do a find for IGP), but I didn't try it, as I didn't have that issue.
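
If anyone wants to double-check what their own box supports before buying anything, here is a quick sketch (just standard Linux procfs/sysfs paths, nothing distro-specific): the vmx/svm flags in /proc/cpuinfo tell you whether VT-x/AMD-V is there, and a populated /sys/kernel/iommu_groups tells you the kernel actually brought the IOMMU (VT-d/AMD-Vi) up.

```python
#!/usr/bin/env python3
# Rough check for the two things passthrough needs on the host side:
# VT-x/AMD-V (the "vmx"/"svm" CPU flags) and an active IOMMU (VT-d/AMD-Vi),
# which shows up as a populated /sys/kernel/iommu_groups once it's enabled
# in the BIOS and on the kernel command line.

import os

def has_cpu_virt():
    # VT-x shows up as the "vmx" flag, AMD-V as "svm", in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    return ("vmx" in flags) or ("svm" in flags)

def iommu_active():
    # If the kernel brought the IOMMU up, devices get sorted into groups here.
    path = "/sys/kernel/iommu_groups"
    return os.path.isdir(path) and len(os.listdir(path)) > 0

if __name__ == "__main__":
    print("VT-x / AMD-V present:", has_cpu_virt())
    print("IOMMU (VT-d/AMD-Vi) active:", iommu_active())
```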

I was thinking of using the iGPU for the host OS and the 7970 GHz Edition for the guest.

Until you have a CPU that supports VT-d, you won't be able to do any hardware passthrough.

Damn. Looks like I'm stuck with Windows for the time being.

 

EDIT: Actually, I just remembered that I have a spare GTX 460 OC lying around. Could I use that for the host OS and the 7970 for the guest? Or do I still need a 4790K?

You would still need the 4790K or a 4770 (non-K). However, the extra Nvidia card could be useful, as it means you wouldn't have to mess around with the patch to get the iGPU working for the host.

You get this much of a performance gain compared to what?

I.e., what is your comparison based on - hardware/software?

Windows vs. Linux. I think Zoltan or Wendell saw a 20-30 percent boost; can't remember who, though.

Testing pinning, are we? Sounds good!

I'm a bit late asking this, but how on earth is that possible?

Thanks for the guide, btw. Definitely going to try this once I'm ready to break my system again.

This sounds too good to be true, but I will have to try it out. The thing is, every time I use Linux I'm scared I will break something and then not know how to fix it.

From one noob to another (and bear in mind I don't know how much of a Linux noob you are): I'm loving just the fact of experimenting and learning from those mistakes. IMHO, if you follow the tutorials everything will be fine, and if not, and you have more than one system, you can come here and someone is sure to help you.

P.S.: as long as you don't go against their political views, hehe.

At least you could fix it. If something breaks in Windows, all you can do is wait for Microsoft to fix it. Some things in MS-Windows have been broken for over a decade and are still unfixed, and there is nobody who can do anything about it, because it's not open source. In the Linux world, if you're on a community distro and you run into something that needs fixing, you go onto the support forum of your distro and get help almost immediately. That help extends all the way up to the source code, so for instance if you discover an issue that is due to a setting or a bug in the code, the devs will change the code for you and provide a patch, usually within 2-3 days at most, often within a few hours.

Have you ever tried to file a feature request or bug ticket with Microsoft? If you have, you know exactly how Microsoft handles these things: first you get ignored, then when other users chime in to support you, you get a standard reply along the lines of "we'll look into this", and when so many users chime in that Microsoft can't ignore the problem any longer, a so-called dev or security expert of Microsoft chimes in with a message, devoid of any technical details, about how it can't be addressed for security reasons or because of third-party hardware or software compatibility. You never reach the stage of "conversation" with Microsoft, and people have gotten used to that; they forego the simple medium of conversation, expecting no feedback. In open source, conversation is everything: you go onto a forum and have a direct conversation with the maintainers and devs of the code themselves. Not with a PR person, not with a marketer pretending to be a security specialist, but with the people whose names are visible to everyone in the lines of code themselves!

Another point is that Microsoft-era IT providers have developed a culture whereby people who experience breakage are afraid to admit it, because it makes them look dumb. In open source, however, breakage is expected to occur, because it can't be excluded by any reasonable logic: it's software, it has bugs, that's just how it is. But since the source is open, anyone can detect those bugs, report them, discuss them and, if they feel up to it, propose a remedy for them. That's why open source software contains a lot fewer bugs than closed source software, and why less breakage occurs with open source software that is labeled "stable" by a community of users... because in open source the devs don't label software "stable", unlike in closed source. In closed source, the manufacturer decides when a piece of software is to be considered "stable", even if it can never get stable (remember Vista?), and people who don't accept that are treated like stupid idiots who can't use a computer.

In open source, however, code always comes out as unstable and is then tested by a hard-core group of users who debug it. If that hard-core group says the code is stable enough to release for testing by a larger community, the code is released as "testing" or "development" software, and a pretty large user base, with users of all skill levels running a wide variety of hardware, will again test the software and report their experience with it. This feedback is an essential part of the open source development method. Users are not ignored, because they are a vital part of the development process. All the bug reports and crash reports are public (although no private data is ever transferred), so anyone can track the bugs and the bug statistics. The developers have to sort out all of those bugs before they can move on to the next phase and label the software as stable, at which point it is released to the general public.

Why is open source software more technical then, if it's more streamlined and of a much higher quality standard? Good question: why would anyone provide such a variety of software, and so many settings and so much power over the system? Because the users demand it. There are Linux distros that have tried to launch with the opposite principle, for instance Android, or Ubuntu after they switched to Mir. They tried to free the user of the burden of settings. The result was that a lot of users just left and went to distros that did provide the power tools they required. A lot of Ubuntu users went to Manjaro, to Debian Testing, to Xubuntu, to Kubuntu, to OpenSuSE, etc., and a lot of Google Android users rooted their phones to switch to AOSP-based forks like CyanogenMod, Carbon, AOKP, Paranoid, etc., because people, once they get used to the power that open source software offers them, don't want to let that power go. They are not dumb; they see the benefit of having the power and the control. It's intimidating at first, but once you actually use it, and have run into a few problems that you have to solve and research, you discover all of that power and all of those features that just don't exist in closed source software, and you can't live without them any more.

A lot of the criticism of Linux and open source comes from people who have never really tried open source and judge it on maybe five minutes to a few hours of experience with some hyped open source thingy that sucks balls, so they don't really know what they're talking about. They are into the "gamification" of software, which has been the biggest selling point of all software and hardware since the iPhone came out, but they see "gamification" the way iOS, Android and, in a very bad attempt, MS-Windows 8.1 launched the concept: less choice for the user, and more power for the provider. The "gamification" of Linux and open source has always been there; software was always a game as long as it was open source or open source-ish. It disappeared with MacOS and MS-Windows. MS-DOS was one big game, UNIX was always a huge game, Linux is a huge game. It's a more complex "open world" game instead of a "linear" "pay-to-play/win" game like MS-Windows, but everything can be changed, everything can be configured, it's a huge source of experimentation, and if you break anything, you can reinstall from scratch in less than five minutes without destroying the personal data in your home partition, and you can change maps or games without losing your data or destroying your other maps and games. Open source doesn't require "gamification" by some daft marketing experts; it has always been about open-world exploration and construction, and it has always been a social game that is never played alone. It's just fun. That's why only people who haven't really tried it insist on the simplification or "gamification" or "unification" or "whatever-ication" of open source: they think that by "-icating" it they can control it, they can cheat the game, they can pay someone to take care of things or something... but it doesn't work that way. Ubuntu tried the gamification and commercialisation of user data, and all they got was a giant shit storm. Simplification and unification are not what open source users want; they are quite satisfied with privacy, security, power, efficiency and control, all of the things they don't have in closed source software, and which are cooler than gamification and pay-to-win.

You just have to take the jump, dive into the deep end, go for it. The first week, you'll have to unlearn quite a few things that you've picked up using commercial closed source software, things that aren't really right and that represent a skewed image of the reality of computing. After that, you won't have breakage, you won't have problems; you will have developed the necessary game-sense of open source to control the system and make it do what you want.

Great thread. I recently jumped into setting this up on top of Debian Sid; passthrough with my R9 290 works great (CPU: FX-8320). I was unable to get PCI passthrough working via virt-manager, so I had to configure it manually, based on this: https://bbs.archlinux.org/viewtopic.php?id=162768 .
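
For anyone else who ends up doing it by hand: the manual part basically boils down to adding a <hostdev> entry for the GPU's PCI address. Here's a rough sketch of doing the same thing through the libvirt-python bindings instead of editing the XML directly; the guest name "win-guest" and the 01:00.0 address are just placeholders, so substitute whatever lspci shows for your card.

```python
#!/usr/bin/env python3
# Sketch: attach a PCI GPU to a guest's persistent libvirt config.
# Needs libvirt-python; the domain name and PCI address are placeholders.

import libvirt

# Standard libvirt <hostdev> XML for a PCI device at 01:00.0
# (replace bus/slot/function with your GPU's address from lspci).
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("win-guest")  # placeholder guest name

# VIR_DOMAIN_AFFECT_CONFIG writes the device into the persistent
# definition, so it's present the next time the guest is started.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

conn.close()
```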

I saw this mentioned in the thread but didn't get an answer: is it possible to assign two physical cores to a single virtual core (in this case, 8 cores to 4 vCPUs)? I'm interested in seeing whether there could be a performance benefit to doing so on a CPU that has an excess of weaker cores, in games that can only utilize 4 or fewer cores.
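
As far as I know, you can't literally merge two physical cores into one faster vCPU; a vCPU only runs on one core at a time. What you can do is give each vCPU an affinity set of two host cores so the scheduler can float it between them. A rough sketch of that with the libvirt-python bindings, assuming an 8-core host and a running guest named "win-guest" (both placeholders):

```python
#!/usr/bin/env python3
# Sketch: give each of 4 vCPUs an affinity set of two host cores
# (vCPU0 -> cores 0+1, vCPU1 -> cores 2+3, ...). Needs libvirt-python
# and a running guest; the names/counts below are placeholders.

import libvirt

HOST_CORES = 8            # placeholder: total host cores (an FX-8350 has 8)
GUEST_NAME = "win-guest"  # placeholder: your libvirt domain name

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(GUEST_NAME)

for vcpu in range(4):
    allowed = {vcpu * 2, vcpu * 2 + 1}
    # pinVcpu() takes a tuple of booleans, one entry per host CPU;
    # True means the vCPU is allowed to run on that core.
    cpumap = tuple(core in allowed for core in range(HOST_CORES))
    dom.pinVcpu(vcpu, cpumap)

conn.close()
```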

What motherboard are you running? Just curious, since I have the same CPU and GPU.

Asus M5A99X EVO (not the R2), also using a GTX 570 as the host GPU (silly, I know, but it's the only spare GPU lying around).