So I plan on building a new system that will let me run Linux and do GPU passthrough to a Windows 7 VM, mostly for gaming and to ease some problems as I become more familiar with Linux in general. I would like to get a bit of help if possible.
My current use is gaming, watching video, internet browsing, and downloading, though almost every time I play with Linux I find myself doing other things besides those.
My budget is $2K USD, but I don't really want to blow it all on a CPU, motherboard, and RAM, and there are still other things I need to research.
CPU I've been looking at i5/i7s and Xeons; I'm thinking 6-8 cores is required. I know GHz isn't everything, but I'm not really familiar with Xeon processors, and seeing the GHz being so much lower seems a bit worrying, especially since games target mainstream specs.
They seem quite similar, except the Xeon costs a bit more with lower GHz. I'm kind of favoring the i7. Steps up from the i7 seem to be quite expensive. They both appear to support virtualization.
Motherboard Since the CPU socket is LGA 2011-3, it seems most chipsets are X99, and on the server side C612. I haven't really been able to find whether these chipsets support the needed virtualization extensions or not. Would love to know. Assuming X99 does support it, would I still have to check whether the motherboard supports them too?
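From what I've read, the check ends up being done from a running Linux system anyway; something like this (the exact dmesg strings vary by kernel version, so treat it as a sketch):

```shell
# CPU-side check: the 'vmx' flag means Intel VT-x (AMD's flag is 'svm').
grep -m1 -o vmx /proc/cpuinfo && echo "VT-x supported by CPU"

# VT-d (the IOMMU, which passthrough needs) also has to be enabled in the
# BIOS and on the kernel command line (intel_iommu=on); after booting:
dmesg | grep -i -e DMAR -e IOMMU
```

Both the CPU and the board/BIOS have to cooperate, which is apparently why people check from the OS rather than trusting spec sheets.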
Second graphics card I also have a 4K monitor, which is currently powered by a GTX 760. I know I will need another GPU for the passthrough, but I'm a bit curious about this area. Would the second card also need to support 4K resolution?
SSD So I was thinking about picking up an SSD. Would running a guest OS or the Linux host on an SSD wear it out faster? I'm really not familiar with SSDs at all.
Using Nvidia you're going to struggle; other than that, just make sure you've got a bunch of RAM and at least 4 cores to spare for your guest OS (I'd advise against fewer than 8 actual cores, e.g. not a 4-core HT CPU). The setup is kind of easy other than releasing the GPU, at least in my experience, and basically just requires some .img files which emulate the hard drive, plus some standard setup otherwise. Your host OS is going to dislike you fiddling around with the Nvidia hardware/software, so be prepared for some grey hairs and a bit of yelling and screaming at the monitor.
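For the .img part, the sketch is just this (the file names and the 120G size are placeholders, pick whatever suits you):

```shell
# Create a raw disk image for the guest to use as its hard drive:
qemu-img create -f raw win7.img 120G

# qcow2 is the other common format; it only grows as it's used, but raw
# tends to perform a little better for a gaming guest:
qemu-img create -f qcow2 win7.qcow2 120G
```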
I applaud you for planning ahead. First let me say I have been running a setup like the one you want to build for over 18 months and have found it to be very good at a lot of things and not so good at others. That's not a problem in itself, but there are things you need to be aware of and understand.
If you go about this with the understanding that you are going to build two systems in one case by using both physical hardware and virtual hardware you'll have a better understanding of what is needed for the host and guest systems to be happy and robust.
Since you want to game on your guest side (Win 7), there are things you will want to provide to the guest as physically passed-through hardware that will be blacklisted from the host system. In my opinion these items will of course include the GPU you want to pass through, but also a NIC, a sound card, and really an entire USB controller, so you have connectivity in your guest that isn't shared with the host. Of course these are all optional with the exception of the GPU, but if you want a guest that performs like a bare-metal install you will want to consider what I'm suggesting.
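To make the blacklisting concrete: each passed-through device is identified by the vendor:device ID that lspci -nn prints in brackets, and those IDs are what you later hand to vfio-pci (or pci-stub on older kernels). Here's a little sketch of pulling the IDs out; the sample output below is made up for illustration, on a real box you'd pipe lspci -nn itself:

```shell
# Made-up `lspci -nn` output for a GPU plus its HDMI audio function:
sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 760] [10de:1187]
01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio [10de:0e0a]'

# Extract the [vendor:device] IDs (4 hex digits, colon, 4 hex digits)
# and join them into the comma list vfio-pci wants:
ids=$(printf '%s\n' "$sample" \
  | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' \
  | tr -d '[]' | paste -sd, -)
echo "options vfio-pci ids=$ids"
# -> options vfio-pci ids=10de:1187,10de:0e0a
```

That last line is the sort of thing that goes in /etc/modprobe.d/ so the host driver never claims the card.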
So I'll start out giving my opinion on the components.....
Cores....the more the better. If you're going to go with an Intel CPU build, then my suggestion would be an 8-core product like the....
The reason is it's 8 cores / 16 threads. In this type of application clock speed isn't as important as the number of cores/threads, because you will be virtually sending cores to your guest system, and the more you can provide the better the performance. Along with the CPU cores you will be sending memory to the guest virtually, so I'd recommend 32GB in your system.
The motherboard is going to make or break your system. Along with a compatible chipset you will want lots of PCI-E slots, lots of USB ports/controllers, and a decent on-board NIC and sound card for your host system. You also want options for the future; you just don't want to put yourself in a position where you say "I wish I'd done this or that," because most of us have had to say that on our first passthrough build.
The GPUs you use are totally up to you. There are issues using an Nvidia GPU in a virtual environment; these issues are created by Nvidia on purpose and are intended to make you spend more $$$ buying a Quadro card instead of using a normal desktop GPU. There are workarounds to make it work, but they do come with a slight performance hit to the guest environment...if you plan for it, it shouldn't be much of an issue. If you plan on gaming on both the host and the guest, you will need two cards that will satisfy your needs. I guess it goes without saying you will need a second monitor to do this, but I'll mention it just in case you don't know.
Personal opinion.....I'd use AMD GPU(s), at least on the guest side of things, but that is just my opinion.
Drives...... It really doesn't matter if it's an SSD or a spinning-platter drive; the thing is you want separate drives, one for the host system and one for the guest. Of course the faster the drive the better the performance, same as any other usage, but you want to keep latency down by giving each its own physical drive. You can partition a drive and give half to each, but you will find that the two systems fight each other for access, slowing both down. The same thing happens when you share a NIC or sound card: both are sending info while waiting for each other to hand off control, causing latency.
A couple other things....input devices: you can share your keyboard, mouse, game pad, etc. using software like Synergy or a mechanical KVM switch, or you can just double up and have separate devices for each if you have the room. The case you build in is also important; you want lots of space and lots of cooling potential, because it will be jam-packed with stuff.......Then don't scrimp on the PSU. You will want a high-quality PSU that can run all your stuff, so study your parts list, then buy a PSU that can provide what you need wattage-wise with a little capacity to grow in case you decide to go AIO cooling or some other upgrade that you're not considering today.
The last thing you need to consider is the host OS. Some Linux distros are very well suited for virtualization, others not so much. The thing is, if you're going to be learning Linux at the same time you're doing this, you will want every advantage you can get, so pick a distro that will cause you the least amount of grief; in my mind that is going to be openSUSE, Fedora, or Arch (Arch if you really want a challenge). Personally I use Fedora and it has been very good to me; it has a modern, up-to-date kernel that is constantly evolving. Some distros will require you to patch the kernel to get the needed up-to-date support for virtualization, and while this shouldn't be a big hurdle, it does have the potential to break things...just my 2 cents.
Last thing....... look around the forum here; there are several threads of people doing what you're wanting to accomplish, and lots of good info already exists that can give you some insight. There are many resources on the net also, but one of the best is this link.
Read the whole series from start to finish and you will come away with a good understanding of what you have to do on the host side of things to make this work. If you're not intimidated after reading it, plan your hardware out and get started....lol
If you have any questions just ask, myself and others will be happy to try to help you.
@Blanger, I'm going to piggyback on you for this sort of topic again. @zyfer, I'll just add my two cents onto @Blanger's thoughts because they nailed just about all of the important stuff, and I'll also provide a few other links that I found useful when I did this project a few months back.
If there's anything else I can help with, just ask!
This. I did this project on an existing rig that wasn't designed for it, and while it works, I really wish I'd planned ahead better, and I am doing so for my next one, which I'm building in a few months. My current one has just a 4-core i5, and sharing that with a VM isn't ideal, although I'm rather impressed with just how well it actually does sometimes.
I went with AMD myself after reading about the problems with Nvidia GPUs, and my R9 390 worked perfectly, so a +1 for AMD GPUs on this sort of project.
My current drive setup has an SSD for the host, one for the guest, and then a large data HDD partitioned between them. It's not ideal and I plan to get all dedicated drives for the next rig. Like @Blanger said, you'll have problems if the host and the guest are both trying to use the drive at once. Data drives are cheap, and SSDs are getting cheaper all the time; it's definitely worth the cost to get separate drives for host and guest.
Synergy is the solution I use for my system, including during gaming. I don't play competitive online games where the tiniest latency can make or break the game, but I've had no major issues with it. It's a bit fiddly to get set up right, but once you do, you can just about forget it's there. If you're going to use it in games though, make sure you lock your mouse to your VM screen(s) before starting; otherwise almost all games will get confused by the mouse input and mouse movements will cause your game to have a seizure.
If you choose to go with openSUSE, I may be able to provide some help with this project, just like @Blanger will if you choose Fedora. I used openSUSE and if you're inexperienced with Linux, it will allow you to set up just about this entire project with GUI controls, which can be seen as a good or bad thing, depending on how you look at it. But it does have a few quirks I had to overcome for my system, so I'd be happy to provide any help I can if you choose to do this project.
And finally, a few other links I used: First, here's the VFIO blog @Blanger listed, which was invaluable. There are four parts to the main tutorial, so be sure not to overlook the others.
Keep in mind, some of these posts aren't very new, so things may have changed. When I did this project, I sort of combined information from all of these links, plus a few others.
@Blanger, we've discussed this a bit on another thread, but as much as I know AMD would be easier for my next GPU passthrough build, if Nvidia has the better GPU for my price point when it comes time to order, I might go with them. So I'd be interested to know what the workaround is and how involved it is...might influence my decision.
I will tell you that I switched to Linux as my daily driver just a few months before doing my first passthrough. For me it really wasn't a struggle, since I was around during the DOS days (prior to GUIs) and wasn't uncomfortable using a CLI; it's just the nomenclature, the commands, that you have to learn, then finding an editor that you're happy with.
A lot of Windows users think that using a CLI is difficult, and it used to be harder, but Linux has built-in safeguards to the point that it at least tries to help you not wreck the system; it's still easy to screw up, but not as easy as it used to be. It took me a while to wrap my head around what I was trying to do with the passthrough as far as "blacklisting" hardware, but once I grasped what was happening it became easier to add items to the grub configuration and get them out of the host's control.....that is the really big hurdle, at least it was for me.
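As an example of the grub side of it, the relevant bits look roughly like this on my kind of setup (the PCI IDs are placeholders for whatever cards you're hiding from the host, and the grub paths differ a bit between distros):

```shell
# /etc/default/grub -- enable the IOMMU and claim the guest GPU for
# vfio-pci before any host driver can grab it (IDs are examples only):
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=10de:1187,10de:0e0a"

# Then regenerate the config and reboot (Fedora-style path shown):
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```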
I guess the biggest issue for Windows users wanting to switch is that Linux isn't Windows....it's very different. In a lot of ways it's better because of the control/freedom it gives users, but in a lot of other ways it can really suck if you must have access to software that isn't supported by Linux; for me that was Adobe Photoshop and Illustrator. Whatever I did, I had to find a way to run them or I was going to be stuck in the MS environment forever. Yes, that is what prompted me to build a passthrough system, and yes, I'm still using Windows, but it is containerized and under my control, not the other way around.
Easier....yes. Better? Not really sure. We switched to AMD/ATI GPUs a lot of years ago because of driver issues we had in some production software, so I was already in the AMD camp. To me it's just a matter of choice; neither has 100% fail-safe drivers when it comes to Linux or Windows, so it's a crap-shoot, driver to driver, which is going to perform and which is going to break stuff that worked with the previous version.
To use an Nvidia GPU in passthrough you basically do all the same as you would with an AMD GPU, but you add the line -cpu host,kvm=off or options kvm allow_unsafe_assigned_interrupts=1. It's a little different from distro to distro; I tried it on Fedora when I was trying to get a GTX 550 Ti to work and didn't have any success, but I'm sure enough people have been successful at it that there should be distro-specific info on the various wikis.
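To show where those two lines actually go (a sketch with made-up paths and PCI address, not a complete working command line):

```shell
# Variant 1: hide the KVM signature from the guest on the qemu command
# line, so the Nvidia driver doesn't refuse to load in the VM.
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host,kvm=off \
  -m 8192 \
  -device vfio-pci,host=01:00.0 \
  -drive file=win7.img,format=raw

# Variant 2: the module option instead, e.g. in /etc/modprobe.d/kvm.conf:
#   options kvm allow_unsafe_assigned_interrupts=1
```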
The performance hit: as you can see, you're telling the guest that KVM is off when it's not, so some allocations are going to be limited, but it's all on the software side of things, so the hit is minimal, a few percentage points I've been told, though I round up to 5% for the sake of discussion. Really, no one can say what the performance hit would be on any given system except in theory or as a guess, but if you're giving the guest enough resources, a hit of a couple percent in the host or guest should not be noticeable.
It really made my brain hurt...trying to get it to work with Nvidia. Add to that the fact that it was a problem they created on purpose, and it really didn't take much convincing to just use an AMD GPU. I'm not a big-time gamer, and if I get 30-60 FPS @ 1080p I'm happy as long as the game runs smoothly, which most of my stuff does. I don't even update the AMD drivers in Windows until I'm pretty far behind or have some sort of issue; if it works, I try to just leave it alone..lol
I recall seeing this line now on some of the forums and blog posts I read, and I couldn't figure out for the life of me what it was for. That makes sense now, since some of them were indeed using Nvidia GPUs. I switched to AMD on my last GPU and so far it's treated me really well, so I'd be more than happy to stick with them. All I'm saying is that if I want to spend X dollars on a GPU for my next build and Nvidia's GPU at the X-dollar price point out-performs AMD's, I might be tempted to get it instead. But we'll have to see how things go.
This is what really burns me, when Nvidia pulls BS like that. It's that kind of thing that makes me want to stick with AMD, even if Nvidia's card WAS slightly better at the same price point. We'll just see how things look when I do my new build. It's a few months off, probably around the time I graduate college in the spring, I'm thinking.
Yup....it's enough motivation for me to just say screw them. I don't really care if my GPU performs 10% worse in a given game because the game was optimized for Nvidia; gaming is not a mission-critical thing, so I can suffer some loss in performance, resolution, or FPS and smile knowing that I'm not helping them screw the next guy who wants to use the hardware he bought in an unconventional manner as far as the manufacturer is concerned. It's not like you're trying to re-engineer their intellectual property; you're using the device they sell in the manner it was intended to be used, just in a virtual environment.
To me it's like using the power of a bunch of Nvidia GPUs as Bitcoin miners......I'd bet they wouldn't like that usage either...lol
I have a 770 that seems to work in passthrough, but it has micro-stutter issues in all the UE3 games I played (FPS in the 70s, but frametimes would jump all over the place, making it feel terrible). To be fair, the only UE3 games I tried were Smite and BL2.
@Blanger, ever experience anything like that on AMD cards? Any AMD GPUs that work particularly well? Happen to know if those 470/480s work better than the Fury cards? Might have to bookmark this for a future build myself xD
I guess to answer your question I should tell you I'm running older R9 270 cards, which a lot of people consider to be outdated or the bare minimum in modern performance...lol
Having said that, the only game I've had real micro-stuttering issues in is Witcher 2, which really seems to push my setup. Just about everything else I've played has been fine and playable....maybe not on the highest settings, but on normal graphics settings I have no issues. (I should also mention that in my guest I'm running two 270s in Crossfire, which seems to have helped.)
I do have FPS issues in some games like Far Cry 3; the FPS moves around a lot, between the mid 30s and high 60s, and I do see some 100 FPS spikes, but not many. I find it odd that there is such a large spread in FPS, but then again I'm not a big-time gamer, so as long as the game is playable I'm pretty happy.
As to the 470/480s, I have no experience, so I can't advise you, although I do believe they would perform just fine in a passthrough situation; no real reason they would not. In my world I'm more concerned with providing the KVM with as many resources as I can (CPU cores, memory, I/O) than I am with graphics performance. I have not found a game that would not run, but like I said, I tend to start off on normal settings and tweak the graphics up from there; sometimes I just leave it at normal settings or let the game choose, as some games are pretty good at choosing during setup/install.
Honestly I'm as interested as you are about other GPUs and how well they function. Once Zen is out next year, if it is everything they say and has a decent price point compared to Intel, I may look at building a new box, and of course I'd want a newer GPU for that box....
Thanks mate. I remember Linus was having issues with the Fury/Nano having reset problems (though the performance seemed good); I was just wondering if that was cleared up with the RX stuff.
Don't mean to hijack the thread or anything, but I do have another question: what settings do you use to give the guest storage? I usually let it write to /dev/sdX directly (this is weird because SELinux flips out and grub adds the VM as a boot option xD), but stuff like using the host cache and the actual controller it emulates I've never fiddled with. Most of the people that do this don't commit to it, usually just using image-file storage, which isn't the best in terms of performance.
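The knobs I mean, roughly, look like this on the qemu side (the device path is just an example, and this is a fragment, not a full command line):

```shell
# Whole-disk passthrough with host cache disabled and the virtio
# controller; /dev/sdb stands in for the guest's dedicated drive.
qemu-system-x86_64 \
  ... \
  -drive file=/dev/sdb,format=raw,cache=none,if=virtio
# In virt-manager the same settings live under the disk's advanced
# options: cache mode = none, disk bus = virtio (the Windows guest
# needs the virtio drivers installed for the bus setting to work).
```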
Actually, I of course use a separate drive that has been set up for use by the host system; it appears in the QEMU/virt-manager KVM configuration where you choose the space for the KVM to use. In the image below, it shows up when you hit the browse button.
Then I just allocate the entire drive space to the KVM, as in the screen below.
As to the actual path it is using, honestly I've never looked, and being at work right now I can't check, but the drive does of course show up in the Linux file manager and I can access it from there; the guest shows it as the full drive I have allocated, with whatever free space the drive has available.
Like you, I just accept that QEMU/virt-manager knows what it is doing and roll with it...lol I do not see the KVM as a boot option in grub; I pretty well figured that's because the KVM can't run without the host running underneath it, but it's interesting that you see it as a boot option.
I did at first use the 40GB that it wants to allocate for a VM, but once I got a working passthrough and a good install of Win 7, I started adding programs, and 40GB just wasn't even going to come close to what I was going to need. Windows doesn't know it's in a virtual environment and does all the things it does on a bare-metal install, like a swap file and space allocation; heck, with just Windows, a few OS updates, and a couple games/programs, 40GB will be hitting its limits. It's OK for testing, but not for a working OS.
That's why I tell people to look at the specs the OS and programs need to run on a bare-metal install; you need to give that much to the virtual environment for it to have any kind of chance. You would never install Windows on a dual-core CPU with 4GB of RAM and a 40GB HD and expect it to do very much, yet this is what a lot of people try to give the KVM as virtual hardware, and they come away saying it sucked...of course it did, and it would on bare metal also...lol
I knew there was something else that had to be done to run an Nvidia GPU, and I just found it again....
"The GeForce card is nearly as easy, but we first need to work around some of the roadblocks Nvidia has put in place to prevent you from using the hardware you've purchased in the way that you desire (and by my reading conforms to the EULA for their software, but IANAL). For this step we again need to run virsh edit on the VM. Within the section, remove everything between the tags, including the tags themselves. In their place add the following tags:
Additionally, within the tag, find the timer named hypervclock, remove the line containing this tag completely. Save and exit the edit session."
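If I'm remembering the blog right, in libvirt-XML terms that works out to roughly this (the VM name is a placeholder, and tag details can vary by libvirt version, so double-check against the original post; the forum seems to have eaten the tag names in the quote above):

```shell
# Open the VM's XML definition for editing:
sudo virsh edit win7-guest

# In the <features> section, the <hyperv>...</hyperv> block gets
# replaced with:
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
#
# And in the <clock> section, this line gets removed entirely:
#   <timer name='hypervclock' present='yes'/>
```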
I had forgotten about doing this till I just read it again....lol
After a day or two of not seeing replies, I kind of figured this was a no-go thread. I checked my email just a bit ago, saw some messages, came back, and YaY! New replies. I've already read them. I've been doing research here and there along with other misc things.
I have read the post where you talked with and helped a guy over here:
which, while similar, is a bit different.
I have also tried this a while back but found out my motherboard was trash. Space shouldn't be a problem though, since I have an Obsidian 900D. I am pretty much a noob as far as Linux goes. I have done a few things, like getting Windows to see files on a Linux PC and playing around with a few programs, but I haven't spent nearly the time with Linux that I have with Windows.
This kind of led me to check prices on eBay, which appear even cheaper. I do know this is a v2 version, which seems to be the only one at 2.6GHz. The performance seems to be about the same as the i7 processor I was looking at earlier.
I would like to know how hot Xeon processors get; I haven't found anything about it.
I am thinking that getting a dual-socket motherboard and two of these processors might be the way to go, though I seem to be having problems with the motherboard side of things.
I thought socket LGA 2011-3 was the same as LGA 2011, only allowing higher voltages. It looks like I might be wrong, since I have seen some say that an LGA 2011 CPU can't fit into an LGA 2011-3 socket.
So, assuming that LGA 2011 can only work with an LGA 2011 socket board, I found this board, the ASRock EP2C602 dual-socket LGA 2011 server board: http://www.newegg.com/Product/Product.aspx?Item=N82E16813157352 I have been looking at it for a few days now, but I'm still a bit iffy on it. It's the same one in the 16 Core / 32 Thread article, and it's so far the cheapest. The chipset itself is C602, which appears to support virtualization according to
Although I'm feeling a bit more comfortable with this board in mind, I'm still a bit hesitant, because the chipset appears to say it supports PCIe 2.0 while the Newegg page says PCIe 3.0. Also, I don't really see where it mentions virtualization support. I'm also a bit confused about the RAM. It talks about DDR3, which I understand, and ECC, which I understand... but then I see "R/LR ECC and UDIMM" and I'm a bit lost.
It also doesn't come with audio at all, which I can kind of understand since it's a server motherboard, but does that mean I'd have to get two audio cards? I'm not even sure what to look for in audio cards at all, or how I would get one set of speakers working with two cards.
As for graphics cards... I'm thinking of picking up an AMD Fury card and passing that through while my GTX 760 runs the Linux environment. A second monitor is a no-go, though, since I barely have room on my desk for my current one. It looked like in one of the videos a while back Logan was running a Windows VM in seamless mode. Is that still possible?
Also, I read somewhere that the graphics card is supposed to support virtualization too... which seems confusing. As I understand it, AMD supports this in general in the last few generations of their cards, but is there a way to be certain about it?
The storage I don't think I'll have much of a problem with. I will pick up an extra SSD for this, and I might just do a similar setup and have the Windows VM connect to the Linux host via shared folders / SMB / Samba.
The Nvidia workarounds with virsh are for if you're using libvirt/virt-manager; the ones you listed first are more for if you're calling qemu from the command line without libvirt. Though now that libvirt has support, I dunno why anyone wouldn't use virt-manager.
Well, it's a 150+ W CPU, so it's going to require a good aftermarket cooling solution. It should run about the same as my 8370 though, which is 125 W, and I'm using a Corsair H100i that works just fine.
While that sounds really cool, you want to make sure the MB has everything you need; most dual-socket MBs are "server" class motherboards and some have limited I/O (USB, SATA, etc.). I'd be interested in seeing what you find, because that setup would make one killer KVM passthrough machine....lol
That MB would be a great choice; ASRock has become one of my favorite manufacturers, offering lots more bang for the buck at a very good price point.
Yeah.....you could use USB-based sound cards, or one PCI card for the host and a USB one for the guest. I use headphones when I play games, so the USB sound card was an easy fix for me. They do make switches that will let you input two sources to a set of speakers and switch between them....
Nope.....you will need a second monitor. While you can run Windows in a VM using something like VirtualBox and share the same screen, you cannot with a KVM and passthrough....well, I say that, but sure, you could use a KVM (keyboard, video, mouse) switch to switch the inputs between two GPUs on one monitor. It can be done, but you really will find that you'll want access to the host screen for one reason or another. A monitor stand, tree, or anything that will increase your area will work; I have 4 monitors on a stand, two 27" on the bottom and two 23-24" on the top. Looking at the desk space, there isn't enough room for the two 27" monitors to sit side by side, but hanging in the air there is room for all 4; of course I have to tilt my head back to see the upper screens, but I don't use them for gaming, just viewing stuff.
Nope.....not even an issue. Anything that will work on bare metal will work in a passthrough; it is in theory the same thing, since it's physical hardware you are passing through, not virtual hardware.
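If you want to sanity-check a card once the board is running, the usual trick is to look at the IOMMU groups; whatever shares a group with the GPU has to be passed through along with it. A rough sketch (needs the IOMMU enabled on the kernel command line first):

```shell
# List every PCI device by IOMMU group; run on the host after booting
# with intel_iommu=on (or amd_iommu=on).
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/}; n=${n%%/*}   # group number from the path
  printf 'IOMMU group %s: ' "$n"
  lspci -nns "${d##*/}"                # device address, e.g. 0000:01:00.0
done
```

If the GPU and its HDMI audio function sit in a group by themselves, you're in good shape for passing them through.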
The Fury/Fury X/Nano cards have (or used to have, not sure if it's fixed) a problem where you couldn't reboot the guest without having to reboot the host. Not that big a problem once you get everything set up, but it can be an annoyance.
As far as seamless goes, not with a GPU passed through (that I know of). Steam streaming can work, though.