Dual Booting, Questions and Concerns about Linux and Dual Booting

Hey guys,

This is my first post on here and I want to preface this by saying that I am a completely new user and have no previous knowledge with regards to Linux or Dual Booting. I have searched and researched the topics, and as such I think I understand at least a bit of the material but I want to posit some questions and concerns before I continue. 

I suppose before I start explaining my questions and concerns I should tell you what I'd like to do, and what my system is:

I'm interested in learning Linux, whether that's through Arch, Debian, Ubuntu, Mint, or another distro (though I'm leaning towards something that will actually let me learn what's going on in the background). Whatever distribution I decide on, I would like it to be somewhat demanding with regards to problem solving, but with enough community support that it's not impossible to find solutions should I completely dead-end on my own. In other words, Ubuntu is probably out, as is Mint, but as I'm writing this, I'm willing to be persuaded. Present your arguments, if you're feeling up to it, and if your argument has logical merit, I will likely consider it. That goes for any and all distributions of Linux; just keep in mind that I know very little, if anything at all, about Linux as an operating system (though I am here to learn!), so you'll need to explain your arguments if you're going to convince me.

My system as it stands, is:

  • 128GB SSD (currently occupied by Windows 7 Professional, with approximately 50GB available)
  • 3TB HDD (2TB of space actually usable; 1TB of this is free space)
  • Intel i7-3820 
  • Rampage IV Extreme
  • 16GB Patriot Viper 3 1600MHz RAM
  • Corsair H100
  • Corsair HX850 PSU
  • EVGA GTX 670
  • Fractal Define R4
  • Philips 23.6" LCD Monitor

I know much of that is irrelevant, but I want to keep in mind that I will always need drivers to run this hardware on whichever distribution I decide on. I am completely uncertain as to which distribution has the best community support/drivers, or whether the distributions are close enough (as they are Linux based) that drivers can be ported across. Any information on this subject would also be greatly appreciated. 

My goal with this system is to use Windows for school (which I have been doing and will continue to do) as well as gaming (while I'm willing to game on Linux, I understand that not all of the games I would like to play are supported on Linux yet. Too bad, really.). I plan on using Linux to learn Linux and get myself prepped for any careers I may decide on that require me to be at least somewhat well versed in it (I'm planning on going to school for Technical Programming starting in September 2014). I live in Vancouver, BC, Canada, so I'm not sure whether knowledge about which OSes are used in the business will be relevant across international lines, but if you have any advice I'd love to hear it regardless, just keeping the difference of location in mind.

 

So, the Questions and Concerns:

 

Wubi: what is it, and why should I use it, or not use it?

Dual booting to an SSD? What are the things I need to be concerned about with this?

How should I partition the SSD if this is a viable option? What are the advantages or disadvantages of partitioning an SSD? Are there issues with partitioning an SSD more-so or less-so than partitioning an HDD?

Accessing files on HDD from a Dual-boot system. I have a storage drive, as you probably gathered and I would like to use this as communal storage. Can I do that without partitioning it? Does Linux access the same file-types as Windows or do I need to partition my HDD as well to use it for Linux?

I don't understand the separate partitions that people recommend for a Linux system. I often see these listed: / (I assume this is root, which is where the main stuff happens, but I'd like a little more explanation on this), /home (no idea what this one is for), and swap (I think I know what this is, similar to a cache used on an HDD, but I have no idea how it functions, why it's important, or whether it's necessary for Linux, though I understand that there is a debate about that). In short, I have no idea what they do, why they're recommended, or whether there are other partitions recommended.

When installing Linux (I plan on doing this through a bootable USB), what is the best approach (depending on distribution, I suppose) to managing the installation so it is the most efficient install possible (uses only the space it needs on the SSD, with ample room on my storage drive)?

Bootloaders: I know the Windows bootloader doesn't recognize Linux (or Mac OSX, I think?), and I've heard about GRUB, but I have no idea what it is, what it does, how it works, or whether it's a separate or included install depending on the distribution. I also have no idea where to start with bootloaders in terms of understanding how they work.

Would it be easier to load Linux onto a new, separate hard drive (say a 500GB Western Digital or whatever HDD), or would that complicate things more? Should I create a second system within my system (a second SSD and a second HDD for storage)? Could I put Linux on a separate SSD (say a 64GB one) and use my HDD for storage? (I suppose that's a pretty similar question to whether I can use the HDD as communal storage.)

How many commands should I know before even attempting to install Linux? I know Arch is largely text-based and is entirely text-based during the install (I watched a guide and didn't understand it at all, though I plan on taking notes the next time I watch it, hopefully with a bit more knowledge under my belt), and it just seems intimidating. Sure, I could use the Arch Wiki and follow a bunch of step-by-step instructions, but I wouldn't know exactly what I was doing, and I find purely step-by-step instruction-based stuff completely frustrating when it comes to troubleshooting, because troubleshooting requires you to have some knowledge of what's going on.

Do I need to know how to program before getting into Linux? Should I know a bit of programming? I have a bit of knowledge when it comes to programming, mostly C++ and HTML/CSS but my knowledge is minimal at this point (hence why I'm going to go do a diploma on the subject). I get programming, it's not that hard, I just haven't had time to really delve into it yet (I understand it gets harder as you reach more complex problems, but I have a degree in Philosophy from the University of Victoria, and my degree was largely based around Symbolic and Theoretical/Analytic Logic which makes understanding statements in programming quite simple). If I need to learn some more programming, recommendations on languages/sources of information?

Optimization: once I install Linux, whichever distribution you recommend, how do I optimize it so my fans aren't always running at full speed, etc.? I.e., how do I make it run smoothly?

Desktop environments: what are they, which ones do you recommend, and how do you change them out on different distributions?

Log-in screens: how do these work on Linux? What are they, are there different options available, and how are they implemented?

Kernel (The big topic) I understand that every OS has a kernel, every application also has a kernel. I don't know what a kernel is, I don't know what it does, how it works, why it's important, or what it's used for, but I've looked at Linux From Scratch, etc. and it requires you to do work on the Kernel from what I understand and that sounds absolutely terrifying to me, which is why I'm curious about it. I almost want to dive into this just because I have no idea what it is and I think it would really help me learn as much as I possibly could about the subject. That being said, I feel that if this is an integral part to all operating systems, I don't want to fuck anything up. (pardon my french.) As such, looking for information, what is the Kernel? What does it do? How does it work? Why is it so integral to the OS?

What programs can I expect to be missing when I move over to Linux?

What programs can I port over somehow? (How does this work, also?)

 

Those are my main questions and concerns, if I think of any more, I'll be sure to edit them in.

 

Thanks in advance for all your help. Resources are completely welcome, links to other forum posts if I've missed something somehow are also more than welcome. (No search function in the Forums? I ended up using ctrl + F but I mean an integrated search system would be nice since according to the forum rules we're supposed to search for our topic before posting on it. If I completely missed this, please feel free to screenshot/circle it and prove me wrong!)

 

Cheers,

 

Crumpled 

Okay, first thing I'll say is get a second SSD so that your OSes are separate. Partitioning that HDD will be a pain otherwise, and if the drive breaks you'll lose both systems.

To learn, I recommend using virtual machines like VMware Player: you can install Linux and use it while staying in your Windows 7 PC. You can have as many virtual machines as you want, and you can even try dual booting there.

Best distro to start with: Ubuntu.

What you can't do with Linux: watch Blu-rays with a free app. :(

The Wine app will let you use some Windows apps in a Linux environment.

Definition of the kernel from Wikipedia: the kernel is a computer program that manages input/output requests from software and translates them into data processing instructions for the central processing unit and other electronic components of a computer.

You don't need to do anything with the kernel; just update it to new versions when they're available, with a simple command or as part of a regular system update.

No programming required to learn Linux. :)

Tutorial on how to install Ubuntu with VMware Player: YouTube.

Good luck

Wubi, what is it, why should I use it, or not use it?

Wubi is a program that enables you to install Ubuntu in Windows. It will give you the option to boot Ubuntu when you start the computer. It's not the most viable way to install Linux. 

Dual booting to an SSD? What are the things I need to be concerned about with this?

 I would use separate hard drives for the different OS's.

Accessing files on HDD from a Dual-boot system.

You can access the files on an HDD from Linux. No problems there. 
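For reference, a shared NTFS data partition is usually mounted through the ntfs-3g driver, which gives Linux read/write access. A hypothetical /etc/fstab line (the device name and mount point are examples; check yours with blkid or lsblk):

```
/dev/sdb1  /mnt/storage  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```

The uid/gid options make the files owned by the first regular user, so you can write to the drive without root.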

How many commands should I know before even attempting to Install Linux?

None. I don't think it matters. If you have a problem, you will almost always find the solution written out, including the terminal commands. Here are a few basic ones though: sudo apt-get install (name of program), sudo apt-get update, etc. If you are using Ubuntu, then Alt+F2 will also be useful to you; it lets you run a command directly. Typing gksudo gedit will open up a text editor with root privileges, allowing you to edit system files on your computer.
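To get a feel for the terminal, here are a few safe, read-only commands that work on any distro, with the install commands shown as comments since those differ per distro family:

```shell
pwd         # print the current directory
ls -l /     # list the filesystem root: /home, /etc, /usr, ...
uname -r    # show the running kernel version
df -h /     # show how much space is left on the root filesystem
# Installing software differs per distro family, e.g.:
#   Debian/Ubuntu:  sudo apt-get install <package>
#   Fedora:         sudo yum install <package>
#   Arch:           sudo pacman -S <package>
```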

Do I need to know how to program before getting into Linux? Should I know a bit of programming?

No. I don't know any.

Optimization? Once I install Linux, whichever distribution you recommend, how do I optimize it so my fans aren't always running at full etc. i.e. How do I make it run smoothly?

You shouldn't have to worry about it. I've never had to do that on several machines.

Desktop Environments, What are they, Which ones do you recommend, how do you change them out on different distributions?

GNOME, KDE, Xfce, Unity, Cinnamon, etc. are desktop environments; Ubuntu, Zorin, and Mint are distros (all Ubuntu/Debian based) that each ship with a different default desktop. I recommend either Ubuntu or Zorin. I'm using Ubuntu 13.10 right now.

Kernel

www.google.com

What programs can I expect to be missing when I move over to Linux?

Not missing so much as replaced with open source programs. For example, Word is replaced by LibreOffice Writer. There are tons of programs to choose from and modify how you see fit.

What programs can I port over somehow? (How does this work, also?)

You can run a whole bunch of Windows programs using Wine/PlayOnLinux. I installed CafeSuite in Ubuntu using PlayOnLinux and it ran great.

Let me know if you have any more questions.

I do not agree with running Linux in a VM. The other way around makes much more sense: running Windows in a VM. Your system is ideal for that (socket 2011, VT-x and VT-d support, lots of RAM, lots of storage).

I also do not agree with using VMware Player: it's not entirely open source and will taint your kernel. It also doesn't work well with the proprietary nVidia drivers, which will likewise taint your kernel; with an nVidia card you need those if you want to game under Linux, because nVidia boycotts the open source drivers. And although the proprietary nVidia drivers are a pain in the butt on Linux, because you need patches to make them compile on anywhere near modern kernels, it's a relatively easy fix. If there is one thing I would change about the system, it's adding (yes, adding, not replacing) an AMD GPU (not an expensive one, something like a 7770 or 7790 will do fine; there are probably very good deals on 7850 or 7870 cards out there if 150 USD is still OK). That way the 670 can be bound to the Windows virtual machine so you can game with a performance gain in Windows, and you can use the AMD card with open source drivers for Linux gaming.

Don't go for Ubuntu; in fact, avoid anything Canonical like the black plague, it's nothing but trouble. That also goes for Ubuntu derivatives like Mint (not Mint on Debian). Try to get one of the major distros instead (the major community distros are Fedora/CentOS, OpenSuSE, Mageia/ROSA, Arch, Slackware, Gentoo and Debian) or a direct derivative of one of those that is partially upstream and downstream at the same time, like Manjaro is to Arch. If you want to enjoy the latest and greatest features, get a leading edge distro (the release version uses a very recent kernel and actually works with it) or even a bleeding edge distro (the testing repos provide everything you need to run the latest kernel in a stable fashion; in order of bleeding edgeness: 1. Fedora, 2. Arch, 3. OpenSuSE Tumbleweed, with Fedora Rawhide being the bleeding edge of the bleeding edge). Fedora is the upstream of Red Hat Enterprise Linux (RHEL) and its derivatives, the leading distros used in enterprise, government, military and research. OpenSuSE is the upstream of SuSE Linux Enterprise Server (SLES), the second most used distro in enterprises. ROSA is the upstream of ROSA's enterprise Linux, the third most used distro in the enterprise world, and together with Mageia it is the successor to the Mandrake heritage. Debian is also used on a lot of servers, but it's also the least current of the major distros.

Part of the linux experience is the ability to distrohop, try out stuff and change distros until you find what you like, and until you know how to use the different methods and package managers used by the different major distros. I would recommend keeping up-to-date on Fedora (that is the most important distro if you want to go for a career in linux), OpenSuSE (very nice intermediate distro to start out with, I would recommend this as first distro for someone that wants to work in linux later, because it's very easy to use, very stable, has the advantage of Yast and 1-click-Yast-install, and has by far the best documentation in manual-form of any distro), and Arch (I would recommend Manjaro, it's Arch based but offers some great solutions for typical Arch-problems and has the most helpful and accessible support and community of any distro out there).

The kernel is the actual Linux product. It was created by Linus Torvalds, who still governs its development, which is hosted by the Linux Foundation. The kernel is basically a UNIX clone, originally written for x86 machines but now spanning almost all architectures known to mankind. It's the basic system functionality and the open source drivers for practically all hardware bundled together in a small package. That package is constantly updated with new drivers for new hardware and new features as they come out. The kernel is constantly (and very rapidly) evolving, and therefore Linux is always in motion; it's a never-ending project. The kernel is not an operating system by itself, it's just a kernel, so communities and companies build operating systems with that kernel. These are called distributions, or distros for short, and are also referred to as GNU/Linux operating systems. Generally, when you see "linux" without a capital L, a GNU/Linux operating system is meant in a generic way (the universe of all distros rather than one specific distro), and when you see "Linux" with a capital L, the Linux kernel is meant.

GNU/Linux distros are very well optimized as they are; some are optimized towards particular use case scenarios, but the differences in visible performance between the major distros are small for the same kernel version. Older kernels usually perform worse and have fewer features, which is to be expected, and there may be differences between desktop environments, but as a whole the difference is very small. Some distros are generally marked as "faster" than others because they are "minimal" distros, where user comfort is sacrificed for a smaller footprint that delivers a bit more overall performance. Arch, Gentoo, Slackware and Fedora are different types of minimal distros. Mageia/ROSA and Debian are less minimal, but still not "bloated", while OpenSuSE is a pretty full-featured distro, but by no means bloated or slow. Typical bloated distros are things like Ubuntu with Unity, which is ridiculously bloated, Mint+Cinnamon, or a number of respin distros by smaller communities (Elementary OS, Zorin OS, etc.).

What you can do with Linux that you can't do with Windows is a more relevant question, but the answer is so much that it's not really possible to list it... just imagine that you can actually use the hardware you paid for, instead of the sterile clickfest of a Windows software console. There is an incredible wealth of open source software to use on Linux, and most of it is a hell of a lot more potent than closed source console applications. Linux just works in a different way; once you flip the switch to thinking open source and Linux, you don't go back, and you don't miss anything. In fact, once you are experienced in open source, you'll really see closed source for the bloated, dysfunctional mess it is.

What about the Windows console games? Well, I think it's best to run them in a Windows virtual machine with hardware passthrough, for two reasons: 1. it's safer than installing closed source software on your Linux host, and 2. the performance can be better than running the same software on a bare metal Windows install, which is pretty much out of the question for security reasons anyway.

You can install Windows in a kvm box (open source), and with virt-manager, you can easily bind the PCI slot that has your GeForce card to the Windows virtual machine. If you do it that way, the linux host won't look at that GPU card, and will use another card (hence my advice to add an AMD card, more on that later) for linux. You can then switch between the screen of the Windows guest and the linux host on your monitor (by using the input selector), and use the Windows GeForce drivers in Windows, with direct access to the GPU card. For a number of reasons (which have been explained before on the forum), Windows will perform better in the kvm container than on a bare metal install, so you'll get a nice boost in gaming performance, and the added benefit of running Windows behind the NAT-firewall of your linux host, which solves most of the security problems of Windows. Another advantage is that you can easily "snapshot" your Windows box from within linux, so if something breaks again in Windows, you can literally reset it to working condition in a matter of seconds. By going for an AMD card for your linux host, you have the added benefit of being able to use an open source driver that performs pretty well, and the added benefit of using the vastly more powerful compute acceleration of AMD cards in OpenCL optimized applications.
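The virt-manager step described above boils down to attaching a PCI hostdev device to the guest. In libvirt's domain XML, the binding looks roughly like this (the bus/slot address below is an example only; find your GeForce card's real address with lspci):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With managed='yes', libvirt detaches the card from the host driver when the guest starts and reattaches it when the guest stops.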

As to the partitioning, that's very simple: unlike Windows, Linux has high performance, non-fragmenting, reliable filesystems that have no trouble with drives larger than 2 TB, so you can simply use your SSD for root and swap, and configure your 3 TB HDD as the home partition. (That "2 TB usable" limit you're hitting on the HDD comes from the old MBR partition table; with a GPT partition table, Linux can use all 3 TB.) I would recommend using LVM for the home partition, as it will allow you to make your home partition "portable" between Linux distros, and you can add storage without repartitioning. You should also add the "discard" and "noatime" mount options to your root partition to optimize SSD performance. SSDs just fly in Linux; because of the much more modern filesystems, the performance is much higher than in Windows. On a 120 GB SSD you can easily pack several Linux installs, all configured to use the same home partition on the HDD, so you can boot into whichever one you like.
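A sketch of what that layout could look like in /etc/fstab (the UUIDs and the LVM volume name are placeholders; your installer writes the real ones, and you can check them with blkid):

```
# root on the SSD, with SSD-friendly mount options
UUID=aaaaaaaa-root    /      ext4  defaults,noatime,discard  0  1
# swap on the SSD
UUID=bbbbbbbb-swap    none   swap  sw                        0  0
# home on the 3 TB HDD, as an LVM logical volume
/dev/mapper/vg0-home  /home  ext4  defaults,noatime          0  2
```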

Linux is a big development environment, you don't have to know any CLI commands or have any programming skills, but linux will invite you to learn them.

Modern Linux distros use GRUB2 instead of legacy GRUB as the bootloader. Modern distros also allow you to either write GRUB2's boot code to the MBR of a disk, with its files on a separate small boot partition (often around 500 MB), or to keep everything in the root partition. The difference is that when you're running multiple Linux distros at the same time, each one can keep its own bootloader with its own functionality if you don't write them all to the MBR. The Windows bootloader doesn't recognize Linux, but GRUB2 can chain-load Windows just fine, so if you have Windows on your system now, expect the Linux install to replace the boot code in the MBR with GRUB2, which will then offer both systems in its boot menu.
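As a concrete sketch, here are the usual GRUB2 maintenance commands on a Debian/Ubuntu-family install (the device name is an example; the commands that touch the disk are shown as comments because they modify the boot sector):

```shell
# Write GRUB2's boot code to the first disk and rebuild the menu:
#   sudo grub-install /dev/sda
#   sudo update-grub      # runs os-prober, which adds Windows to the menu
# A harmless check: does this machine already have a GRUB2 menu?
ls /boot/grub/grub.cfg 2>/dev/null || echo "no grub.cfg found on this machine"
```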

Wubi is a contraption from hell, it's a quick way to fuck up your system, don't use it!

Desktop environments are a matter of preference. Unlike Windows, where the GUI of the operating system is part of the core system, which exposes the entire system to security flaws, in Linux the system shells, or desktop environments, use a display server to interact with the system and run as applications. The most used DEs are Gnome, KDE and XFCE. All of them have a number of "native" applications that integrate with the DE, groupware integration, etc. XFCE and KDE are more customizable; Gnome is like OSX, with the focus on the highest degree of integrated functionality and ease of use; KDE and XFCE are whatever you want them to be, with XFCE starting out more like Gnome 2 used to be (that role is now filled by MATE), and KDE being what Vista tried to be but could never accomplish. You can install any DE or window manager on any distro, and there are a lot more DEs and WMs out there; just try out different configurations. There are no two Linux users in the world with the same configuration. It's not like Windows (or Ubuntu+Unity, or SteamOS in the shame corner of the Linux world): you don't have to use what is prescribed to you. The sky is the limit, you can do whatever you want, and you can make your computer work for you instead of you working for your computer.
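Trying a different DE is just a package install; afterwards you pick the session from the login screen. A Debian/Ubuntu-family sketch (package names vary per distro; the installs are shown as comments because they need root):

```shell
#   sudo apt-get install xfce4             # XFCE
#   sudo apt-get install kubuntu-desktop   # KDE (Ubuntu's metapackage)
#   sudo apt-get install gnome-shell       # GNOME Shell
# The currently running DE, if any, is usually visible in this variable:
echo "current session: ${XDG_CURRENT_DESKTOP:-unknown}"
```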

Linux has a huge portfolio of tools for hardware integration; for cooling systems, for instance, you can program just about anything. Fan controllers with their own microcontroller, programmed from within Linux, are popular; most dedicated cooling microcontrollers come with easy-to-use native Linux tools that allow simple GUI configuration, and they are cheap. However, most systems don't require additional controllers: most modern boards have enough fan headers, which can easily be configured in the board's BIOS. But you can if you want to.
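For on-board fan headers and temperature sensors, the usual starting point is lm-sensors plus fancontrol (Debian-family package names; the root-requiring commands are shown as comments):

```shell
#   sudo apt-get install lm-sensors fancontrol
#   sudo sensors-detect   # probe the board for sensor chips (answer the prompts)
#   sensors               # print temperatures, voltages and fan speeds
#   sudo pwmconfig        # interactively generate /etc/fancontrol
# Harmless check: is the sensors tool already installed here?
command -v sensors >/dev/null && sensors || echo "lm-sensors not installed"
```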

Look through the forum for further information, most of your questions have been dealt with. And then it's just a matter of freeing your mind of console software, flipping the switch, and having fun. Go onto irc channels and forums of distros and communities, open source devs are directly accessible via those channels, and the open source community is generally very friendly and helpful. Just jump into the deep and discover a world of wonders!

 

I hope my answers weren't crap.

Hey Guys, I just wanted to say thanks for the responses! I haven't had a chance to read through all of them yet, and it looks like I'm going to need to do some external research/reading before I can respond with confidence. As such, I'll respond when I can, and I do really appreciate the help, this has definitely given me a great first-stepping stone to work with!

 

Cheers!

Sorry to kind of hijack the thread, but I am in mostly the same boat as the OP (3820 with 16 GB RAM), and this is directed to Zoltan. I have been really eyeing up the thought of just running Windows in a VM under Linux since you suggested it in a post a week or so ago. My question is mostly about where I can learn the specifics of the setup. I have two 7950s running in CrossFire and four drives (one 128 GB SSD, one 256 GB SSD and two 3 TB HDDs), and I am curious as to the best way to implement that. For instance, I plan on converting the larger SSD to a pure Linux install (probably starting out with Antergos). How would the virtual disk space work for the Windows VM? Ideally I would like to run it off the 128 GB SSD, but additionally I would like at least one of the HDDs to be usable by Windows, mostly for Steam storage. Can this be set up as another virtual disk (that is, a specific space on that drive designated for the Windows VM to use)? The two HDDs are currently formatted as NTFS; does that make any difference in the VM's ability to mount and use them? Also, is there a robust way to keep these disks as NTFS and use them under Linux for storage (native read/write with little concern about data corruption), or should I plan on swapping the file systems immediately?

On the graphics card side, my main question is about the ability to use both cards in each environment. I take it that both cards should be usable under the Windows VM, is that correct? Also, are there any issues or concerns with CrossFire setups under Linux? (I can see how, if the VM system is robust enough, I would not really care about being able to use both cards to game in Linux, but are there issues with power management, like the ability to completely turn off one card when not running a CrossFire profile?)

Like I said before, I am really interested in jumping straight into Linux; I just want to work out the specifics of what I should anticipate with running the Windows VM first.

Give this a read...

https://teksyndicate.com/forum/linux/what-if-i-want-everything/157480

On topic: Fedora and OpenSuSE have the greatest money-making potential and are rather user friendly. Gentoo, Slackware and Arch will teach you the most about Linux and operating systems in general. The Ubuntu-based distros may be bloated, but they take the easy-button approach.

On installing: if you choose to dual boot over virtualization, make sure you install Windows first, or just get a cheap 32 or 60 GB SSD, since Linux doesn't require much space. Windows will screw up your bootloader if you don't install it first.

Off topic: Zoltan, I like what you're saying. Do you have benchmarks for Windows on KVM vs Xen vs bare metal, etc.? Because you've made me excited.

My next build will be FOSS Arch or Gentoo with VMed Windows (games), plus another distro with proprietary software (games and web browsing).

Also, has AMD got VGA passthrough working yet on their CPUs?

The performance depends on a lot of factors. Kvm is evolving; if you're running a bleeding edge distro, kvm is definitely the best solution, for it has the most performance benefit potential, but there is variance: sometimes a new kernel comes out and you'll lose 2 fps again, then gain 3-5 fps with the next kernel, etc. If you're not running a bleeding edge distro, commercial solutions usually perform better, either Xen (which performs really well) or ESXi (which is pretty common in enterprise applications and has been stable for a really long time), and the benefit of these is that you can paravirtualize pretty decent graphics drivers even without passthrough, enough for instance for Adobe stuff or light gaming. There are just so many possibilities within the realm of virtualization. I'm not a gamer, nor do I spend that much time tinkering with computers every day, so I don't experiment much. I do have VGA passthrough with AMD cards in Fedora. (With nVidia cards there is no use, even if it could be done, which it can't yet, because the open source driver is so bad that you'd sacrifice all performance in Linux. For VGA passthrough you have to use open source drivers on the Linux host, because when you bind/unbind the VGA card mid-session, which is what VGA passthrough does, you can't use a driver with proprietary kernel modules, as your session would crash.) VGA passthrough is pretty complicated and requires AMD graphics cards and a particular arrangement of which card goes where; too much to explain, and probably, even with a perfect explanation, it will not work anyway, because every system seems to react differently. VGA passthrough only works with open source virtualization solutions that require no proprietary kernel modules, so kvm/qemu is pretty much the best solution.
PCI passthrough, on the other hand, just requires two graphics cards: either an iGPU and a discrete GPU (like on an Intel consumer class processor or an AMD APU), or two discrete GPUs. When you bind a PCI slot (and the card that sits in it) to a guest appliance, it will not be available to the host system, but it's really easy to configure, and no CLI is needed at all. It works perfectly fine with kvm and with proprietary solutions, and works pretty fast on all kinds of systems with Xen. I know quite a few competitive gamers that started using hardware virtualization about 4 years ago to get that extra bit of performance for a competitive edge, and most of them use PCI passthrough via kvm/qemu, but on a highly optimized system, with loads of performance tuning both in the host and the guest. They get stupid performance gains in some games. In hugely popular games like War Thunder, they multiply network and graphics performance by running Windows in a container. That game was released in open beta for the Russian market a full year before the worldwide release, and a lot of Russian online gamers have been using virtualization to boost performance for years. It's really starting to spread across Europe now like wildfire, and it will become much more important once HSA sets in next year, because Windows can't do efficient HSA computing at all, so you need the Linux host to help Windows use the system in an efficient way to play Windows games. You get an instant performance boost because you can strip Windows bare: you only need the parts necessary to run the game and the game's DRM (since the real daily use functionality of the computer no longer depends on Windows; Linux is the primary OS), you can delete all the bloatware that slows the system down, and by running the Windows guest behind the Linux host's netfilter, you gain a lot of network performance (and safety).
It works with CrossFire (bridgeless); it doesn't always work with SLI (SLI requires the bridge, which screws up PCI management, and SLI requires proprietary drivers, which cannot be used on the host OS because they taint the kernel). Older GeForce cards that work pretty well with the nouveau driver can be used as host GPUs without problems, though; things like the 9500GT/9800GTX etc. work very well for that.