Wendell: Should I just stop using VMs and use Docker instead?

Hello,

This is a question especially for Wendell, as he is the most competent with these Linux-related questions.

My question is:

Should I stop using virtual machines on my Linux box and only use Docker for my mildly secure needs?

You know, when I do not need to be ultra-secure, I feel that just making a container for all my gaming stuff and web browser usage should be plenty, right?

Have you guys any experience with Docker? Can it be used to run an entire OS reliably and as close to the source as possible, for example using the package manager and the packages as they were intended?
I am especially talking about the Nvidia Optimus technology (Bumblebee is the name of the program): can it be run in the environment and not in the host OS?

Are there some major problems with Docker? Why doesn't everyone use it by now?

Whew, that's a whole bunch of questions, thank you in advance for your answers ;)

Right now, there isn't that much you can do with Docker or similar process virtualization and sandboxing services.

You can only use it well enough not to puke if you use bleeding-edge RPM distros, in particular Fedora Rawhide and OpenSuSE Factory. The technology is not considered secure at this point in time; it's largely still in a beta state, and not all security bugs have been ironed out, so I would definitely not recommend it for a production environment. But it's clear that those that want part of the action have to get on board right now, because this is big.

The new kernel 3.18 that's just been released adds overlay filesystems and KMS AMD drivers to the mix, and that's a good step forward. But I think that it won't be until AMD releases hybrid open source hardware-based APUs, giving x86 access to the same open source hardware advantage as ARM devices, that the technology will really take off, because that will open the door to the necessary application support. In the first place, it will benefit Google enormously, because everyone will first use the technology through Chrome/Chromium, before Docker or other enterprise-grade implementations.

Docker and similar technologies are definitely the future though. I think this is going to take the world by storm. I personally think that 18 months from now, people will have forgotten what computing was like before Docker.

I half agree with you about the "18 months from now" prediction. I think on the server side that will be true, but on the consumer side I don't know that Docker will really take off. It seems more like a server/enterprise thing than anything.

Well, on the server side there is the service, but it's about application virtualization and sandboxing, and for the moment, the most advanced implementations of the technology are Sandstorm and Chromium.

Chromium (and its derivative Chrome, for people on commercial software consoles that don't know how to compile their own browser) is used by everyone, and it's hugely successful. A lot of consumers use the Android app features in the Chromium browser, which run in a kernel-based application sandbox. People are already hooked on this technology without even realizing it.

Virtualization is arguably the most important innovation of the last decade. The specific application varies, but the technology is the same. Of course, full-featured applications are Linux-only; it will be years before commercial software consoles catch up. Android devices will be running sandboxes with DirectX applications long before commercial software consoles will be running Android applications natively in a controlled and secure fashion...

From your answers, it seems Docker is not ready yet...

But in the meantime, is there a reliable way to sandbox an OS/application without losing too much performance? (I only know chroot, VMs and Docker, none of which suits what I want to do (yet)...)

You don't lose much performance with KVM; in fact, you lose practically none, and that's full system virtualization with the possibility of passthrough, for instance of network devices or graphics cards.
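For reference, a minimal sketch of what passing a GPU through to a KVM guest can look like. Everything here is a placeholder example, not a tested recipe: the vendor:device IDs, PCI addresses, and disk image name all depend on your system (use `lspci -nn` to find your own), and distros differ in how they set up IOMMU support and the vfio modules.

```shell
# Bind the GPU to the vfio-pci stub driver at boot. The IDs are examples --
# replace them with the vendor:device pairs from `lspci -nn` for your card.
# Kernel command line: intel_iommu=on vfio-pci.ids=10de:13c2,10de:0fbb

# Launch a KVM guest with the GPU and its audio function attached.
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -cpu host \
  -smp 4 -m 8G \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1 \
  -drive file=guest.qcow2,format=qcow2,if=virtio
```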

A chroot-ish application like Docker or LXC doesn't have the same goal or application area as KVM/QEMU, the more commercial type I or type I-ish hypervisors (Xen, VMware, etc.), or any of the dozens of open source type I and type II hypervisors. You'll never be able to use Docker instead of a full virtual machine, because it serves a different purpose.

Docker is very much usable and ready for what it has to do in the enterprise world, which is the main market for Docker, but there is a world of extra possibilities for virtualization technology, some of which will be explored by Docker, some of which will only be explored by other projects.

The advantage of Docker is that it is well supported by the enterprise world. It's not the most comprehensive or the best performing project of its kind; that is just what often happens because of marketing and politics. You want KVM, but you have to use VMware or Xen, because nobody wants to write management tools for KVM, it's too open source for their taste. You want Sandstorm, but you have to use Docker for the same reason. That is not a new problem, that is life. It's like getting drafted and being issued a Colt M4 instead of an HK416 or a SCAR: the standardized tool is not always the best; in fact, it might very well be the worst.

Just use the tool that suits your requirements best. In Linux, you're really spoiled when it comes to tools in general, and tools for virtualization in particular. You want to virtualize for games, so you need performance, right? And you have an nVidia GPU, so you don't have the option of using open source drivers for games. Those two use case conditions make KVM the best solution for you, because you can have easy PCI passthrough (and don't worry about Bumblebee, its only purpose is to save battery power, and saving battery power and gaming at the same time is not an option anyway), and because KVM lets you run a virtual machine at close to bare-metal speed. Only KVM/QEMU offers a performance level that is way higher than any other type I hypervisor or virtual environment application. KVM on a bleeding-edge RPM distro is at least 20% faster than Xen, which is the runner-up in terms of performance.

If it's just for Linux-native games, and for instance the Steam client for Linux, and you can use KMS GPU drivers (which on nVidia isn't very likely), LXC is the best solution, because again, the performance is way better than anything else like it, and it works out of the box. There is no Sandstorm or Docker support for the Steam client, and there never will be. However, LXC doesn't require support by the application; it always just works. LXC uses the very same virtualization technology as Docker, and is quickly evolving towards using all the virtualization functionality offered by the latest Linux kernels, which is currently only used by Sandstorm and Chromium, but in the future will also be used by Docker and of course LXC. And LXC will probably evolve much faster than Docker, because Docker doesn't have to evolve fast: the enterprise market, which is Docker's target, is mainly still on the ancient kernel 3.2, because that's a kernel they can have in a hardened version out of the box, without actually having to pay a guy who knows how to harden a kernel, and it's the kernel used by a distro they can have configured out of the box, without having to pay a guy who actually knows how to set up and properly maintain a bleeding-edge Linux environment. So by the time the enterprise world is on kernel 3.17 and can actually use the technology that is now being used by Chromium and Sandstorm, it will be 2016/2017, so Docker still has about 18 months to catch up, and that's an eternity in open source...
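As a rough sketch of how little is involved on the LXC side. The container name, distro, and release below are arbitrary examples, and the `download` template is assumed to be available in your distro's LXC packaging:

```shell
# Create an unprivileged container from a prebuilt image, start it in the
# background, and run a command inside it.
lxc-create -n games -t download -- -d ubuntu -r trusty -a amd64
lxc-start -n games -d
lxc-attach -n games -- apt-get update
```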

Bumblebee depends on two GPU adapters: one Intel iGPU, and one nVidia mobile GPU. If you don't have those two adapters running on the same system, there is no chance of getting it to work properly, and even if you have both on the same system, most of the time it doesn't work properly... because nVidia and Intel don't always offer a problem-free experience on Linux. nVidia because they are hostile to open source, and Intel because they can't get their open source-only drivers quite right: they fired all of their (100+) graphics driver developers in the US because they weren't performing well enough, and moved the whole driver development program over to China, where they only have a very small low-budget team that has only been working on the drivers for about a year now, during which time Intel has made them do a lot of development-from-absolutely-scratch for Atom-core stuff and for OpenCL development, where Intel has a couple of years of development to catch up.


Wow, that's the best answer I've ever had in any online community I frequent :)

Linux containers are great. I am discovering them and definitely setting one up. The only bummer is that having to use KMS drivers would force me to use the nouveau drivers, which are pretty good if you consider that they're just a lot of reverse engineering, pretty bad if you are an end user :)
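If you want to check which driver your GPU is actually running before committing to a container setup, something like this works on most distros (output obviously depends on your hardware):

```shell
# Show the GPU and the kernel driver currently bound to it
# (look for the "Kernel driver in use:" line).
lspci -k | grep -A 3 -i vga

# See whether nouveau or the proprietary nvidia module is loaded.
lsmod | grep -e nouveau -e nvidia
```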

But I still have a question about KVM: if passing hardware through is supported and the performance is great, why does no one use it? What's the catch?

In my early days of wanting to game on Linux, I heard a lot of people scream whenever I made a sentence with the words "virtual machine" and "gaming" in it. So I just stuck with a second Windows partition on my machine, then I fell in love with Dwarf Fortress and wiped out the partition...


Now that I really want to play some of those new games, I am considering making some space for Windows again. Can KVM give me decent performance in games?

Running the test only on Fedora was a bit biased. I would like to see a benchmark of Xen and KVM on SUSE as well, to see if the results differ. I assume KVM may still win, but SUSE has done a lot of work with Citrix and Xen.

Note: I am a Citrix fanboy.

What do you mean nobody uses it lol?

It's been EXTREMELY popular for years. Most techies that game have been using KVM or XenServer with PCI or VGA passthrough for at least 4 years now. On a well-configured system, there is no noticeable performance difference between a bare-metal install and a PCI passthrough appliance in KVM. Some things are faster, other things are slower, but overall the performance is about the same, and with the new movement to clean up the kernel (which is starting with kernel 3.18), it will probably only get faster.

There are some tricks you can do with KVM/QEMU. For instance, the Linux version of CS:GO runs in a wrapper that artificially caps the performance of each core to about 35-40%, instead of allowing full performance. That is just one of those "feature parity" limitations to make sure the Linux version doesn't outperform the Windows version, typical for a corrupt commercial software console based entertainment company like Valve. The result is that on a Windows bare-metal install, you'd get close to 300 fps in CS:GO, whereas the Linux version would only give you about 60 fps, because CS:GO is a CPU-dependent game, not very GPU-dependent.

What you can do, however, is make CS:GO believe that it has more cores at its disposal than are actually on the system. For instance, you have a quad-core CPU, and all cores can only be used for about a third of their performance because of the way CS:GO is limited on Linux. Changing the scheduler limit in CS:GO itself would get you VAC-banned, because it's probably considered a cheat: it gives you a definite performance advantage over Windows software console users. But if you had 12 cores, each used for about 1/3 of their capacity, that would be equivalent to a fully used quad-core system. So you run a minimal Linux KVM appliance with the Steam client and CS:GO, and you set QEMU for that KVM appliance to emulate a 16-core CPU of the same family as your real CPU (very easy on AMD, can be challenging on Intel). Then you go into the preferences of CS:GO and add -threads 16 -high to the launch options. Then you will have much better performance, until Valve discovers it and limits the number of threads that can be used in the Linux version of CS:GO (that might already have been implemented, I haven't tried recently). There is no official legal or illegal cheat list for VAC, so I don't want to try changing the scheduler limitations for CS:GO Linux, because I don't trust Valve to be reasonable and not hate open source lolz... and this was just a creative way to bypass the built-in Linux sabotage. Valve will never release a list of legal/illegal cheats, because they cheat themselves by nerfing the Linux version all the time. By using QEMU, however, there is no need to modify any software, and there is no violation of any rules, no terms-of-use violations, no EULA violations, no physical or software additions that give an unfair advantage; it's just running the program on a standard x86 system, albeit an emulated one.
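The QEMU side of that trick is basically just the `-smp` and `-cpu` options. A hedged sketch, with a placeholder disk image name and arbitrary memory size:

```shell
# Present a 16-core CPU of the host's own family to the guest (-cpu host),
# so the guest scheduler spreads CS:GO's capped threads across 16 vCPUs.
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp sockets=1,cores=16,threads=1 \
  -m 4G \
  -drive file=steam-appliance.qcow2,format=qcow2,if=virtio
```

Inside the guest, you would then add `-threads 16 -high` to the CS:GO launch options as described above.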

Oh my, I've been using Linux for two years now, and I never considered KVM gaming. I want to take back my lost time! :)
I'm KVM'ing right now, it's crazy. How can we still "whine" about Linux application "compatibility" when there is stuff like KVM/QEMU! You sir just made my day/week/changed-my-computer-usage-forever...

Thank you for all these Wonderful answers :) 

My current server runs XenServer with various guest VMs.

Depends on what you use VMs for, but Docker is getting better all the time. If it continues with the current momentum, it will have taken over a lot of things in another year or two. I think everyone has covered everything else.

http://www.markshuttleworth.com/archives/1434

I think Docker is missing a few key pieces yet, but for a lot of stuff, it's really slick.
I don't know how to feel about the changes to Ubuntu Core updates.

Wendell, did you give FC21 a try? I was surprised that it came out with a less recent version of both the kernel and Docker than what is running (well) in OpenSuSE Factory at the moment, given that Red Hat has been pushing Docker like crazy. It may mean that there are still some pretty annoying problems with Docker and the kernel, but I didn't look at the mailing lists because I have no time for that.

I just got it. Giving the workstation edition a try. I do like that Linus is finally nudging distro maintainers into doing ws/server/container kernel tweaks.

This is 3.17.4, which seems reasonable imho? It may have some backports of things, not sure, but the lockup bug may have people more squeamish than they should be.

As far as I can tell, the lockup bug is just the result of fuzzing. I don't know that anyone has encountered it in the real world. I bet it is something goofy like SpeedStep scaling across cores creating some timing issues, or Intel's can't-prevent-it throttling causing weird race conditions.

I will check out Docker. Looks like the OverlayFS stuff might not be working out of the box? But I only just started, so I'm just talking out of my butt at this point lol.

Yup, you have to enable stuff with boot parameters, and for OverlayFS you need 3.18, and that's not packaged yet (it will be the day after tomorrow though, so unless you can't wait and want to compile it yourself, you'll have to wait to try it out lolz. I'm having itchy fingers, but I don't really have time to compile it right now, but maybe I'll do it anyway...).
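Once a 3.18 kernel is in place, checking for OverlayFS and pointing Docker at it looks roughly like this. Note the `docker -d --storage-driver` form is the daemon syntax from the Docker 1.x era this thread is about; it changed in later releases:

```shell
# Confirm the running kernel knows about OverlayFS (merged in 3.18).
grep overlay /proc/filesystems || echo "overlayfs not available"

# Start the Docker daemon using the overlay storage driver.
docker -d --storage-driver=overlay
```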

I'm having some lockup issues on 3.17 though, with a corresponding dmesg torrent that is over my head lol. Not on AMD, only on Intel, but also on older Intel steppings, so it's not just a problem with the latest Intel silicon. I think it's a compiler problem, and I'm going to blame it on RMS out of principle lol.