Securing Steam and Steam games

So, I migrated from Ubuntu 13.04 to Debian testing and now live without proprietary software on the system (well, except the radeon firmware), but that means no Steam. I don't trust Steam or the games any more than other proprietary software, but I really want them, so I make an exception for Steam and Steam games.

But I don't trust them, like I said, so I want to make sure they don't have access to any data on my system. That includes files and services (like D-Bus: GNOME Keyring, ...).

I know of SELinux and AppArmor. There are two possible problems I can see:

  1. games write their configs to all kinds of weird places
  2. nothing prevents Steam/games from accessing services over D-Bus etc.

There is also chroot but I don't know if it makes any sense for this.


Do you have any security measures for steam? How do they work? What would you do?

The cool thing is that the Steam client is actually a hacked-up old Firefox version (v9 or something), so AppArmor actually works on it. I don't use AppArmor myself, but I do use SELinux, and it's in enforcing mode.

On Manjaro I use Tomoyo (and sometimes Akari) for the moment. Tomoyo has a self-learning mode and is almost as easy to use as SELinux, but it is much less verbose; verbosity is still the biggest downside of running SELinux in enforcing mode.

I have also made a separate group for Steam, so that I can tweak the permissions. This is probably the easiest and most efficient security fix for things like this.

I do have to say that I don't have Steam on my production machines anymore.

Just to make it clear: You're using SELinux with the reference policy and your own group for Steam.

What exactly does it do then? Does it prevent file access outside of /home, and does it prevent access to my documents, GPG keys, or the keyring? And most importantly, how do you (well, SELinux) handle the games started by Steam?

I don't use non-OSS on my Fedora machines anymore, but I do still use Steam on the Manjaro install on my gaming rig, which also has my Windows gaming install and nothing else; it's only for games. Frankly, proprietary software and DRM turn any PC into a gaming console, so yeah, I have a very expensive gaming console that used to be a PC lol...

But I do the following:

1. Make a steam group that only has permissions on the /home/.dirs where the games and the Steam client are, and the /home dir where the Steam and game user data is located. This prevents any access to files outside of that permission group. I also don't give the user group 7-perms (rwx) on the Steam/game dirs and .dirs, only 5-perms (r-x). That prevents possible behind-the-back or incidental safety issues.
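One way to read step 1 as commands. The group name and directory paths here are my assumptions (adjust to wherever your client and library actually live); the idea is the same: a dedicated group owns the Steam dirs, and the group only gets read/execute, not write.

```shell
# Hypothetical locations of the Steam client and its data
STEAM_DIRS="$HOME/.steam $HOME/.local/share/Steam"

# Dedicated group for everything Steam is allowed to touch
sudo groupadd steam

# Hand the Steam directories over to that group...
sudo chgrp -R steam $STEAM_DIRS

# ...and grant the group read/execute (the "5-perms") instead of
# full rwx (the "7-perms"), so processes running under the group
# can't modify files behind your back. Capital X only sets the
# execute bit on directories and already-executable files.
sudo chmod -R u=rwX,g=rX,o= $STEAM_DIRS
```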

2. Tomoyo 2.x will learn what system access Steam and the games require, and you can manually enforce denial of everything else, which is what I do: I only allow the access whose denial causes stoppage, which is basically just the system ID data retrieval and some basic networking functions. Even with a bunch of denial errors for everything else, Steam and the games still seem to work just fine. Tomoyo doesn't weigh as heavily on the system as SELinux, though it's also not quite as safe. An alternative would be Akari, which has automatic enforcement like a real MAC (so like SELinux or Tomoyo 1.x), but doesn't require recompiling the kernel on distros that don't have the Tomoyo or SELinux extensions enabled.
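A rough sketch of that learn-then-enforce cycle with the tomoyo-tools userland, assuming TOMOYO 2.x is active in the kernel. The domain name for Steam is a guess; your actual domain strings will differ (check them in tomoyo-editpolicy first), and profile numbers follow the conventional layout (1 = learning, 3 = enforcing).

```shell
# Put the Steam domain into learning mode, then play for a while
# so TOMOYO can record what the client and games actually access
sudo tomoyo-setprofile 1 '<kernel> /usr/bin/steam'

# Review what was learned and prune entries you don't want to allow
sudo tomoyo-editpolicy

# Switch the domain to enforcing: everything not in the policy is denied
sudo tomoyo-setprofile 3 '<kernel> /usr/bin/steam'

# Write the in-kernel policy to disk so it survives a reboot
sudo tomoyo-savepolicy
```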

3. I set firewalld to the blocking profile, which blocks all services, including things like IPP, and Steam seems to work just fine, so it's probably not as nefarious as it could be. But just to be on the safe side...

4. Steam uses only 32-bit libs and deps, and few noarch deps, so it's easy to compartmentalize without breaking stuff. Steam does have access to your Firefox bookmarks, because it needs to store data in order to function and partly uses the same data location as regular Firefox. That was never a problem for me, because I don't use bookmarks, cookies, or any other local browser storage, apart from a self-destructing, very small HTML5 storage that is required. I use Zotero, which installs as a standalone FOSS package on Linux and from there installs LibreOffice and Firefox extensions. I use it primarily for research references, but also for bookmarks, because it syncs to a trusted non-commercial server, and with Zandy and Scan to Zotero I have all the functionality securely on my Android phone without gapps (both apps are available through F-Droid). Zotero also saves a snapshot of the content you add, so it's way more efficient than bookmarking. So that's my solution.

5. In general, I hate the fact that perfectly safe Linux systems are being perverted by proprietary malware, which is why I'm not supporting it any longer on my daily-use machines. US companies are discovering now what Tencent in China discovered almost a decade ago: you can go very far in perverting open source for commercial benefit, and that's what's happening with Linux now if the users aren't careful. In fact, I think users are more careful than Valve and others expected, and they will need to bring out their own console-like Linux distro (maybe the future of Ubuntu?) to get the performance benefit of Linux without the security benefit. I don't think it's up to the users to adapt to the industry; I think it's up to the industry to adapt to the users. So no more proprietary software on my machines, except on my gaming rig, which I consider consolized, in Linux and in Windows, and no longer consider a PC.

Finally got around to testing some stuff. Currently I'm running Steam in a chroot, which seems to work perfectly fine.


Pros:

  • steam and games are completely separated from the system
  • the files can be moved as a whole pretty simply
  • steam and games can't access my data
  • you can adjust the chroot for gaming without affecting the system


Cons:

  • it has access to D-Bus, which can be a security risk if you use a keyring or other services via D-Bus
  • the network is not regulated
  • you have to keep track of another system

If you have some sort of firewall to block outgoing connections by default, and keep keyrings etc. locked, it should be an acceptable security mechanism.
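A minimal sketch of such a chroot on a Debian host. The target path, mirror, and the unprivileged steamuser account are my assumptions; note that bind-mounting /dev and sharing the X socket is exactly where the D-Bus/keyring caveat from the cons list comes in.

```shell
# Build a minimal 32-bit Debian tree for Steam
sudo debootstrap --arch=i386 wheezy /srv/steam-chroot http://ftp.debian.org/debian

# Bind-mount the pseudo-filesystems the client needs to run
for fs in proc sys dev dev/pts; do
    sudo mount --bind /$fs /srv/steam-chroot/$fs
done

# Share the host's X socket so Steam can draw windows
sudo mount --bind /tmp/.X11-unix /srv/steam-chroot/tmp/.X11-unix

# Enter the chroot as an unprivileged user and start the client
# (assumes steamuser exists inside the chroot and Steam is installed there)
sudo chroot /srv/steam-chroot su - steamuser -c 'DISPLAY=:0 steam'
```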

Why don't you run it in an LXC instead of a chroot, and just block the LXC in firewalld? It has to be able to phone home, though, or it will lock your games, so you have to allow outgoing connections and their established/related reply packets.
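That "established/related" allowance can be sketched as stateful iptables rules on the host. lxcbr0 is the default LXC bridge name and an assumption here; adjust to your setup.

```shell
# Let the container open outbound connections...
sudo iptables -A FORWARD -i lxcbr0 -j ACCEPT

# ...and receive only the replies to connections it initiated
sudo iptables -A FORWARD -o lxcbr0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Drop anything unsolicited heading into the container
sudo iptables -A FORWARD -o lxcbr0 -j DROP
```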

Because I didn't know of LXC. Sounds awesome. I'll give it a try.

Security issues with Steam and Linux... how much more paranoid can you get?

Why, it's commonly known that the Steam client sends your network config (including potential vulnerabilities of your system), your real name, and your favorites in plain text every time you connect, together with all your Steam data, so anyone with a packet sniffer can view it and use it against you. Many people have had their Steam accounts hijacked because of that. It's also known that it's common practice for ISPs to snoop on TCP ports 27xxx (the Steam ports), to throttle activity on them, and to mine personal and network data via them. This has been known for years.

Would you voluntarily publish your full network config, your preferences, your game server activity, and your real name in plain text on the internet for anyone to see? No? Then secure your Steam client, don't use your real name in your Steam account, and only buy games with a cash card; do not link it to your credit card. I use a packet filter on the Steam network communication and paysafecards myself, because I'm well-informed, not because I'm paranoid. As much as I respect Gabe Newell, Valve is just a commercial company, mate. Hate to break the news, but they don't care about the security of their users any more than any other company; they just want your money lol. You have to protect yourself.
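Whatever you make of the claims above, a per-application packet filter like the one described is easy to sketch with iptables' owner match, assuming Steam runs under a dedicated group (here "steam", an assumption, as are the exact port ranges; check Valve's current port documentation).

```shell
# Allow the steam group its game traffic on the well-known Steam UDP range...
sudo iptables -A OUTPUT -m owner --gid-owner steam -p udp --dport 27000:27100 -j ACCEPT

# ...plus HTTP/HTTPS for the store and the common TCP game/server ports
sudo iptables -A OUTPUT -m owner --gid-owner steam -p tcp \
    -m multiport --dports 80,443,27015:27050 -j ACCEPT

# Reject everything else the steam group tries to send
sudo iptables -A OUTPUT -m owner --gid-owner steam -j REJECT
```

The owner match only works in the OUTPUT chain, which is exactly what you want here: it filters locally generated traffic by the group it runs under.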

I looked into LXC and successfully created a container, but I can't figure out how applications in the container can access the host's X server without something like VNC.


Like any container: ssh, VNC, vt:tty, and vt:x.

If the Steam client starts in an LXC and you didn't have to use ssh or VNC, you're using vt:x, which is the default.

Well, the problem is that Steam doesn't start, like any other X application, because I don't have an X server running in the container. I have trouble understanding how it's supposed to work. Does a client in the container connect to the container's X server, which then gets forwarded to the host's X server?

Actually, as soon as you have an X server in the container (Steam needs X; how did you manage to install the Steam client in an LXC without having an X server? Didn't the deps resolve?), its X I/O will "translate" to a host VT.
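For completeness: the way setups like steam-lxc usually solve this is not a second X server at all, but sharing the host's X socket with the container. A sketch, with the container name, config path, and display number as assumptions:

```shell
# In the container config (e.g. /var/lib/lxc/steam/config), bind-mount
# the host's X socket directory into the container by adding:
#
#   lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir 0 0

# On the host, allow local (non-network) clients to talk to the X server
xhost +local:

# Inside the container, point X clients at the host's display and start Steam
export DISPLAY=:0
steam
```

The trade-off is that anything in the container can then talk to your X server, so this weakens the isolation in the same way the chroot's D-Bus access does.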

If you want to know exactly how it's implemented, take a look at:

If you don't mind running Ubuntu in the LXC, you can even use that to have everything pretty much preconfigured. It doesn't work with the 3.11 kernel though, for many reasons; well, it works if you have an AMD GPU and don't have VirtualBox installed. Kernel 3.11 is a world of hurt if you want to use the proprietary nVidia driver (and you do, because you game, and nouveau sucks for gaming); nVidia is so far behind on driver development that it's ugly. Oracle is a similar story. There are patches, but I would not recommend them; they might break dracut, and that's not a good thing. So stay with the 3.4/3.8 LTS kernels if you have an nVidia card and want your LXC to run (nVidia doesn't really work on 3.10 either; 319 generally works fine below 3.10, 325 is a mess, and 304 is slower and has fewer features but works on 3.10).

Is your system AMD-V/Vi or VT-x/d compatible? If it is, I would propose running a KVM guest instead: even more secure, and even less overhead...
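If you're unsure whether your hardware qualifies, two quick checks (output is hardware-dependent, and the CPU flag can still be disabled in the BIOS even when present):

```shell
# vmx = Intel VT-x, svm = AMD-V; a count above zero means the CPU
# advertises hardware virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo

# VT-d / AMD-Vi (the IOMMU, needed for device passthrough) shows up
# in the kernel log when it is enabled
dmesg | grep -i -e DMAR -e IOMMU
```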

Steam apparently doesn't have xorg as a dependency. I'd consider that a bug.

I installed it manually, but I can't figure out how to start it... the steam-lxc script helps a lot. I might look into porting it to Debian (it requires python-lxc).

I'm still on 3.10, with r600g and VT-d support and without VirtualBox, so that shouldn't be a problem.

I thought KVM was virtualization? I would have to do VGA passthrough, which isn't supported by my motherboard.

Yup, the script is actually quite fun; the entire thing is Python-scriptable. I use it to integrate into Python job scripting. For some temporary functions like Subversion (which still has a potential security leak, so I only fire it up when I actually use it), I use an LXC that is started by the main job server. A very easy, low-overhead, secure solution. I use SVN backups because it respects the individuality of the different users on a system without causing huge storage overhead; it works fast and reliably, and causes almost no system performance or network overhead while active.

Took me way too long to get python3-lxc running. I installed it from git to /usr/local and had to set LD_LIBRARY_PATH=/usr/local/lib and PYTHONPATH=/usr/local/lib/python3/dist-packages/. Then it complained about the argument protocol="ipv4" for container.get_ips, which no longer seems to be used in the git version of python3-lxc, and now it's in an infinite loop because it "Failed to get the container's ip."

Enough stupid stuff for one day.

I'm not entirely sure you can bridge your host connection into an LXC. Did you specify the network adapter on the CLI when you started the LXC? Maybe it's the new adapter naming scheme that prevents the LXC from identifying the connector. It would have been easier if your system were IOMMU-enabled; then that problem wouldn't exist. I can't understand Intel's and Asus' position on this: blocking IOMMU on k-CPUs and Z-chipsets, and not using a simple IOMMU-capable PCIe controller on motherboards, is as stupid as Intel refusing to pull Mir patches or nVidia refusing to provide kernel headers. It's going to be such an issue when all those gamers who want to play the next-gen Steam games on IOMMU-unsupported hardware run into graphics and network issues on expensive hardware, whereas it all works on cheap FM2 boards and cheap B/H-chipset boards. Those who opted for the X-chipset socket-2011 platform and don't have an Asus mobo with a crap PCIe controller, and those who bought generic cheaper hardware or AMD systems, are really lucky. That Intel and Asus keep blocking IOMMU on Haswell platforms, even though Haswell has some great hardware virtualization functions in it, is just criminal.

Closed-source corporate politics... yuck! I had to remove the nVidia xorg drivers from my machines with the 3.11 kernel, and so many months down the line, nVidia still hasn't produced a functioning Linux driver that allows anything beyond runlevel 3. If I run it on 3.10, I get 6-bit colour in a KVM guest because their stupid shit isn't capable of more, even on a proprietary driver, and that's no worthy way of running Crysis 3. I hate when hardware manufacturers make stupid choices, and when I make stupid choices myself, for that matter. If I'm honest, I saw this coming at the end of last year, when I bought a GTX 680 for my main gaming machine in the naive hope that nVidia would shape up. I should have gotten a 7970; I'd be golden right now, instead of being stuck with a 680 running at 122 MHz tops on nouveau. I've decided I'm flogging the GTX 680 as soon as the new AMD GPUs are available. I'm so tired of dealing with uncooperative, shortsighted hardware manufacturers. I don't think users should have to spend their time performing major surgery to extract functioning proprietary drivers out of some hardware manufacturer's tight arse; it's time to bring out the hose and the syringe, and purge them of their technology-hindering constipation.

If there is one single feature that all mobos and chipsets and CPUs should have had for years, it's hardware virtualization.

An alternative would be to plug a cheap network adapter into a USB port and use that for your LXC. Maybe you have one of those lying around. It would be a quick and easy fix.
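That setup can be expressed as a container config fragment (pre-2.0 LXC key names; the config path and the assumption that the USB NIC shows up as eth1 are mine). The phys type hands the interface exclusively to the container, so no bridging of the host's main connection is needed:

```
# /var/lib/lxc/steam/config (excerpt)
lxc.network.type  = phys
lxc.network.link  = eth1
lxc.network.flags = up
```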

On the other hand, now that SteamOS is coming out, you might as well change that motherboard, because you'll probably want to run SteamOS in KVM next to your regular, secure Linux distro.

I modified the script a little bit to use libvirt and now it seems to work. And I might have found a bug in python3-lxc regarding set_config_item.

I totally understand your feelings when it comes to mobos. I just want a good mobo with coreboot and virtualization, but there isn't one, at least not for PCs. The server mobos seem to be much, much better and not crippled by UEFI...

Well, everything seems to kind of work now.

Do you by any chance know whether you need two graphics cards to use VGA passthrough? Just from the name, it sounds like the host OS doesn't have any control and thus can't display anything...

Sad news, I'm afraid. VGA passthrough only works with one card on a KVM-enabled system, so you need VT-d or AMD-Vi to make it work. And even on a dual-card system, it won't work on Asus mobos, because Asus uses a PCIe controller that doesn't connect to the CPU directly; for VGA passthrough to work, even with a dedicated GPU card, if you have for instance a Z-chipset that blocks VT-d, you still need a PCIe controller that allows a direct link. There are workarounds for this, but they are not easy. It's not just a problem for virtualization; it's even a problem for a bare-metal Windows gaming machine: a B-/H-chipset mobo will offer up to 30% better framerates in games than an Asus Z-chipset mobo, because of that PCIe controller. A cheap ASRock B75-chipset mobo outperforms an Asus Z77 RoG mobo in any game on any platform.

Coreboot is limited to older standards, but there is a Gigabyte H-chipset board that runs coreboot with Sandy Bridge CPUs and supports VT-d. And yes, the main reason I buy server and WS boards (mostly from Gigabyte and SuperMicro) at the moment is that they have a coreboot-compatible legacy BIOS and an open-source payload that is 100% functional. That means much less security risk, but also a much better user interface (UEFI is such a mess that it's easier to find a needle in a haystack than a particular setting in a UEFI BIOS), much better compatibility (a custom payload opens new possibilities, especially for system control and management hardware solutions), ATA-passthrough hardware encryption (kinda odd that that doesn't work so well with UEFIs anymore lolz), and no preloaded BIOS payload. UEFI has a pretty large memory to store its own payload, which is rather scary; at least with a legacy BIOS you can control the payload, because it's stored on HDD/SSD/whatever user-controllable and formattable storage, and a legacy BIOS chip doesn't have enough internal memory to do critical damage or load critical malware that could make spyware activity undetectable.

As for libvirt: yeah, oddly enough I seem to need that for KVM machines too, which doesn't make much sense, except for the fact that it's a dep of virt-manager, which is the only simple GUI hypervisor front-end with a completely GPL license that can handle KVM; GNOME Boxes doesn't seem to be able to, which is a shame. I uninstalled the VMware Player I had on my system and recompiled the kernel without the VMware headers, and suddenly I had a lot fewer problems (VMware Player taints the kernel anyway, so good riddance; with nvidia-xorg not being compatible with 3.11 anyway, I've now had an untainted kernel for quite some time, and it had been a long time since I'd had that on a desktop/laptop PC).

Most budget B-/H-chipset boards, even for Haswell, still have a legacy BIOS though, it's not as good as coreboot, but it's not as dangerous as UEFI either, and as long as you don't have a k-CPU, those boards do have VT-x and VT-d. Also, almost all AMD systems have AMD-V and AMD-Vi, even cheap FM2 systems, and all AMD CPUs after Deneb are fully hardware virtualization enabled.

With SteamOS, prices for used Phenom II X4 and X6 CPUs will probably go up lol. More burst performance than FX, better load balancing because of individual L3 cache and shorter pipelining. AMD should really think about a Phenom III X8 imo, now that would be a beast, especially in conjunction with the new Southern Islands cards, but it's not going to happen lol.