A Philosophical Introduction to Desktop Linux (aka operation rescue Linus)

A Philosophical Introduction to Linux Gaming

Edition 1.0

11-3-2021


Table of Contents

1. Introduction to Linux
    1. Kernel
        1. Drivers
    2. Userspace
2. Linux Distributions
    1. File Systems
    2. File Organization
    3. Process Management
    4. Release Cadence
3. Desktop Linux
    1. Desktop Environment
    2. Window Manager
    3. Display Server
4. Software Management
    1. Compilation vs Installation
    2. Package Managers
    3. Standalone Software
        1. Flathub
        2. AppImage
        3. Binaries
5. Games
    1. Native Linux
        1. Vulkan
        2. OpenGL
    2. Proton
        1. DXVK
        2. WINE

Chapter 1: Introduction

If you are a Windows gamer thinking of making the switch to Linux, that initial jump can be very confusing. There’s an endless number of how-to guides and listicles that will give you tips and pointers for how to best set up your Linux installation. All you have to do is choose from one of the results on the first page of Google, download it, install it, and you’re up and running. Maybe one or two things are broken, but more Googling gets you some command-line script you have to enter, and you’re good to go. Then you do a system update, and on the next reboot your desktop is gone, and all you get is a terminal with a blinking cursor. What happened? Who knows. You can go back to your list or guide, but it’s a few years out of date, and maybe there’s someone else saying how to do it a different, better, “more proper” way. Why are there so many different ways to do things on Linux? How are you supposed to know what the proper way is?

This is the problem with “intro to Linux gaming” guides and tutorials. Linux is so diverse and moves so quickly that any given set of instructions or commands rarely lasts very long. What someone coming from Windows really needs is a guide on how to think about and interact with a Linux system – not a precise set of instructions and commands to copy and paste. The purpose of this guide is to quickly get you up the learning curve of how to think about a Linux system. It will hopefully enable you to answer your own questions and know what the “proper” way to do things is, all without bogging you down in confusing jargon and lingo, or expecting you to become a professional systems administrator. Desktop Linux has a lot of great things to offer, but unfortunately those things are not well presented to a non-advanced user. This guide will rectify that.

What actually is Linux?

Linux is actually a kernel (insert ackchually meme here), not a full operating system. However, this sentence is probably useless to you, because you likely don’t really understand what a kernel is. Perhaps the best way to think of it is like an engine in a car. Oftentimes one engine powers many different types of cars. Engines are important: the most highly engineered part of a vehicle. They are supposed to be reliable, never break, undergo slow changes when manufacturers do update them, and power many, many generations of cars. The Linux kernel is very similar. It is the thing that ties together the different parts of a system. Any program that needs to access memory has to go through the kernel. The kernel turns a click from your mouse into an actionable event in software, enabling a game to listen for that click and do something. The kernel manages processes and system resources, and helps all your hardware intercommunicate. It is the heart of all the software you run.

What does this mean to you as a soon-to-be Linux gamer? Most notably, the kernel contains almost all of the drivers for your system. Rather than installing drivers for each piece of hardware after you boot, as on Windows, Linux should automatically detect all your hardware and load the appropriate drivers directly from the kernel. This means you want to be on a recent kernel. If the Linux distribution you are considering doesn’t ship a recent kernel, it may not support all your hardware. You can see what a recent kernel is here:

https://www.kernel.org/

Mainline is the most recent release, but stable usually isn’t too far behind. Unless you have brand new hardware, stable or newer should be new enough.
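
If you already have a Linux machine (or a live USB) handy, a quick way to compare is to check the running kernel from a terminal; a minimal sketch (the example output is illustrative):

```
# Print the version of the currently running kernel
uname -r
# Example output: 5.14.14-default  (varies by distribution)
```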

The other noteworthy thing about the kernel is that while it contains most of the device drivers you will need, there is one big, glaring exception: Nvidia graphics drivers. Because of licensing issues, all code in the Linux kernel needs to be free and open source. Nvidia’s graphics drivers are not, and therefore they cannot be distributed in the kernel. While there is a free and open source Nvidia driver (called nouveau), it is community-made and not suitable for gaming. However, it is usually included in most distributions so that you can at least boot to the desktop. That means if you have an Nvidia card, the single most challenging part of Linux is going to be removing that community driver and installing the Nvidia one. Most major distributions have now automated this process, but it is still one of the more common points of failure.
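
To make this concrete, here is a sketch of what that automated process looks like on Ubuntu-family distributions specifically; other distributions document their own procedures, so treat this as an illustration rather than a universal recipe:

```
# Ubuntu-family distributions only -- check your own distro's wiki otherwise

# List detected hardware and the drivers Ubuntu recommends for it
ubuntu-drivers devices

# Install the recommended proprietary Nvidia driver
sudo ubuntu-drivers autoinstall
```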

Userspace

A kernel alone doesn’t compose an operating system. There is another component called “userspace” that sits on top of the Linux kernel. If you see references to GNU or GNU/Linux, userspace is the part being referenced. Userspace alone is mostly irrelevant to you, but there is one important thing worth understanding about it. While the kernel mostly handles hardware, all the software you run will depend on userspace. It contains compilers, libraries, and other system management components that most desktop software sits on top of. Things like compilers and libraries are developed very slowly, perhaps even more slowly than the Linux kernel, with the explicit purpose of not breaking things. However, if you try to run software that relies on a newer userspace library that you don’t have because your userspace is too old, that can prevent it from running at all. We’ll talk a little more about this later, in the software management chapter.

Summary

OK, so how does this work, in a bird’s-eye view?

At the most basic level, you have your hardware. Then, the kernel sits on top of the hardware, and handles all hardware interactions. Userspace software – mainly compilers, libraries, and other system tools – sits on top of that kernel, and interacts with it to actually do computing. Desktop software sits above userspace (technically in it, but at the very top), and that’s what you interact with.

The version of the kernel affects what hardware you can run. The version of userspace utilities and libraries affects what software you can run. Both are involved in picking the right Linux distribution.


Chapter 2: The Distro

no – I have no idea where the “o” in distro comes from

We’ve already established that there are different versions of the kernel and user space. Furthermore, there are different ways to manage various background computer processes, like file systems, boot up, initialization, security, etc. Most of these things don’t particularly matter to the desktop user. However, the sum of all these parts, and the differences between them, is what makes a distro. This section will briefly touch on them.

File Systems

By default, Windows uses a filesystem known as NTFS. NTFS is supported by the Linux kernel (that is, the drivers to process files stored in the NTFS system are integrated into Linux). However, NTFS is not a particularly efficient or fast file system, so most Linux distributions do not ship with NTFS as the default filesystem. Instead, many Linux distributions ship with filesystems such as ext4, XFS, BTRFS, or OpenZFS. If you don’t have a reason for needing a specific filesystem, you should just accept the default suggested by the distribution – it will likely have little impact on your experience as a user. If you have specific needs, you should do independent research on that filesystem, and find a Linux distribution that supports it well.
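
If you’re curious which filesystem an existing installation is using, you can ask from a terminal; a minimal sketch:

```
# Show the filesystem type backing the root directory
findmnt -n -o FSTYPE /
# Example output: ext4

# Or list every block device along with its filesystem
lsblk -f
```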

File Organization

Probably more important than the file system is the file organization. File organization on Linux is very different from Windows, and understanding the basics of it, and how it differs philosophically from Windows, will save you headaches. While Windows uses drive letters such as C: to denote the base location of an installation, Linux simply starts at what is known as the “root directory”, which is denoted by a slash: /. Everything descends from this slash, and is organized in a pretty specific way. While the root file organization can vary slightly, conceptually there are a few things that are well conserved among most Linux distributions. The most important concept is that of your files vs. system files. Linux needs a lot of configuration files, programs, logs, and other files to operate seamlessly in the background. These files are not meant to be interacted with by the human user, except under specific circumstances. They are referred to as “system files”, and can be found in the litany of folders under /. Access to these files is restricted by default, so that you don’t accidentally mess up the system, and so that malware cannot break the system or do other malicious things. The sole exception to this is the folder /home. The /home folder is where each individual human user gets their own folder. The human user usually has a subfolder under /home; for this author, that is /home/colin. In my own folder, I can create or delete or run files, folders, and programs, and no one else on the computer, except the system itself, can mess with my files and folders.

The takeaway here should be to try and do as much as possible in your home folder, and avoid changing the system files. System file changes affect the whole system, and all users of it. While on Windows almost everything installs into C:\Program Files, many things on Linux will install directly into your own home folder. We will discuss the best way to install software, and how to choose whether to install it to the system or to your home folder, later on.
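
You can see this separation in action from a terminal; a minimal sketch of how permissions differ between your home folder and system locations:

```
# Creating a file in your home folder needs no special privileges
touch ~/example.txt          # succeeds

# Creating a file in a system location is denied to a regular user
touch /usr/example.txt       # fails: Permission denied

# System-wide changes require explicitly elevated privileges
sudo touch /usr/example.txt  # works, but use sparingly
```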

Process Management

Just a few words about process management. There are two types of processes. There are user processes: programs that you, the human user, start, run, and close. These might be web browsers, video games, or other desktop software. Additionally, there are system processes, which run “in the background” and keep running even when you log out. These do things like handle networking, run the actual graphics software so that you can see the login screen, and so on. Sometimes they only need to start, do one thing, and then close. Most Linux distributions manage these with what’s called a “daemon”, which is just another term for a process that runs in the background and manages certain tasks. The systemd daemon (a “d” at the end of a name usually denotes a daemon) is what manages the startup, monitoring, and management of these tasks on most distros. Some distros don’t use systemd and use older ways of doing things, and some Linux users have strong opinions about systemd and prefer other process initiation and management systems. If you don’t know what any of this is, and don’t have strong feelings about it, you should ignore how your distro handles process management, and just accept its default.
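
On a systemd distro, you can peek at what it is managing; a minimal sketch (NetworkManager is used here as an example service, and may be named differently or absent on your system):

```
# List the services systemd is currently managing
systemctl list-units --type=service

# Inspect a single background daemon, e.g. the network manager
systemctl status NetworkManager
```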

Release Cadence

The next thing to know about a Linux distribution is how it handles update frequency. There are two major philosophies, and they are frequently compared and contrasted, although perhaps not in an obvious manner. One philosophy, the longtime standard, is called the “stable” release. When the distribution developers are preparing a release, they pick a moment in time to freeze the software and kernel versions. Any new software features or kernel drivers that appear after that freeze won’t be in the distribution until a new version of the distribution is released (with the exception of bug fixes and security fixes that are backported). This can be especially problematic for gaming, because gaming software is constantly improved and drivers are constantly updated. It is, however, often preferable for servers or enterprise environments, which is where Linux has traditionally flourished. Many times, there are years between stable releases. Examples of traditional “stable” releases are Debian, Ubuntu, and CentOS (RIP).

The alternative to the “stable” release is the “rolling” release. The rolling release tends to receive software updates and kernel updates as they are released, or shortly after they are released. This has some upsides and downsides. For gaming, it means you have the latest drivers and software sooner. This can be absolutely critical to playing games as they are released, or putting in a new graphics card and having it work properly. The downside is that these rolling release distributions can be less stable, and the level of stability is not always the same. For instance, Arch is notoriously bleeding edge, while Manjaro is usually a few days behind Arch. Other rolling releases like openSUSE Tumbleweed use sophisticated automatic QA systems to try and prevent bugs. Regardless, these distributions will never be as stable as “stable” releases, but most gamers find them worth the occasional bugs.
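
One way to see where a given installation sits on this spectrum is to compare what it reports about itself against the current releases on kernel.org; a minimal sketch:

```
# Identify the distribution and its release
cat /etc/os-release

# Print the running kernel version, for comparison against kernel.org
uname -r
```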

Summary

Your choice in Linux distribution will incorporate a few philosophical aspects of your Linux system. These are the filesystem in use, how the files are arranged, how processes are managed, and how frequently software is updated vs how stable the distribution is. While the first three are probably not critical to a gamer, it’s worth knowing what they are. That said, the most important choice you can make as a Linux gamer is whether you go with a rolling release, or a stable release. For reasons highlighted above, most gamers prefer rolling releases. The next section will discuss choices that are not necessarily intrinsic to the distribution itself.


Chapter 3: The Desktop

Linux and its variants are the most widely used operating system in the world. It powers almost all servers (i.e. the cloud), it powers almost all phones (Android), and it powers an endless amount of embedded devices (smart appliances, car infotainment systems, Mars drones, etc). However, the one area where Linux does not dominate the market is the desktop computing experience. That is likely because desktop computing has to be resilient to all sorts of user interaction that could potentially break it. So how do you set yourself up with a Linux desktop experience that is robust and worth sticking with? This chapter will detail three things and hopefully explain some terminology you may have already seen around the web. We will cover the display server (what it is and how to choose one), the desktop environment itself, and the window manager. Finally, we will summarize how these components interact and how you should go about choosing them.

The Display Server

PC stands for “personal computer”, which is a term that came around from early computing days when most computers were mainframes, where each user had a terminal (think monitor and keyboard) but shared the backend computer with many other users. The personal computer became popular because it made computers small and inexpensive enough that everyone could have their own. However, Linux inherited many aspects of that shared computer design, and the “display server” is one of them. The display server is what actually draws graphics on the screen. If someone writes a program that opens a new window on the screen, they must send a request to the display server to actually draw that window. The display server knows where your monitors are, what their resolutions are, what each pixel on the monitor is doing at any given time, etc.

There are two primary display servers on Linux. There is “X” or X11 (also sometimes referred to as X Server). X is an absolutely ancient piece of software to still be in use today. It can trace its roots back to at least 1984, and has been at version 11 since 1987. If we scale computing to the age of the earth, X is the equivalent of several billion years old. X is the default display server on many Linux distributions, although this is starting to change. In many ways, X is monolithic and inflexible. It was not designed for modern computing hygiene/best practices, and represents a security risk for some uses, because a program that compromises X can see everything on the screen and capture any user input. Additionally, X was designed well before modern animations, graphics drivers, and screen resolutions. X development has slowed significantly since its heyday, so many distributions and desktop environments have taken it upon themselves to extend X. That results in a fragmented ecosystem where not everything works predictably across systems.

To allay these concerns, multiple groups, primarily those who were doing the most work on X, decided to develop a new display server called “Wayland”. Wayland is philosophically very different from X, and represents a major paradigm shift in the Linux desktop. The notable thing about Wayland is that it does fewer things, but does them better. Wayland defines a standard that other software can access and extend in an intelligent, predictable manner. It provides greater security from malware and other malicious code, and it brings graphics support into the 21st century. It is widely accepted that Wayland is preferable for the modern Linux desktop experience. Wayland has been in development for some time, and has hit a number of roadblocks. First, Wayland developers had to make it backwards compatible with X: software that relies on X functions runs in a compatibility layer called “Xwayland”, which makes backwards compatibility easy and seamless for the end user. Furthermore, desktop environments had to rewrite their window managers to be compatible with Wayland, and graphics card companies (Nvidia, AMD, and Intel) had to support the Wayland protocol. Finally, all the pieces are pretty much in place, and early adopters should be able to choose Wayland if they desire. There are still a number of Wayland and Wayland-implementation bugs to be ironed out, and X is probably a more stable experience, but X is less capable at non-integer scaling of resolution (i.e. if you aren’t running 1080p, or some integer multiple of 1080p like 2160p). Most distributions now support both X and Wayland, and switching between them is usually quite painless, so this author would recommend trying Wayland first, especially if you have a non-integer-scaling display. If you encounter bugs, switching to X can usually be done through the login screen.
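
You can check which display server your current session is using; a minimal sketch:

```
# Report whether the current desktop session is X11 or Wayland
echo $XDG_SESSION_TYPE
# Example output: wayland  (or: x11)
```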

Desktop Environments

Choice of desktop environment is a long-standing holy war in the Linux community. The desktop environment encompasses the start menu, taskbar, theming/look and feel, and default applications that you will find on your PC when you install Linux. Of all the aspects discussed so far, this is the one that will impact you the most in day-to-day usage of your new Linux machine. On Linux, there are two primary desktop environments: GNOME and KDE. While there are other desktops like Budgie, Cinnamon, and more, those desktops do not have nearly as wide support and testing as GNOME and KDE, so it is recommended that beginners avoid them. This section will highlight what the desktop environment is responsible for, and will compare and contrast GNOME and KDE philosophically, but will not make any recommendations.

On most Linux distributions, the desktop environment is primarily responsible for “drawing” the desktop. The components of the desktop are the start menu and taskbar (or their equivalents), the file explorer, the system tray, system notifications, various desktop widgets, often a system configuration program, and various system tools like a calculator application, a notepad application, and so on. Because the role of the desktop environment is so huge, both KDE and GNOME build on toolkits that can be used to create applications that fit in seamlessly with their respective desktops. KDE builds on a toolkit called “Qt” (developed outside the KDE project), while GNOME develops its own toolkit called “GTK”. Apps built in Qt will fit in better with KDE software, and apps built in GTK will fit in better with GNOME software. While attempts have been made to make each other’s apps look OK on the opposite desktop, they still usually stick out.

GNOME tends to be the default desktop environment on most Linux distributions. It has strong corporate funding from Red Hat (now owned by IBM), and is therefore often considered the most polished, bug-free experience. However, GNOME is also known for having strong opinions about how the desktop should be, and for not being particularly friendly to options and extensions. KDE is usually not the default, although it is available on every major Linux distribution, and is primarily community-backed. It is perceived as more extensible and as having more options, but as being less polished as a result. Oftentimes it is compared to Windows, and is thought to be more friendly to people coming from Windows. To install KDE, one must usually select it during the distribution installation (such as in openSUSE), or choose a variant of a distro that has it installed by default (for instance, Kubuntu instead of Ubuntu). It is recommended to try both in a virtual machine if you aren’t certain which you prefer.
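
If you’re ever unsure which desktop environment a session is running (say, in a virtual machine you set up a while ago), most modern desktops advertise themselves in an environment variable; a minimal sketch:

```
# Report which desktop environment the current session is running
echo $XDG_CURRENT_DESKTOP
# Example output: KDE  (or: GNOME, among others)
```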


Window Managers

One quick note about window managers. While GNOME and KDE provide their own window managers, one can also use a different style of window manager called a “tiling” window manager. These do away with the traditional window decorations at the top; instead, they always fill the whole screen with windows, readjusting their positions as more windows are added. They are generally advised for advanced power users who want maximum efficiency, but have started to become more mainstream. They can generally be installed regardless of distribution.

Summary

In this section, we discussed display servers and what they do. As someone planning on playing games, you might have a cutting-edge display, and therefore X vs Wayland may be relevant to you. We also discussed what the desktop environment encompasses, and how to find one you might like. Finally, we mentioned what tiling window managers are, and why some people prefer them.


Chapter 4: Software Management

Installing vs Compiling

There are actually two steps involved in getting software running. The first step is known as “compiling”: the developer of the software converts all the source code of the program into the program itself. On Windows, this results in a .exe file. On Linux, we simply call the equivalent file a “binary”, because it is in binary code that the computer (but not humans, at least not easily) can interpret. Oftentimes, a single .exe or binary is the result of hundreds or even thousands of files of source code, and some of that source code calls libraries that may or may not even exist on your computer.

Once the program is compiled, it can theoretically be run on a system. So what, then, is installing? There are a few reasons we typically (but not always) need to install software instead of just running it. As mentioned above, software often calls functions in libraries that aren’t normally available on a system. Those libraries need to either be provided with the software, or installed separately. An example of this would be video games needing DirectX installed. Sometimes, if it’s a very common library like DirectX, you are expected to have a standard installation that lives in the same spot on every system, so all software that needs DirectX knows exactly where to look for it. Other times, if the libraries are less common, they are distributed with the software itself, so it knows exactly where to find them. On Windows, these are often .dll files in the folder with the binary. On Linux, libraries are usually system-wide, so if you install something through your distribution’s package manager (apt or zypper or Discover), it will install the necessary libraries (called “dependencies”) along with the software, in a system-wide way. If you already have those libraries, it will skip installing them again.
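
To make compilation and dependencies concrete, here is a minimal sketch, assuming you have the GCC compiler installed (as a gamer you shouldn’t need to do this yourself):

```
# Compile a trivial C source file into a Linux binary
echo 'int main(void) { return 0; }' > hello.c
gcc hello.c -o hello

# List the shared libraries this binary expects the system to provide --
# these are its "dependencies", which an installer would normally resolve
ldd ./hello
```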

Another advantage to installing is that you can choose where to keep the files so they are easier to manage. On Windows, programs are usually installed to C:\Program Files, and a shortcut is made from your desktop or Start Menu to the .exe file in C:\Program Files\<somedir>. On Linux, the installation location depends. If it’s something that’s considered a core system application, it’s often installed somewhere like /usr. From there, your Linux system will link it to your desktop or start menu. However, if you place software yourself in a local folder (somewhere under /home), you’ll usually need to add those desktop and start menu entries yourself. So that is the difference between compiling (turning source code into a program) and installing (making sure you have the right libraries, and putting the files in the right place).

One very important thing to note is that if you download bare binaries on Linux, they will not be “installed”. Some distributions provide a ~/bin folder (that is, a bin folder inside your home folder) to place downloaded binaries into. Others recommend that you use the root-level folder /opt instead. You can run a non-installed binary by either double-clicking on it (your distribution may or may not prompt you to make it “executable”, which means allowing it to run), or navigating to it in the command line and entering ./<executable_name>. You can also create a start menu or desktop entry, depending on your desktop; that is simply the equivalent of making a shortcut on Windows. Another thing to note is that pre-compiled binaries may look for libraries in different places than where your system keeps them. That’s why sometimes you need to compile the binaries yourself, so that the compilation software knows where those libraries are. As a gamer, you should rarely have to compile software, as all commonly used software is usually available in a pre-compiled format compatible with your system.
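
A minimal sketch of running a downloaded binary and giving it a start menu entry; the name “somegame” and its paths are hypothetical, so substitute your own:

```
# Make the downloaded binary executable, then run it
chmod +x ~/bin/somegame     # "somegame" is a hypothetical name
~/bin/somegame

# Create a minimal start menu entry in the standard per-user location;
# most desktops pick these up automatically
cat > ~/.local/share/applications/somegame.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Some Game
Exec=/home/colin/bin/somegame
EOF
```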

Package Managers

Most Linux distributions come with two package managers. One is the default distribution package manager, accessed over the terminal/command line. On Ubuntu this is apt, on openSUSE this is zypper, and on Arch this is pacman. These are just programs that manage the installation of software on that distribution. This software checks a software repository (a place in the cloud where software is stored) for that distribution, and installs from it. Software in your distribution’s repository will know where to install, and will also pull in necessary libraries and other dependencies. Software needs to be packaged for each individual repository: for an app like Steam or Discord to be available on openSUSE, someone needs to package it for openSUSE. The Arch or Ubuntu packages won’t work, and won’t be in the openSUSE repository. Sometimes there are auxiliary third-party repositories you can use, depending on your distribution. Anything installed from a repository will also auto-update when you run your update program.
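
For example, here is the same installation expressed in each of those package managers; note that exact package names can differ between distributions, and some packages (like Steam) may require enabling an extra repository first:

```
# Install Steam with each distribution's native package manager
sudo apt install steam       # Ubuntu / Debian
sudo zypper install steam    # openSUSE
sudo pacman -S steam         # Arch

# Update everything installed from the repositories
sudo apt update && sudo apt upgrade   # Ubuntu / Debian
sudo zypper update                    # openSUSE
sudo pacman -Syu                      # Arch
```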

There is also PackageKit, available on most distributions. This is the backend package manager for the GUI stores. GUI stores like Discover use the native repository for your distribution, but simply use a different program to install from it. For most distributions, there’s no major difference between using a GUI powered by PackageKit and using the terminal/command-line program to manage your software. It is worth pointing out that installing software from your distribution’s repository is almost always preferable if it is available, because it will handle all libraries and other dependencies, and automatically make shortcuts.

Standalone Software

In the event that software isn’t available through your native distribution repository, or any auxiliary repositories you have enabled, you may have to download it and install it manually. This can go a few different ways: you can download the binary directly, or an installer for specific distributions, or you can download a Flatpak, AppImage, or Snap.

If you download the binary directly, it will run out of wherever you place that binary. Binaries are usually distributed in .tar.gz format, and must be extracted and run. You can place that binary wherever you want, and run it from there, but as mentioned before, most people have a subdirectory in their home folder, or place it in /opt. If you want it on your desktop or in your start menu, you must make a shortcut to it, and it will also not be auto-updated. This is the least preferred method of installing, because it is harder to manage and requires more user intervention. The software may also simply fail to run, if it can’t find its dependencies.
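
A minimal sketch of that workflow, with a hypothetical archive name:

```
# Create a personal bin folder and extract the archive into it
mkdir -p ~/bin
tar -xzf somegame.tar.gz -C ~/bin

# Run the extracted binary (the exact path depends on the archive layout)
~/bin/somegame/somegame
```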

If you download something like a .deb or .rpm, these are specific installers, similar to .exe installers on Windows, but for specific Linux distributions. For instance, .deb is for Debian or Ubuntu, while .rpm is for Fedora or openSUSE. These can be installed by double-clicking, or through the command line. If they are the only option available and you have a compatible system, they are nice, but if you are running a more obscure system that doesn’t support them, you are simply out of luck.
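
A minimal sketch, with hypothetical file names; modern package managers will resolve the package’s dependencies as part of the install:

```
# Install a downloaded .deb on Debian / Ubuntu
sudo apt install ./somepackage.deb

# Install a downloaded .rpm on openSUSE
sudo zypper install ./somepackage.rpm
```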

Recently, there have been developments in the Linux space aimed at distributing software without using distribution repositories and distribution-specific packaging. One of these is called AppImage. AppImages are basically the equivalent of Windows .exe files: when you download them, they contain all the dependencies they need. Because of this, they can be easily run on any system or distribution without having to worry about compatibility. There is also software called “AppImageLauncher” that will essentially install an AppImage for you, moving it to a predetermined location and adding desktop and start menu shortcuts. However, the downside of AppImages is that there’s no central repository, no easy way to find them, and no auto-updating. They also lead to a lot of wasted storage, because many of them contain the same dependencies, even if you already have those dependencies on your system.
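
Running one is a two-step affair; a minimal sketch with a hypothetical file name:

```
# Mark the downloaded AppImage as executable, then launch it
chmod +x SomeGame.AppImage
./SomeGame.AppImage
```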

There is also a distribution method called “Flatpak”. Flatpak is similar to AppImage in that Flatpaks can run on any Linux distribution. However, one advantage of Flatpak is the “Flathub” repository, which contains most of the known Flatpaks, is compatible with many GUI software management tools, and enables auto-updating. Furthermore, Flatpaks will share dependencies when they can, which can save on space. Flatpaks will also attempt to sandbox the application depending on configuration settings, which can prevent the application from seeing sensitive files. You can manage this configuration with the application Flatseal. Sandboxing can occasionally lead to things like the application not respecting your system-wide theme, or not being able to see files you want it to see. Snap is similar to Flatpak, but right now is only really used on Ubuntu systems. If a package is not available via your distribution’s repository, Flatpak via the Flathub repository is generally the next-best way to install.
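
As a sketch of the typical Flatpak workflow from the command line (many distributions preconfigure Flathub, and GUI stores like Discover can do all of this too), using Discord as the example application:

```
# Add the Flathub repository if it isn't configured already
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install and run an application by its application ID
flatpak install flathub com.discordapp.Discord
flatpak run com.discordapp.Discord

# Update all installed Flatpaks
flatpak update
```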


Reserved: Chapter 5

Reserved: Summary