They show the problem we currently have where maintainers wait until the end of the -rc cycle and hold valid fixes back from being sent to Linus. The fixes “bunch up” and come out only in -rc1, so the first few stable releases after -rc1 comes out are huge. It’s been happening for the past few years and is only getting worse. These stable releases are proof of that: the 5.13.2-rc release was the largest we have ever done, and it even broke one of my scripts.
Not really news or a question. Saw this pass by. On one hand, it sounds not ideal. On the other hand, it proves the old wisdom: “Never run a dot-zero release”.
Yeah, with so many corporations backing Linux development, they have really gotten into the habit of pumping and dumping their code on the community, hoping that the upstream maintainers will maintain the code for them [the corporations]. It is frustrating because a lot of them are doing stupid things with out-of-tree shims, and since the kernel’s internal ABI has been anything but stable in the last decade, they keep adding more code to help them maintain their out-of-tree stuff. The real solution is for them to either go BSD with that noise or properly document and open-source their stuff and use the existing kernel resources instead of re-inventing the wheel.
If any of you have checked out Rene Rebe’s YT channels, you will see that he has been having quite a time porting the Linux kernel to old architectures because there is so much breakage and there are so many new dependencies on new languages to build the kernel. A lot of the time things are only intended for x86 or ARM, which breaks functionality on other architectures.
On the one hand, corporate support has been a boon for Linux development. On the other hand, it has caused issues with ease of maintenance and the release of new kernels, as we now see so many point releases meant to fix regressions caused by new “features”. It will be interesting to see what happens to the Linux kernel in the next decade. I have a feeling that we will start to see new kernel competitors that lean more micro-kernel and less monolithic. Man, GNU/HURD really needs a win at this point.
Pardon my ignorance as I don’t code, but please enlighten me:
How much unneeded kernel code is inside a regular x86-64 Linux desktop (specifically a 6th-gen Intel CPU)? Is it as lean as it can be and still run my system?
The Linux kernel is a file somewhere, right? Because of the Unix principle where everything is a file? Where does it reside, exactly?
I know we just gave up old Intel 32-bit code, and there was some resistance due to gaming, printer-related stuff, and functional legacy hardware. Do you guys see a future where Linux could be like MS, supporting every legacy device and all legacy code?
That really depends. If you are running a binary-based distribution, then you usually have binaries for every common hardware driver. During the init process (boot), the initram[fs] loads a tiny file system into memory and begins loading the basics to get your system up and running. From there, it bootstraps into the full system, loading the Linux kernel and modprobing (probing the hardware on your system and loading the appropriate driver if one exists). You then get to a GUI (hopefully), and if you were to run lsmod, you would see all of the kernel modules that are currently loaded on your system. These are running in memory. As for all the other driver binaries that your system does not need, they are sitting on the HDD taking up space.
This only applies if the binary is compatible with your architecture. You will not get aarch64 binaries on an amd64 system unless you turn on multi-arch support or you are doing architecture cross-compilation stuff.
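As a rough illustration (the module name and paths below are just examples and will differ per distro and kernel), you can see the split between what is actually loaded and what is merely shipped on disk:

    # Modules currently loaded into the running kernel
    lsmod | head

    # Where one loaded module's .ko file lives on disk (i915 is just an example)
    modinfo -n i915

    # Every module binary shipped for this kernel, loaded or not --
    # the ones you never load are the "wasted" disk space
    find /lib/modules/$(uname -r) -name '*.ko*' | wc -l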
Key point about kernel modules: the Linux kernel gives you three options for each module when compiling from scratch:
1.) Build the module directly into the kernel [read: when the kernel is loaded into memory, that module is auto-magically loaded because it is considered one and the same as the kernel; you cannot unload this module].
2.) Build the module as a module that can be dynamically loaded and unloaded [read: the module is called and loaded only when needed. The module can be loaded and unloaded at will].
3.) Do not build the module [read: if the module is not compiled as in option 1 or 2, you will never be able to load that module without a rebuild of the kernel or of the module].
All that to say that if you compile from source and build only the modules that you need, you will end up with a smaller kernel and/or less wasted disk space (see the example .config snippet after this paragraph). Compiling stuff directly into the kernel leads to a larger kernel, which uses more disk space and more RAM. Compiling stuff as modules leads to a smaller kernel image; the modules take a bit more space on disk than building them in, but this tends to produce a more stable system because modules can be dynamically loaded and unloaded, and a module crash may not take down your system the way a crash in built-in code can. Just like option 1, more modules eat up disk space, but unlike option 1, modules only use RAM when they are loaded. Option 3 lets you save both disk space and potentially RAM by simply not building stuff that you do not need.
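In a kernel .config, those three choices look like this (the CONFIG_* symbols here are only illustrative examples; which options you actually toggle depends on your hardware and needs):

    # Option 1: built directly into the kernel image, cannot be unloaded
    CONFIG_EXT4_FS=y

    # Option 2: built as a loadable module (.ko), loaded on demand with modprobe
    CONFIG_BTRFS_FS=m

    # Option 3: not built at all
    # CONFIG_REISERFS_FS is not set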
The source code is a labyrinth of files, yes. What you get on the other end once you compile the kernel is a binary file that is bootable by the BIOS/uEFI. While not required on all architectures, you will usually also generate an initram[fs] file, which acts as the initial rootfs in RAM and tells the early boot process what to load to get the basic system up and running. If you have ever used ArchLinux and borked your system after compiling a new kernel because you forgot to run mkinitcpio (to generate an initramfs), you usually get stuck at init or at a kernel panic because the system was missing the information needed to boot the initial ram disk.
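On Arch, for example, the fix/prevention for that scenario is roughly the following (the preset name is an assumption; ‘linux’ is the stock kernel package’s preset):

    # Regenerate the initramfs for every installed kernel preset (run as root)
    mkinitcpio -P

    # Or regenerate just the default 'linux' preset
    mkinitcpio -p linux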
Depending on whether you are using a BIOS/uEFI/embedded system (RPi), the kernel and the initram[fs] will be in the /boot directory. How this is created depends on the platform and the complexity of your system, but it could be part of the rootfs or it could be another partition or disk that is mounted during the initial boot process. If you do an ls -al /boot, you should be able to see what is there. Older systems used to link this information on the rootfs directly [read: /vmlinuz[.] and /init[.]].
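A quick way to poke around, assuming a fairly typical setup (the vmlinuz-linux file name is Arch’s convention and is only an example):

    # See which kernel images and initramfs files are installed
    ls -al /boot

    # Which kernel version is actually running right now
    uname -r

    # Identify the kernel image format (bzImage on x86)
    file /boot/vmlinuz-linux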
Out of all the Unix-like systems, the Linux kernel has been the best at supporting legacy systems and hardware, but this has changed in the recent decade. To slim the codebase and to mitigate bit rot, a lot of the old pre-2005 stuff has been dropped from the kernel. Usually this is announced before being done so that anyone who wants to maintain the code officially can step up, but if the code has not been maintained in ages and it starts causing major issues with the new code, prevents the new kernel from compiling, or does not survive an ABI change, then it gets elected for deprecation. You can always compile an old kernel if it supports your architecture, but you may not be able to get it to work with a new userspace, especially if there was a major ABI change and the userspace only makes use of the new stuff while the old kernel only knows about the old stuff.
In general, I think the official kernel maintainers will do their due diligence to maintain as much legacy compatibility as possible, but when people do not step up to do the heavy lifting, they have to cut their losses in order for new development and ideas to progress.