Can you install straight up Linux on a computer?

After watching Inbox 43 and hearing how Wendell used to install Linux on computers, I wondered whether it's still possible to do that.

Is installing Linux through a distro the only way now?

Also, what exactly are you doing when you "compile your kernel"? I've heard this many times but never had to do it myself on any of the Linux machines I've set up.

You can't just install the Linux kernel by itself, if that's what you mean; you always need a distro. A distro is an operating system that is based on the Linux kernel. The kernel itself is the code that sits between the hardware and the rest of the operating system and makes the two understand each other.
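If you want to see that split on a running system, here's a quick sketch (the exact output obviously differs per machine, and /etc/os-release only exists on most modern distros):

    uname -r             # prints the kernel version - that part is Linux itself
    cat /etc/os-release  # describes the distro that is wrapped around it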

However, you can build a distro yourself. That's called Linux From Scratch.

I didn't watch the Inbox, but in the very early days, there were a few distros that everybody used. Slackware was probably one of the first. I started out with SuSE, when the Linux kernel had already been around for a couple of years; it must have been 1994 or so. The first Linux kernel appeared in 1991.

Back then, if there was something new, it wasn't always possible to download a package or an update. Analogue modems over PSTN were very expensive to use because they were so slow that downloads took hours, and my parents wouldn't have that. So sometimes code was exchanged in print, on wide dot-matrix-printed fanfold paper, and you had to type it all in again and compile it, or look for the differences and compile those.

Since the mid 90's, the size of the Linux kernel has exploded. It's much more of an undertaking to even compile a kernel now than it was back then, as there are many more features and an incredible amount of merged code, some of which is huge. Whereas you could probably compose and compile a Linux kernel manually in about a day in the mid nineties, now you would need months to walk through every option by hand, so you don't really have that option any more.
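For a sense of what compiling a kernel looks like in practice today, a minimal sketch (the version number is just illustrative):

    # inside an unpacked kernel source tree:
    cd linux-3.14
    make defconfig             # start from a sane default configuration
    make menuconfig            # optionally walk the option menus yourself -
                               # this step is where all that time would go
    make -j4                   # compile, with 4 parallel jobs
    sudo make modules_install  # install the loadable modules
    sudo make install          # install the kernel image itself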

If you still want to really compose a kernel from scratch, module per module, manually, and then compile it, there are other operating systems that still allow you to do that. Minix is an example. It's not monolithic (it's a microkernel), so you can dramatically reduce the kernel size, but it naturally also has far fewer features and limited hardware compatibility. Minix wasn't freely licensed for a long time, though (more recent versions are under a BSD license).

You could, however, experiment with an old Linux kernel. Those are still available. You won't get much performance out of them, but it can be a fun experiment.

Thanks for taking the time to write out all of that, I am not that familiar with how Linux worked in the past, or even now lol. There's still some stuff I need to learn.

Take a look at Arch Linux.

When compiling the kernel, you're pretty much telling it what features you do and don't need, and deciding whether things are built into the kernel or built as external modules that can be loaded depending on different criteria. For example, if you need a certain option enabled to use certain chipset features, you would usually compile that into the kernel; but if you knew you were going to use different hardware, or might down the road need certain hardware features that your current software doesn't require, that would usually go into a module that can be loaded when needed and left out when not. Most people just seem to toss everything into the kernel anyway. But if you were working with older hardware or with an embedded system, you would get a major performance boost from building the kernel with only what you need.
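To make the built-in versus module choice concrete, a small sketch using two real kernel config symbols, plus what handling a module at runtime looks like:

    # fragment of a kernel .config:
    CONFIG_EXT4_FS=y     # built into the kernel image - a root filesystem
                         # driver has to be there at boot
    CONFIG_USB_SERIAL=m  # built as a loadable module - only pulled in when
                         # a USB serial adapter is actually plugged in

    # modules can then be loaded and unloaded on a running system:
    sudo modprobe usbserial
    lsmod | grep usbserial     # confirm it's loaded
    sudo modprobe -r usbserial # unload it again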

Heh, this is the reason I usually don't bother commenting on things about Linux: Zoltan's going to do it better and faster.

I've used Arch :)

It's quite simple really. What people commonly call "Linux" is actually two things: 1. the Linux kernel, initially written and still governed by Linus Torvalds, and 2. GNU, an operating system that uses the Linux kernel. Pack the two of them together and you have GNU/Linux, and a packaged version of that is a "distro", which is short for "distribution".

Linux is the Linux kernel, the core code that links the hardware to the software that makes that hardware do stuff. Back in the day, just like now, serious computing was done on UNIX systems. UNIX was not free; it was very expensive, and the hardware manufacturers supplied entire systems with a kernel included. Linus Torvalds wanted a UNIX system, but he didn't have the money to buy an enterprise-grade system; he only had an i386 PC. So he wrote a UNIX-like kernel for i386 machines, so that they could run a UNIX-style operating system.

GNU stands for "GNU's Not Unix", and is the project of Richard Stallman. It's the same idea Linus Torvalds had, but roughly a decade earlier. Richard Stallman wanted to make a free and open source clone of the UNIX operating system. UNIX was prohibitively expensive, so he decided to make his own version. However, GNU could not really be fully used until Linus Torvalds made the Linux kernel in 1991, because before that, there was no free and open source kernel to run GNU on PCs.

What links the two together are two things: 1. the free and open source character; 2. the C compiler that Richard Stallman wrote and that the GNU project still governs. When you compile your own kernel, or when you compile the parts of your GNU operating system, you use the C compiler to translate the source code into machine language. That compiler is called GCC (originally "GNU C Compiler", nowadays "GNU Compiler Collection"), and is basically the compiler par excellence for everything that works as it should on the planet. The thing is that GCC is also GPL-licensed, which is what gives it its free and open source character. Since it's GPL, there is no place in it for proprietary plug-ins. That means, for instance, that a hardware company that has developed a proprietary machine language for its products, like PTX for nVidia graphics cards, can't compile the features that work through that proprietary language directly with GCC; an additional (also proprietary) layer has to be added afterwards, which reduces performance. Most major hardware manufacturers have therefore contributed their hardware support code upstream to the Linux kernel, so that the kernel can natively support that hardware and be compiled with GCC.
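To make the compiler's role concrete, a minimal sketch of standard GCC usage (the file name is made up):

    # hello.c - the smallest possible C program
    cat > hello.c << 'EOF'
    #include <stdio.h>
    int main(void) { printf("hello from GCC\n"); return 0; }
    EOF

    gcc -O2 -o hello hello.c   # translate the C source into machine code and link it
    ./hello
    gcc -S hello.c             # or stop early and dump the generated assembly to hello.s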

Since the Linux kernel also allows firmware blobs, which are manufacturer-supplied bits of code that ensure compatibility but are not freely licensed (and usually not open source at all), Richard Stallman has a problem with the Linux kernel: he believes that software should not only be open source, but should also be free of proprietary licensing. It's very hard to understand that point of view in the light of hardware development, because hardware design patents are of course a crucial part of the intellectual property that stimulates technological advancement; it costs a lot of money to develop hardware. Because of that fallout between the open source community and the Free Software Foundation, the FSF is developing its own kernel based on the Mach kernel, free from any hardware-related license (and therefore without much hardware compatibility), called "Hurd". So the official operating system of the FSF is GNU/Hurd instead of GNU/Linux.

Since the GNU operating system is entirely free and open source, communities and companies started making their own improved versions of GNU. That has led to the more than 1000 distros based on GNU that exist now. The best-known distro families are Slackware, Gentoo/Funtoo, Arch, Debian, RedHat/Fedora, SuSE/OpenSuSE, and ROSA/Mageia/OpenMandriva. Slackware, Gentoo, Arch, Debian, RedHat and SuSE were the original distros that lived on. Mandriva was the result of the fusion of Mandrake and Conectiva, whereby Mandrake was a fork of RedHat; Mandriva itself is now kind of lost, but the Mandrake heritage is continued in the Mandriva forks Mageia and ROSA. The lines between some distros are rather faint. For example, the main RPM-based distros (SuSE, RedHat, Mageia) share maintainers and developers, and although they supply different tools out of the box, the tools are intercompatible: you can use yum in Mageia instead of or alongside urpmi, you can pull packages from one RPM-based distro's repos to use in another, and so on.
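As a small illustration of that intercompatibility (the package name below is made up, and whether a given package's dependencies actually resolve on another distro is a separate question): the low-level rpm tool works the same on any RPM-based distro, while the higher-level front ends differ per family:

    rpm -qip some-package.rpm      # inspect a package's metadata before installing
    sudo rpm -ivh some-package.rpm # install it

    # the per-family front ends all wrap the same package format:
    #   yum (RedHat/Fedora), zypper (SuSE), urpmi (Mageia)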

Some distros ship quite different tools. Bash, the standard shell (the command interpreter you use in the terminal), is available pretty much everywhere, and you can easily alias commands to your liking. Arch, for instance, doesn't use either of the two major packaging formats (RPM and DEB); it has its own package manager, pacman. Some distros don't use binary packaging at all and require the user to compile every bit of software from source, or use a hybrid system.
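For instance, aliasing is a plain shell feature, so it works the same on any distro; a made-up sketch for a bash configuration on an Arch box:

    # in ~/.bashrc:
    alias ll='ls -lh'                # shorthand for a long, human-readable listing
    alias update='sudo pacman -Syu'  # wrap Arch's pacman in a familiar name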

The most popular distro in recent times is probably Android, which uses the Linux kernel with a runtime called Dalvik on top, a virtual machine that runs a Java-style environment. This extra layer has allowed Google to break the link to the GPL license, because they wanted to put a non-open-source operating system on top of the open source Linux kernel. You can however still install bash on an Android device and use the Linux command line to access kernel functions directly, for instance to gain root access and configure the kernel's native iptables firewall to restrict what the device sends out behind your back.
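As a sketch of that last point: Android gives every app its own Linux UID, so on a rooted device the kernel's iptables can cut off a single app's network traffic (the UID below is made up for illustration):

    su                      # this needs root
    iptables -A OUTPUT -m owner --uid-owner 10123 -j DROP  # drop that app's outgoing packets
    iptables -L OUTPUT -v   # verify the rule is in place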

That's pretty much the entire overview in a nutshell.

I'm going to try to contribute to this topic by providing a link to a free book about the linux kernel: http://www.kroah.com/lkn/

It's from 2006 (which is almost like ancient history), and it's based on the 2.6.18 kernel, but hey, it's free, and at least the concepts should still apply (haven't read it, just skimmed over it). Knowing history is very useful in understanding the present.

That was excellent. Very informative.

http://www.linuxfromscratch.org/ - this is a guide to help you set up Linux on your own, but you don't want to do that unless you're looking to invest some time. Then again, if you want to learn how things work internally, installing Gentoo or Arch might be easier. If you want the closest thing to vanilla, Arch is a good choice.

That's actually a great site, thank you for the link.