Why the Linux Kernel Design isn't Outdated

Okay, well, that didn’t last long… But I found this article and figured I’d share it with you guys:

I think it’s a load of crap, and here’s why:

  1. Having multiple address spaces for drivers doesn’t make them more secure. They are still installed as modules, and modules can be exploited quite easily.
  2. Linux is technically modular, just not as modular as a traditional microkernel. If it wasn't, you wouldn't be able to install proprietary drivers. All you have to do is change the “y” in the defconfig to an “m” and that specific driver will be built as a loadable module instead of being compiled into the kernel image (see the sketch after this list). It's done on Android all the time.
  3. They mention that C is essentially a dying language. That's a bunch of bull. Languages don't die; only standards change. They mention kernels being written in C# and C++, which kind of makes me laugh, as neither will deliver the same performance on modern hardware as C.
  4. The structure allows for an increased amount of peer review. If you want to submit an open-source driver, you go through the review process. A lot of microkernel projects don't undergo this.
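
For reference on point 2: a minimal sketch of what such a loadable module looks like. This is just an illustrative hello-world skeleton using the standard module macros, not any particular in-tree driver, and it would be built with the usual kbuild obj-m setup.

```c
// Minimal sketch of a loadable kernel module (hypothetical "hello" driver).
// Built out-of-tree with the usual obj-m kbuild machinery, or in-tree when
// its Kconfig option is set to "m" instead of "y".
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative hello-world module");
```

Once built, it can be loaded and unloaded at runtime with insmod/rmmod (or modprobe), which is exactly the flexibility point 2 is about.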

I’m not by any means saying it’s perfect. Obviously it isn’t. But I’m so sick of this monolithic vs microkernel debate. People often bring up the crappiest reasons for choosing one over the other, and I just wanted to clarify some points that people often overlook.

3 Likes

Don’t get me wrong, I began reading the article, but as soon as I saw that the guy started with some reddit-blah-blah, I was finished.

2 Likes

Regarding C vs. C++ vs. other languages:

I feel like C is the right choice. I’m currently working with a huge C++ codebase and looking at work that supposedly “top talent” wrote… It’s got so much unnecessary complexity everywhere that it’s not at all funny.

Similarly, in my previous job I worked almost exclusively in Go. On one hand, having the compiler do escape analysis for your closures, having GC, and having bounds checks inserted automatically feels really nice; on the other, I’d like to hold the kernel to a higher standard of coding, where I’d expect the developer to understand, for example, how to make a driver’s memory usage deterministic.
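
To make the “deterministic memory usage” point concrete, here’s a minimal user-space sketch of the pattern I mean (my own toy illustration, not kernel code): allocate a fixed pool once at start-up and hand out slots from it, instead of calling malloc in the hot path.

```c
// Illustrative fixed-size pool: all memory is reserved once up front,
// so the steady-state memory footprint is deterministic.
#include <stdio.h>

#define SLOT_SIZE  256
#define SLOT_COUNT 64

struct pool {
    unsigned char slots[SLOT_COUNT][SLOT_SIZE];
    int used[SLOT_COUNT];
};

static void *pool_get(struct pool *p)
{
    for (int i = 0; i < SLOT_COUNT; i++) {
        if (!p->used[i]) {
            p->used[i] = 1;
            return p->slots[i];
        }
    }
    return NULL;  /* pool exhausted: fail predictably instead of growing */
}

static void pool_put(struct pool *p, void *ptr)
{
    for (int i = 0; i < SLOT_COUNT; i++) {
        if ((void *)p->slots[i] == ptr) {
            p->used[i] = 0;
            return;
        }
    }
}

int main(void)
{
    static struct pool p;          /* one up-front reservation, no malloc later */
    char *buf = pool_get(&p);
    if (buf) {
        snprintf(buf, SLOT_SIZE, "request handled from a preallocated slot");
        puts(buf);
        pool_put(&p, buf);
    }
    return 0;
}
```

Worst-case memory use is fixed at compile time, and an exhausted pool fails predictably instead of growing without bound, which is the kind of reasoning I’d want a kernel developer to be comfortable with.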

In terms of languages and performance, whichever language you’re using, you’re bound to look at some intermediate compiler output (e.g. the -S assembly listing) and the output of a profiling tool at some point… C has the upper hand there.

Also, I’m 100% certain there are folks writing kernel drivers in various languages and not checking their work into upstream; it’s not technically impossible, it’s just strange.

3 Likes

“outdated”

That’s rather irrelevant, really. Compared to the bleeding edge of academia, the Linux kernel was outdated when it began. But the real question is whether it performs useful work, is relatively secure, and performs relatively well.

I respectfully disagree, although it isn’t so much about separate address spaces as about running the drivers in user space as opposed to kernel space. If you’re running code, it is fundamentally more secure to run it in an environment with restricted access.

Of course running a secure driver in the kernel is better than running an insecure driver in user space. There is a performance penalty to running in user space, but for unimportant drivers I’d happily pay that price - for servers I’d happily demote display drivers (and similar) to user-land.

Perhaps the ideal compromise between a microkernel and a monolithic kernel is to have the ability to run modules as either kernel or user modules.

2 Likes

The answer is a resounding “yes” for all three.

/thread

3 Likes

Well of course! I didn’t think it needed saying, it was so obvious.

Back in the early 1990s, I could get work done with either an old 286 PC running Windows 3.1 or a serial terminal attached to a Xenix machine. I picked the terminal and haven’t regretted it or seen an overwhelming reason to switch since.

Ok, thank you for all suggestions!

Steam covers part of it, but is there a more general solution like nVidia SHIELD?

Just a heads up. I think you posted in the wrong thread.

I wrote a decent-sized project (around 1,500 lines) in Go, and of course had issues, especially because it was my first major project, but I never found it overwhelming.

Now I’m working in C++ because a lot of places like to see it on your resume for internships and whatnot, and some of my 100-line programs are making me feel deep despair on the inside. XD

Yes and no.

C is little better than assembly language from a security perspective. It lacks many modern security features, and whilst, sure, it is fast if written by someone who knows what they’re doing, the speed issue with software can generally be solved by scale and/or better compilers.

Security cannot.
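
To make that concrete, here’s a deliberately broken toy snippet; the language itself does nothing to stop it, and it still builds into a binary that writes past the end of the buffer (newer compilers may at best print a warning):

```c
// Deliberately unsafe example: nothing in the language prevents the overflow.
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[8];
    const char *input = "this string is much longer than eight bytes";

    strcpy(buf, input);   /* classic buffer overflow; still builds */
    printf("%s\n", buf);
    return 0;
}
```

A bounds-checked language would reject or trap this; in C it’s left to the programmer, or to after-the-fact tooling like ASan, to notice.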

For a general purpose OS, security is FAR more important than speed in 2018. The average end-user desktop spends 99% of its CPU time idle most of the time. Anything super speed-critical is offloaded to dedicated hardware these days anyway, and if you really do need to extract the very last ounce of performance from your hardware, you probably shouldn’t be using a general purpose, multiuser, networked OS for it.

Also, performance hot-spots are where you need to optimise. Writing the WHOLE PROJECT in C for speed is (IMHO) misguided. Write in something more secure, profile, then re-write ONLY the performance-critical paths in something low level and dangerous.
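
For the “profile first” part, even crude manual instrumentation shows where the time actually goes before anything gets rewritten. A toy sketch (entirely made-up workload, just to illustrate the workflow):

```c
// Crude manual profiling: time two phases of a toy workload to see which
// one is actually worth optimising before reaching for a low-level rewrite.
#include <stdio.h>
#include <time.h>

static double parse_input(void)          /* stand-in for "cold" code */
{
    double sum = 0;
    for (int i = 0; i < 1000; i++)
        sum += i * 0.5;
    return sum;
}

static double crunch_numbers(void)       /* stand-in for the hot path */
{
    double sum = 0;
    for (long i = 0; i < 200000000L; i++)
        sum += (i % 7) * 0.25;
    return sum;
}

int main(void)
{
    clock_t t0 = clock();
    double a = parse_input();
    clock_t t1 = clock();
    double b = crunch_numbers();
    clock_t t2 = clock();

    printf("parse:  %.1f ms\n", (t1 - t0) * 1000.0 / CLOCKS_PER_SEC);
    printf("crunch: %.1f ms\n", (t2 - t1) * 1000.0 / CLOCKS_PER_SEC);
    printf("(results: %.1f %.1f)\n", a, b);  /* keep the work from being optimised away */
    return 0;
}
```

In a real codebase you’d reach for perf or gprof rather than clock(), but the principle is the same: measure, then rewrite only the part that dominates.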

Software is hard. It’s time we came to terms with that and admitted that writing stuff that isn’t performance critical (and not ALL of the kernel is performance critical) in low level, dangerous languages is a bad idea. Also, writing stuff in C before you determine that you can do a better job than a higher-level compiler at task X is… premature.

2c.

3 Likes

I agree, however most security concerns with C can be fixed either by changing the standard or by fixing conditionals. I have yet to see an issue that was truly the language’s fault and not human error.

I guess it’s all human error when you get down to it though…

1 Like

If human error is common with the language, that is a failing of the language, IMHO.

Humans make errors.

This is an undeniable and unchanging fact.

Computers should deal with that in a manner that doesn’t result in easily written, yet massively exploitable and difficult to find security problems.

We can do better. We had the languages to do better back in the 60s and 70s (e.g., Ada).

Yes there is a speed trade-off. However, I’m quite sure nobody who has been hacked and lost their shit cares that their machine would have been 10% slower if only it had been written using something safe.

edit:
Maybe I’m showing my age and jaded outlook a bit. But say 20+ years ago I was totally all about low-level languages for speed, and high-level stuff (like C, even) being for wimps, etc. But as I’ve been in the industry longer and seen the damage that is the end result of using the equivalent of stone-age programming tools to build things… I truly believe it is worth making a bit of a speed trade-off for security and correctness.

We can build or buy faster hardware. We can’t fix security by throwing money at hardware, and really, PCs have been “fast enough” to get things done for a good 10+ years now for the vast majority of end-user workloads. It’s time to focus on security and correctness now. Make that speed trade-off if we have to.

Developers need to understand that virtually all developers are crap at writing secure code in low-level languages. Either by nature, or by circumstance (“I just want to get it up and running, I’ll fix it later”, due to a deadline, it being 2am on a weeknight, etc.).

We have ~50 years of proof that writing things in C is probably a mistake from a security standpoint. 50 years of C and we still can’t reliably write secure code in it.

Most of the speed comes from picking the correct algorithm in the first place anyway.
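
As a toy illustration of that (my own example, nothing to do with the kernel): searching a sorted array of ten million entries needs up to ten million comparisons with a linear scan but only around two dozen with a binary search, and no language-level micro-optimisation closes that gap.

```c
// Toy illustration: comparison counts for a linear scan vs. a binary search
// over the same sorted array. The algorithm, not the language, decides.
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

static long *data;

static long linear_search(long key)
{
    long comparisons = 0;
    for (long i = 0; i < N; i++) {
        comparisons++;
        if (data[i] == key)
            break;
    }
    return comparisons;
}

static long binary_search(long key)
{
    long comparisons = 0, lo = 0, hi = N - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        comparisons++;
        if (data[mid] == key)
            break;
        if (data[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return comparisons;
}

int main(void)
{
    data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (long i = 0; i < N; i++)
        data[i] = i;

    printf("linear: %ld comparisons\n", linear_search(N - 1));  /* ~10,000,000 */
    printf("binary: %ld comparisons\n", binary_search(N - 1));  /* ~24 */

    free(data);
    return 0;
}
```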

2 Likes

I think this is it.

Software-wise: every idiot on the planet is coding garbage.

Hardware-wise: a small group of massive companies is producing the best engineering the planet has ever seen.

And even that hardware has flaws, and they cost billions. Look at Meltdown and Spectre, the worst in history.

Now look at coders: the wannabe chip programmers.

Hardware is controlled by a small group.

Software is controlled by a bunch of languages, compilers and interpreters that everyone on the planet uses.

2 Likes

I definitely see where you’re coming from, and you certainly cover a lot of great points, but no matter how many steps we take to prevent human error in languages, errors will always persist. A good language mitigates them either by enforcing rules in the compiler and refusing to compile the code, or by refusing to run it. C does this to a great extent, though not in an overbearing manner.

One thing we need to understand here is that by adding safeguards against crappy code, we are also limiting, to some degree, what can be done with the language.

As far as security goes, I couldn’t agree more. It’s a fantastic trade-off. But I have a hard time believing that low level languages are at fault when, compared to Linux for instance, OpenBSD has only had two remote holes in its default install over its entire life. That could of course also be attributed to far fewer users, but nonetheless it raises the question of how far we should go with guarding languages.

You pretty much just put the fields of Computer Engineering and Computer Science head to head ;). That’s so true.

Anyone can write code. Not many can design and construct the hardware the code runs on. This article pretty much sums up my beliefs:

https://blog.codinghorror.com/please-dont-learn-to-code/

Certainly, people will continue to make errors.

However, with a language like, say, Ada, the compiler will pull you up on it and refuse to compile, rather than C just compiling it and letting you release insecure garbage…

With regards to functionality: sure, there is sometimes a trade-off, but you can put overrides into the language so that a programmer skirting the edges of safe practice KNOWS and has to consciously override… rather than writing buffer overflows by accident, or, say, silently using the assignment operator for comparison, etc.

(Tip for that: if doing comparisons in C, write if (3 == a) rather than if (a == 3). That way, if you miss an = sign, you get a compiler error. Yes, I’ve been bitten by that before, silently, in C… the code was legit, the IF statement “worked”, etc.)
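
A quick illustrative snippet showing both the accidental assignment and the “conscious override” idea from above:

```c
/* Illustrative only: the classic =/== slip and how the compiler reacts. */
#include <stdio.h>

int main(void)
{
    int a = 5;

    if (a = 3)        /* BUG: assignment, not comparison; gcc -Wall only warns */
        puts("always taken, and a is now 3");

    /* if (3 = a) */  /* hard error: "lvalue required as left operand" */

    if (3 == a)       /* the reversed form: a missing '=' cannot compile */
        puts("a is 3");

    if ((a = 4))      /* deliberate assignment-as-condition: the extra parens
                         silence -Wparentheses, i.e. an explicit opt-in */
        puts("a is now 4");

    return 0;
}
```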

3 Likes

Thinking about this a bit more, we need to make security “sexy”.

The big problem though is that speed is something you can benchmark. You can optimise, you can point at the end result and say “Look, X is 25% faster than Y, it is better!”.

Right now, we don’t have any standard/easy method for benchmarking “security”, other than after the fact, where software application or platform X gets Y exploits per time period and platform A gets fewer or more. But there are so many unknowns (how heavily it is targeted, etc.) that you can’t really compare. I’d also argue that it is too late (for the end users) at that point…

Perhaps we need to develop some benchmark test suite(s) for hammering on software with malicious or malformed data. Maybe some already exist, but you certainly never see applications benchmarked with them.
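
Something close to this does exist in a narrower form: fuzzers like AFL and libFuzzer hammer code with malformed input automatically. Here’s a sketch of what the harness side looks like, with a made-up parse_record() standing in for the code under test (assuming clang’s libFuzzer):

```c
// Sketch of a libFuzzer harness (build with: clang -g -fsanitize=fuzzer,address).
// parse_record() is a hypothetical function under test, not a real library call.
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int parse_record(const uint8_t *buf, size_t len)
{
    /* Stand-in parser: a real one would live in the code being tested. */
    if (len >= 4 && memcmp(buf, "REC:", 4) == 0)
        return (int)(len - 4);
    return -1;
}

/* libFuzzer calls this entry point with generated/malformed inputs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;
}
```

That only measures robustness against malformed input rather than “security” as a whole, but crashes found per unit of fuzzing time is at least a number you can compare.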

I’m not sure how we achieve this, but the root cause of the shit (in terms of security) software we have today is that there is no easy metric to measure how crap it is. This is a problem we need to fix so that vendors have an incentive (or a metric they can market their stuff with) to suck less at software security. We’d also perhaps find it easier to hold vendors accountable: if they put out stuff that fails the automated test suite, that would be negligent…

(A bit of a tangent, but in line with my concerns regarding C being “obsolete” above: people use it for speed and out of familiarity. If they got reliably nailed for the number of security issues they write with it, they might be less inclined to use it.)

I remember reading an article on The Register arguing that it was cheaper for companies to recover from a hack than it was to install better security, due to the cost of the overhead. So I think that even if you have the perfect language and experts, all of that means nothing while companies still have a financial incentive not to be secure in the first place.

2 Likes

Yup, and that’s kinda the point I’m trying to make above.

Without some sort of benchmark for them to score poorly on, nobody will bother, because marketing can’t push it…