What's the big deal with systemd?

Just because we have gigabytes of RAM now doesn't mean we should be wasteful with it. We throw resources at problems instead of streamlining computing, and that's wrong.


Depends on which one is cheaper, and how critical the application is.

Average system where memory is cheap: systemd.

Mission-critical system to be shot into outer space: something else.

This quote is telling of the prevailing attitude toward technology. Nothing against you; just about everyone feels this way. I just think it's ultimately short-sighted, because whatever was written will have to be rewritten in the future, and it will probably be rewritten with the same shoddy paradigm.

That being said, I think that as we approach hard physical limits, like those of nanometer-scale lithography, we might see a revival of efficiency and tighter code.


I love well optimized code as much as you do.

The reason this attitude exists is that now that resources are cheap, not to mention IaaS, there is not a great desire to keep refining code just for the sake of performance where performance is not critical. A programmer doing that would cost the company more in time and labor than shipping as-is would. Every day not shipped costs the project money.

I agree with this. Now that hardware has nearly reached the limits of optimization, any further gains will have to come from software.


I expect Intel to manage to pull one final trick out of its hat now that AMD has a competitive CPU again.

The only thing really limiting speed, besides software, is moving data in and out of a chip fast enough.


Either that, or things will shift toward being forced to use specific hardware for specific applications.

E.g., needing the latest Intel CPU to run Photoshop because Intel ME says you do.


I would not be surprised by that from Intel, and you could get some really good speed out of specifically designed chips that were just overcomplicated ASICs.

Well, if specifically designed chips came out to accelerate Photoshop or CAD applications as co-processors, I could see that starting to be a thing again.

That would be really cool, but you still run into the issue of how to get data in and out of a CPU.

I believe we may see the first dual-socket, non-business-grade motherboards come to market.


It contains everything, INCLUDING the kitchen sink! Have you looked at its size lately? We exceeded 15 million lines of code several years ago and we are rapidly closing in on 20 million lines! From a security perspective, this is insanity.
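If anyone wants to sanity-check that figure, here's a rough way to count it yourself from a checked-out kernel tree - this is just a quick-and-dirty sketch counting C sources, headers, and assembly, and exact numbers vary by version and by what you decide to count:

    # Rough line count over a kernel source tree (hypothetical sketch)
    find . -type f \( -name '*.c' -o -name '*.h' -o -name '*.S' \) | xargs cat | wc -l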

That isn't what I am suggesting at all. Thompson and Ritchie were inspired, and the last thing I want to do is come across as diminishing their work. But on top of all of their well-deserved adulation, they are virtually worshiped as prophets of the religion of "Do One Thing and Do It Well." All I'm saying is that they didn't have a choice in the matter, and that it was years later, when RAM was (more) plentiful, that people began to both appreciate and wish to emulate this elegant approach to the art of coding ... as well they should.

So in a nutshell, the developers of Unix were not the originators of "The Unix Way." It was simply THE way, because at the time there was no other option. The machines of the day didn't have enough RAM with which to be sloppy, even if they had wanted to be. Now, devising an elegant way to feed the output of one program into the input of another ... that is but one example of why we are so indebted to Thompson and Ritchie.
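A trivial sketch of that elegance, using nothing but stock tools - four single-purpose programs composed into something none of them does alone:

    # Count running processes per owner: list users, sort, tally, rank
    ps -eo user= | sort | uniq -c | sort -rn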

Yes, it's a large project, but it's a lot more modular than it used to be. If you look at the kind of hardware that LEDE can run on, some of those machines have 32 MB of RAM, and there are people further customizing it down to fit and run on 16 MB boxes.

Granted - it was possible to run Windows 98, Office, and some DirectX games on 16 MB of RAM back in the day... but there's so much more you can do with a computer now that the trade-off is IMHO worth it, and there just aren't that many of these low-RAM machines surviving.

When it comes to looking at bloatedness from the perspective of complexity of human understanding, code quality, and security: I happen to work at a large company with lots and lots of code in various languages, produced by lots of people (way more lines of C++ than the kernel, and we do have a small team of kernel devs as well). Because so many people contribute, and because we don't want things broken or insecure, we do a lot of automated testing, a lot of static and dynamic analysis, and a lot of automated code rewriting. The kernel is lacking in these areas, but ironically that makes it relatively easy to work on for people who are developing drivers, care only about their own microcosm of things related to their devices, and just want to send something upstream.

If you were to start enforcing a higher bar for the quality of kernel code without providing the tools (including education and documentation) to meet it, my gut feeling is that the rate of submissions would drop and there'd be a lot more forks and dkms modules that don't build.

What could help security is compartmentalization - creating islands of code within the kernel as sub-projects and tracking their code quality individually, which would also allow the quality of those parts to be improved individually. If a thing like this were integrated with the build system, then as one of that rare breed of actually talented software engineers working on security, you'd be able to get useful insights into where to focus your attention.
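As a very rough sketch of the idea, even today's kernel tooling could be bent in that direction - the directories here are arbitrary examples, and checkpatch is only a crude stand-in for real static analysis:

    # Hypothetical: tally checkpatch findings per "island" of the tree
    for dir in drivers/gpu fs/ext4 net/ipv4; do
        count=$(find "$dir" -name '*.c' \
            | xargs -r ./scripts/checkpatch.pl -f --terse 2>/dev/null \
            | grep -cE 'WARNING|ERROR')
        echo "$dir: $count checkpatch findings"
    done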


Necessity is the mother of invention.
I choose to believe that it was more philosophical than pragmatic… but whatever.
Never meet your heroes.
Insert your cliché of choice here.

systemd-analyze blame lennart.poettering
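(The real invocations, minus the argument, are actually handy:)

    systemd-analyze blame   # units ordered by time spent initializing
    systemd-analyze time    # time spent in firmware, loader, kernel, userspace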


Oh, and I remembered this very funny bug caused by systemd years ago.

Apparently the Linux kernel command-line "debug" option was at some point also triggering debug messages from systemd, and that could flood a system so badly it became unable to boot.

Kay Sievers' response: "Generic terms are generic, not the first user owns them" [1].

Now I’m finally starting to understand the systemd hate.

[1] http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01327.html
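For what it's worth, systemd does document namespaced options for exactly this, so you can get the debug flood without tripping over the generic flag - passed on the kernel command line like:

    systemd.log_level=debug systemd.log_target=kmsg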


@Vitalius
What are your thoughts on this:
https://www.gnu.org/software/shepherd/


That's very cool. I did not know that existed. Though I guess it makes sense; Linux isn't the be-all and end-all GNU system.

Just found that out on Wiki :frowning:

Shepherd takes some inspiration from systemd, another recent init system, in supplying user space functionality asynchronously as services, which under Shepherd are generic functions and object data types that are exported for use by the Shepherd to extend the base operating system in some defined way. Core to the Shepherd model of user space initialisation is the concept of the extension, a form of composability where services are designed to be layered onto other services, augmenting them with more elaborate or specialised behaviours as desired. This expresses the instantiation-based dependency relationships found in many modern init systems, making the system modular, but also allows services to interact variadically with other services in arbitrary ways.
Shepherd also provides so-called virtual services which allow dynamic dispatch over a class of related service objects, such as all those which instantiate an MTA for the system. A system governed via the Shepherd daemon can represent its user space as a directed acyclic graph, with the "system-service" - responsible for early phases of boot and init - as its root, and all subsequently initialised services as extensions to system-service's functionality, either directly or over other services.
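From what I can tell, you drive it through its client, herd. A rough sketch of what that looks like - the service names here are made-up examples:

    herd status            # list services and their current states
    herd start ssh-daemon  # start one concrete service
    herd restart mta       # a virtual service name: dispatched to whichever
                           #   registered mail service provides "mta"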

@Vitalius
Does that mean that Shepherd sucks as well?

I’m not sure to be honest.

That sounds like a good thing, as that is the ideal: service interaction isn't strictly dictated by what PID 1 does, or is.

But then it also says:

The question is: Does my service have to be explicitly designed to work with Shepherd? Or not?

To get my program’s full functionality, do I have to require Shepherd if I design my program with it in mind?

That's the decider. Even if the answer is yes, that's actually fine, as long as Shepherd doesn't attain such a large majority of adoption that all other PID 1 software is basically ignored in favor of it.

Again, the way these things are designed is actually fine and technically useful. It’s how the community handles them that makes them a bad thing.

If GNOME said "we're gonna require logind as a dependency" and every distro maintainer went "then I guess you're not being maintained on my distro," it would have been fine, because GNOME probably would've backtracked; or it wouldn't be available on any distro without tinkering, which means most people wouldn't use it.

That was the catalyst for systemd gaining such a large adoption rate so quickly: something major that everyone used basically said it was required.

If no one ever does that with Shepherd, it’s probably fine.