Suck in Linux

Samba has always been a pain in the ass to get to behave well. I understand it’s almost always some permissions thing where you have to map a Linux user to a Samba user… But it’s a pain in the ass, and most of the older GUI tools that Just Worked™ are now incompatible due to Python versions or systemd vs. init.
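
For anyone fighting this today, the mapping dance is roughly the following (a minimal sketch; the username and share path are made up for illustration):

```
# Samba keeps its own password database, separate from /etc/passwd,
# so every Samba user must map onto an existing Linux account.
sudo useradd -M -s /usr/sbin/nologin alice   # hypothetical account, no shell login
sudo smbpasswd -a alice                      # add a matching Samba password entry
sudo smbpasswd -e alice                      # enable the Samba account

# Linux permissions on the shared path still apply on top of smb.conf,
# so the directory itself must be accessible to that user as well.
sudo chown -R alice: /srv/share              # hypothetical share path
```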

ALSA and PulseAudio I appreciate, only because I started using Linux in 2004-ish and for a few years only had OSS support for my sound hardware.

Getting full duplex (recording and playing at the same time) should not be an achievement, but I recall it being one at the time. Not having to use xmodmap to get the extra buttons on my mouse to work is nice. I sure as fuck don’t miss the ndiswrapper and b43-fwcutter bullshit.

I remember when LILO seemed to be the standard, then came the new and fancy GRUB… which was cool because you could fiddle with it more easily, since it wasn’t all basically just a binary blob a config tool shat out onto the MBR. It also could loopback-mount and boot ISOs and stuff.
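
For reference, the GRUB 2 loopback trick looks something like this (a sketch only; the ISO path and the kernel/initrd paths inside it are hypothetical and vary by distro):

```
# grub.cfg menu entry that boots straight from an ISO file on disk
menuentry "Boot some-livecd.iso" {
    set isofile="/isos/some-livecd.iso"    # hypothetical path to the ISO
    loopback loop $isofile                 # expose the ISO as a virtual device
    linux (loop)/casper/vmlinuz iso-scan/filename=$isofile quiet
    initrd (loop)/casper/initrd
}
```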

I used to be skeptical of EFI vs. BIOS because it’s higher-level code, much bigger, with lots more room for security fuckups… Plus there’s more stuff the OS can diddle around with and change in EFI than in the old BIOSes. But EFI booting is way easier to fix than the older MBR boot stuff was. Mouse/keyboard support, massive drives, and onboard hardware diagnostic utilities are all nice to have.

I can say Linux is significantly less janky than when I started using it, but it is still far from perfect.

If you’re annoyed with GRUB or whatever, try something else. Freedom of choice is awesome; the only cost is the time investment in finding something that works for you.

Tech and software change all the time; they’re just tools… If they do the job, use them until you find something that does it better. You just gotta weigh the time investment cost of the stuff. Sort of like the saying: if you keep having to do it over and over, automate it. If something keeps breaking or is inadequate, it may be worth shopping around for an alternative.

Occasionally distro hopping in a VM isn’t a bad idea either.

2 Likes

Poof, it’s gone. Now you don’t comply with standards, and Netflix and other streaming services go back to not allowing playback on Linux.

HDCP is shit, yes. But supporting a shit standard is not bad in itself, if the industry has adopted it. If we want Linux to have any staying power, we will need to support this crap. You can choose not to use it.

1 Like

Strawman much?

If you look at the list I put up towards the top of this thread you’ll see I explicitly mention “binary blobs (esp. video)”. AMD, although it does better than Nvidia, still isn’t doing what I consider to be “the right thing”. The right thing would be for the GPU manufacturers to release the full hardware spec along with each of their cards. Full. Not partial. FULL. Then the open source community could create its own drivers to make the cards work.

The same thing applies to all device manufacturers (esp. Bluetooth, Wi-Fi, and printers).

It is the complete lack of device documentation from hardware manufacturers that makes open source development of performant, reliable and trustworthy drivers difficult, if not impossible. As long as companies expose only a fraction of the capabilities of their products, not only can we not trust those products, but we cannot unleash their full power either.

Reverse engineering shows, time and time again, that device functionality is routinely hidden from view — even when open source drivers are supplied.

So, let me state this plainly so as not to challenge your comprehension skills: I do not support corporate device drivers. Full stop. Open source or closed source, it doesn’t matter. Full stop. I support community-developed open source drivers. Full stop. If hardware manufacturers released full device specs, we could write our own drivers and Linux would have device support just as good as (if not better than) Windows. Full stop.

It is sad that your view of the world is so narrow that you find such things impossible to believe. I wish I could help.

1 Like

Okay, how is Microsoft adding the bits needed for Linux to work well on Azure a bad thing?

And the card will be supported when it’s 2 generations old. Nice!

Well, that’s just not the world we live in. Have you noticed the world is giving corporations more power, not less? I don’t like that aspect of it, but I have no problem with pragmatism in computing.

You have the right to your opinion.

My view is not so narrow as yours.

See, I believe in pragmatism. I make my system as open as possible, but I don’t care if there are a few blobs here and there. That’s fine. That’s also life.

Now I’m curious. Would you take the option of rejecting all corporate contributions to Linux if it meant we had to rely on reverse engineering of hardware that had no fully open hardware spec?

1 Like

I don’t know enough about Azure to have an opinion.

How long it takes community-developed drivers to be updated depends mainly on how promptly the hardware manufacturers release the documentation for their products. If they released both simultaneously I doubt support would lag by more than a few months.

Once again, you set up a strawman for a sound beating. Instead you should be asking the question: “How do we reprogram the apathetic individuals of the world to DEMAND full hardware specs so that the world can be a better place?” Weak minds are what allow corporations to seize ever more power for themselves.

No, I set up the realistic option. Corporations will say, “Fine, fuck it, Microsoft lets us do it our way, so we will just ignore Linux.”

Just because you don’t like reality doesn’t make it a strawman.

That’s quite the revolutionary talk, comrade.

1 Like

That was the actual state of affairs for a long time at the very beginning, if you were around then and can remember. We managed. The corporations ultimately realised they could change their approach, open up, and empty more wallets. That’s how we got to where we are now. Unbridled greed is evil, but at least it’s predictable.

But let’s suspend disbelief for a moment and entertain your fantasy scenario… If Nvidia/AMD completely withdrew Linux support, then all it would do is enhance and accelerate projects like RV64X… well, umm, “Thanks!” are in order, I guess.

We no longer live in a world where the historically dominant players hold the same monopoly/duopoly powers that they did in the past. Our options are ever-increasing. Heck, even Intel is getting back into dGPUs with the Xe. Fabless has changed the game.

The asteroid has already impacted. Jurassic-era thinking will no longer serve you or save you.

My point is Intel’s proprietary DPCD backlight driver. Nvidia’s unwillingness to work with kernel maintainers. Or Google and Intel’s inclusion of HDCP DRM into the kernel. Maybe I worded this wrong, but I was more or less trying to agree with what level1 said earlier:

At any rate, my idea of corporate subversion of the code-base may not align with yours or even his.

1 Like

Seems like some of you folks don’t know the full story behind the Uni of MN incident. So you know, they contributed hypocrite commits (i.e., commits that solved some issues but also intentionally introduced other bugs). For security research purposes, that’s not a bad thing to do. However, the way they contributed is what earned them so much retaliation.

Some hypocrite commits were submitted without informing any of the kernel devs, and especially without getting consent from at least one of them (they could have asked just Linus or just Greg Kroah-Hartman for permission and tested the other devs). Some commits were accepted, but whenever they got a reply that their contribution had made it in, they immediately replied back with “hey, that’s actually not good, here’s the actual commit, replace the bad one.” I would bet that if they had asked for permission to do the study first, they wouldn’t have gotten banned university-wide, if at all.

Usually a company asks a pen-testing team to come in and test a part of their system, with clear restrictions on what can and cannot be penetrated. What UMN did is equivalent to breaking into a company’s systems and then posting a public statement afterwards for everyone to see, including the company: “hey, this firm has a vulnerability here, here and here.” Sure, it is “white hat,” in the sense that they didn’t abuse it for their own benefit, but they did not ask permission, which makes it more a case of grey-hat cracking (as in, a grey area). Keep in mind, if you do that to an unwilling business, you may do them a favor in some way, but on the other hand you may bring negative publicity down on them, with the worst-case scenario being them going under because of your stunt (albeit that is kinda their problem).

The University of Minnesota did not allow the hypocrite commits to actually be merged into the kernel. But the way they approached things was wrong, and they got a well-deserved backlash for it (i.e., getting banned from contributing).

Aside from that, I also think their methodology was flawed. Coming from a “trusted contributor,” their commits obviously got gentler treatment from the kernel developers. I am almost certain that if they had sent the hypocrite commits from random Gmail accounts, they would have been checked more in-depth. Also, their response was stupid. The kernel devs apparently sent them a private letter with the steps to follow in order to get unbanned (around Friday), but they replied Saturday or Sunday with an apology letter (more of a “we’re sorry we got caught”), hoping to get unbanned.

Took a minute to find it, here’s the Linux Kernel Mailing List from Greg KH:
https://lkml.org/lkml/2021/4/25/146

I don’t believe what UMN did was entirely wrong, but the way they approached it was definitely wrong and they got what they deserved. Yet some people make it sound as if they sent malicious code in order to exploit systems, rather than commits meant to research the security / code-review capabilities of the Linux kernel maintainers (however flawed the study was).

Now the kernel maintainers have to go through the previous commits and make sure nothing malicious made it into the kernel (you don’t know whether the security division researching the patching process was the only group from the uni doing shenanigans; there could be others who contributed malicious code for “study purposes”).

2 Likes

Anyone interested in some fuel for this fire can read the excellent “Major Linux Problems on the Desktop” by Artem S. Tashkinov. While I disagree with some details, most of the things he brings up are valid criticism that needs to be addressed at some point in the future. I see it as the unofficial system bug tracker for Linux desktops as a whole. Not perfect, but good enough. :slight_smile:

So the latest “stable” Mesa borked my GPU (5600 XT) and now I have to wait for a fix. Arch users fixed the issue and shipped mesa 21.0.3-2 in about 2 days; Fedora, the distro I’m using, shipped the bad version around the 24th of April, I believe, and still has not sent out a fix…

Other things that grind my gears… how come most desktop environments still do not have a decent compositor?
I can’t recommend distros to normies without one. If they so much as catch a whiff of screen tearing, they will immediately start to complain. Also, distros without a proper upgrade path… seriously, peppermintOS and Zorin (dunno if this is still accurate). What is the point of installing a distro without an upgrade path? I can’t recommend normies a distro that comes with an expiration date and no option to renew.

Because the people writing desktop compositors usually have a very limited grasp of how graphics work at the low level.

Tearing happens because the frame buffer is being updated while it is still being scanned out to the display. This can still happen with double- or triple-buffering techniques if the frame updater code is slower than the screen refresh rate, and it will happen much more frequently at bigger resolutions and higher refresh rates.

Let’s say you have one of those fancy 4K displays running at 144 Hz. At 144 updates per second, that is around 6.95 milliseconds to draw a single frame. Put differently, that is 6.95 milliseconds to transfer the full system graphics buffer, which at 24-bit colour is 24,883,200 bytes (23.73 MiB). Usually you also want some signalling overhead, so that 6.95 ms in actuality becomes something more like 6.25 ms per screen.
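
For anyone who wants to sanity-check those numbers, here’s the arithmetic in bc (assuming 3840×2160 at 3 bytes per pixel, which is what the figures above imply):

```
$ echo '3840 * 2160 * 3' | bc                    # bytes per 24-bit 4K frame
24883200
$ echo 'scale=3; 1000 / 144' | bc                # ms per frame at 144 Hz
6.944
$ echo 'scale=2; 24883200 * 144 / 1024^3' | bc   # raw pixel traffic, GiB/s
3.33
```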

Now also involve the V-Sync, which happens when the screen is, like, 0.5 ms from starting to render your picture. Awesome stuff! :slight_smile:

Now try to write a compositor that can adhere to these strict requirements, and you too will understand the challenges involved. A compositor is nothing more than handling a bunch of 2D texture maps, with client-side-controlled borders. It’s a fun little weekend project in Vulkan if nothing else. :slight_smile:

Chrome OS’s Ozone does an excellent job and is open source. Perhaps forking that one would be a start.

https://chromium.googlesource.com/chromium/src/+/master/docs/ozone_overview.md


It can work with Wayland too.

Feature, not a bug

This is a big problem, and one that anything that gets popular will always face.

It’s easy for a company to influence, or even entirely set, the direction of (many) community projects, or even the entire ecosystem, by means of money (hiring devs, and seating people on the boards everywhere).

This is part of the subversion part. RedHat flexed its influence on the community, with the result you witnessed. The reason, of course, was to drive systemd adoption.

This is just the corporatisation of FLOSS at work.

Mostly a problem because people don’t understand the UNIX file hierarchy, and at quite a few points there have been attempts at “fixing” it.

Insert XKCD about competing standards here.

GNOME only cares about GNOME; that’s why adoption of GTK3 was so slow.

Ever since GNOME 2, DEs, especially the ones driven by major corporations, have just been chasing whatever the latest hype in UI design is. So instead of building on what they have, they just redesign everything every other year or so.

Consequently, I’ve ditched DEs for Fvwm, which I’ve set up to suit my workflow, and mine alone. And it’s been working great for the past 20-ish years.

Many of the newer developers come from the MS world and didn’t leave their baggage at the door. Systemd is a prime example of this, with its “need” to do literally everything in a single application.

I’m getting old, I guess. I still have a hard time considering Ubuntu a “serious” distribution past its ease of use for new users.

If an init system “is” Linux, I’d argue we have a serious problem.

I imagine all the GRUB hate is aimed at GRUB 2? Personally, I think GRUB 1, with its weird but actually understandable configuration, was miles better than either.

Also, whoever decided to start numbering things from 1 in GRUB 2 needs to be kept away from computers (and/or math).
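
For anyone who hasn’t been bitten by it yet: GRUB 2 counts disks from 0 but partitions from 1, so a root line ends up looking like this (illustrative snippet):

```
# GRUB 2 device naming: disks are 0-indexed, partitions are 1-indexed
set root=(hd0,gpt2)    # first disk, SECOND partition
```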

You, on your own, do not have the ability to skew an entire project’s vision. If your contribution is considered unwanted, for whatever reason, it’s not going to get accepted.

This is quite unlike large corporations, which can basically take over projects by hiring all of the major developers, or just start projects that they control entirely (something RedHat is fond of doing) and that tend to die outright once the corporate support dries up.
Hell, RedHat even has the audacity to declare projects dead outright because it lost interest, without any consideration that others might want to continue maintaining them (see Spacewalk).

The reason for that seems quite logical: it’s a lot more time-consuming to reach consensus in a “real” community project, and most corporations don’t want to spend that time. So yes, it makes sense, but it’s also very damaging, as contributions aren’t scrutinized to the degree they would, and arguably should, be in a “pure” community project.

3 Likes

The biggest problem with Linux is the glut of old/outdated information and the lack of good, curated documentation on simple system-maintenance tasks.

The Arch wiki is okay, but it has too many cross-links and way too much info in each article. Reading man pages can be a pain, and tracking down fixes for obscure issues with dmenu or systemd can be fairly difficult.

Finally, the community answers the same questions so often that the real experts get bored/annoyed with doing it and eventually give up.

Our best solution is to put out documentation reasonably often and make sure it’s noted which version of Linux it’s for.

3 Likes

Don’t be perfect, just be good enough

2 Likes

What you’re referring to as Linux, is in fact, GNU/Linux/Systemd, or as I’ve recently taken to calling it, GNU plus Linux divided by Systemd.

3 Likes

Throw in logrotate to keep things civil, maybe? I’d be more worried about all the Canonical spyware in Ubuntu though? Hmm, not sure.
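
If it helps, a typical logrotate drop-in looks something like this (a sketch; the service name and paths are hypothetical):

```
# /etc/logrotate.d/myapp  (hypothetical service)
/var/log/myapp/*.log {
    weekly            # rotate once a week
    rotate 4          # keep four old logs
    compress          # gzip rotated logs
    missingok         # don't error if the log is absent
    notifempty        # skip rotation when the log is empty
}
```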

For the past year my development work has been on Mac (all data gets backed up to my FreeNAS box), and my current iMac is on its 7th-year deathbed. Chances are I’m moving to a mix of Win10/Fedora for the time being; or alternatively Win10 plus my actual code running in a VM on my Dell PowerEdge.

The latter would involve running vim/VSCode/RubyMine etc. on the Win10 side, but the code/files would actually run on the VM-hosted Linux instance, and the data would sit on my FreeNAS server. Need to ponder this one further though.

1 Like

Hahah, love this! Well my personal code never gets peer-reviewed, so chances are yeah :wink:

1 Like