LTT 1 month Linux Challenge thread

I already have a good capture card that supports Linux. I don’t need to buy another.

As long as you seem like you actually want to learn, and you’re not just ranting because things are different from Windows, this hasn’t been my experience of the Linux community at all.

Well, Linus’ point is to get those people angry again but on what he deems to be the focus for someone who’s completely new. The fight over pacman vs apt is never going to end, nor is the bitterness people feel about developing for Electron where everything’s a webapp.

1 Like

Well that fight is easy, DNF wins :stuck_out_tongue:

1 Like

Not if you hold packages; DNF breaks even further when you do.

Never experienced issues with it but granted I haven’t held all that many packages in my 3 years.

Had to do it all the time for Nvidia and Blackmagic proprietary blobs last time I tried Fedora 27.

If you run dnf -y install foo -x 'nvidia*' -x 'kernel*'
you are pretty much good to go. Now you can even have x86_64 and i686 packages side by side with no issue (hello, gstreamer!). I do think apt and pacman should have protected packages though.

You can exclude packages globally in /etc/dnf/dnf.conf without having to do it for each transaction. There’s also a dnf plugin that does it for you, I think.
Whether that leads to other issues, I don’t know.
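For reference, the global exclude in /etc/dnf/dnf.conf looks something like this (the package globs here are just illustrative, not a recommendation):

```ini
# /etc/dnf/dnf.conf
[main]
gpgcheck=True
installonly_limit=3
# keep these package globs out of every transaction
excludepkgs=kernel*,nvidia*
```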

1 Like

Sure, but in the example I listed, you wouldn’t expect to exclude the kernel and a graphics driver for more than a test period. Having to edit dnf.conf and then change it back wouldn’t make any sense.

I think people don’t really understand what Linus and Luke are trying to get across.
What they’re trying to show is what the journey would be like for a new user switching over to Linux.
I think he repeated that many times in his videos.
The key is to show the out-of-the-box experience with Linux and the hurdles a new user
would likely run into when trying to use their system the way they’re used to in the Windows ecosystem.

3 Likes

Yeah, that’s where my learning curve led, because there wasn’t an equivalent to apt-mark on Fedora.

The issue is there’s no LTS kernel, no way to stay on a specific kernel revision without being pushed to the next major revision. This is a nightmare with both the Nvidia and Blackmagic proprietary drivers.

There actually is: dnf versionlock.
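For anyone following along, holding a kernel with the versionlock plugin looks roughly like this (the plugin package name is the Fedora one; it may differ on other RPM distros):

```shell
# install the plugin (Fedora package name)
sudo dnf install python3-dnf-plugin-versionlock

# lock the currently installed versions of the kernel packages
sudo dnf versionlock add kernel kernel-core kernel-modules

# inspect or remove the locks later
dnf versionlock list
sudo dnf versionlock delete kernel
```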

Well there is, and that’s simply holding the package :stuck_out_tongue:

That said, if you want actual LTS (and not just the Kernel), that’d be CentOS/Rocky/Alma/RHEL.

Running LTS Kernel on Fedora is the same as running LTS Kernel on Arch. It’s just the wrong distro for the use case.

1 Like

Then yeah, I prefer to use 20.04 LTS on my systems to avoid the Nvidia and Blackmagic breakage.

I would use Fedora for an all-AMD system like the STRIX G15 AMD Advantage edition.

It’s unfortunately the only AMD Advantage version laptop for sale in Canada, nothing else, that’s it.


Sure, and that’s perfectly fine. Every distro has its use case or it wouldn’t exist.

Going back to the start of the whole thing though, the kernel not really being available as LTS on Fedora (although I’d wager there are COPRs to fix that) isn’t really an issue with the package manager.
And holding packages is certainly possible in various ways, so on that front DNF isn’t inferior to apt either.
I’ve personally had fewer issues with DNF so far than I had with apt back when I tried Ubuntu; it just feels more stable. And the spec files for RPM also seem more thought out than the deb recipes or pacman scripts I’ve seen. But maybe that’s just a matter of perspective.

1 Like

BTW, let’s actually talk EposVox and Garuda. Would be a nice change of pace for once.

The main reason I avoid rolling releases is that my daily drivers need to be stable for a long time. I used 18.04 for one and a half to two years without issue because I set my expectations correctly for what I wanted to use it for.

I’m glad he changed to Kubuntu; that will help a ton with getting things more stable. And Nouveau being trash was the first thing he encountered when trying to boot the installation media… yeah…

Unfortunately the toxic “user error” people are exactly the people Linus is trying to make angry with this series. And Adam’s got a point that it will never be the “year of the Linux desktop” because of people like that, but I’m so glad Linus is getting those people out to argue because sooner or later they need to realize they themselves are halting progress.

1 Like

Eh, I do understand that, I just profoundly disagree with it. As a new user myself I find it insultingly patronising that so far his difficulties in emulating my experience have involved not reading what the terminal has said and blindly deleting his desktop, for example. I would honestly prefer it if he held his hands up and said “sorry I’m not actually a tech expert” rather than this masquerade of manufacturing problems.

3 Likes

Rolling-release distros can be very stable, but there are also common gotchas that will break your system. That’s why I would generally call rolling-release distros “advanced” or “not beginner friendly”. There’s an ongoing effort to make beginner-friendly rolling-release distros (like Manjaro and Garuda), but imo they just aren’t at the same level as things like Fedora, Ubuntu, or Mint in that regard.

Rolling Release vs “Stable”

It looked like Epos Vox ran into dependency hell. The traditional Linux shared-library model forces you to keep all of your applications’ dependencies in sync. If you update and something is out of sync, things are going to break.

On “stable” distros like Debian, a lot of work goes into keeping the packages in the core repo working together so that a user can’t break the system on update. Sometimes, however, things are simply out of sync and there isn’t a whole lot they can do about it. The package manager should still detect that, though, and refuse the update without removing packages or what have you.

On a rolling-release system there isn’t the same work put into keeping packages working together. The idea is that packages should always use the most up-to-date libraries. For very common packages in the core repo this works out without many issues, and most of the problems are usually bugs in the packages themselves (which is why Manjaro holds packages for a week before introducing them, though that has proved to cause its own problems). Even if you’re only using a small set of applications, it’s still advised to check the wiki before updating for any known issues, which can be a hassle, but if you do it you will likely have a rock-solid system. And again, in theory, if there are any dependency issues the package manager should detect them and prevent the update or install or what have you.
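On Arch that cautious-update routine boils down to a couple of commands (checkupdates ships in the pacman-contrib package):

```shell
# preview pending updates without syncing the local database
checkupdates

# check https://archlinux.org/news/ for manual-intervention notices,
# then do a full upgrade; avoid partial upgrades (-Sy without -u)
sudo pacman -Syu
```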

Garuda

Now when we get into all of the packages that Garuda installs… well honestly it looks like a ton of stuff and I haven’t really looked through it to see what all is there. Like Epos Vox mentioned in his video, a lot of it is stuff that’s sane for a gamer to have and in theory should provide a good experience. The problem is that it seems to be installing stuff even from the AUR, which is much more likely to have mistakes in flagged dependencies and whatnot that can cause an upgrade to go through even though the package can’t use the new version of a dependency. AUR packages are maintained by the community, not by Arch. If you know that what you’re installing is from the AUR then it’s a little easier to maintain, but an experienced Arch user knows there’s a maintenance burden on your system any time you install AUR packages. Garuda attempts to treat AUR packages as first-class citizens, which is good in the sense that there are more things there you might actually want to use, but bad because the AUR is not a first-class citizen.

Epos Vox mentioned there were GPG key issues during the update as well. This is a fairly common thing to run into with AUR packages. The idea of GPG-signing packages is that you can verify they came from someone you trust. Generally this means importing a GPG key fingerprint given in the AUR package file. Ideally the public key is on whichever keyservers you have GPG configured to use, the key downloads, and you can set the trust level. These being community-maintained packages, that just isn’t always the case. Sometimes you have to go find the GPG key yourself to import it and set the trust level. My guess is that Garuda ran into this issue and couldn’t get a key, resulting in not being able to update a package. Then, when that package isn’t updated and everything else is, issues can pop up, which it seems they did for Epos Vox.
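When a build fails signature verification like that, the manual fix usually means importing the key yourself (the fingerprint below is a placeholder; copy the real one from the PKGBUILD’s validpgpkeys array):

```shell
# fetch the maintainer's public key from your configured keyservers
gpg --recv-keys <FINGERPRINT>

# optionally sign it locally so it's treated as trusted
gpg --lsign-key <FINGERPRINT>
```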

This is imo entirely a Garuda issue and not something Epos Vox did wrong. In fact I think most people would find it hard to fault Epos Vox for what he did, compared to Linus, where there’s definitely some debate. Granted, I personally think the things Linus did were reasonable for someone at his knowledge level. Keep in mind that he’s not an OS/software expert; he’s a social media expert who is a tech enthusiast, and what expert-level tech knowledge he does have is more related to hardware. Since he’s a tech enthusiast I would put him squarely in the “knows enough to be dangerous” category when it comes to doing things on an unfamiliar platform. The point, though, is that I might be more forgiving of a user than many people, and I’ll own up to that.

Anyways, Garuda is trying to simplify the installation of all of these gaming-related things but didn’t properly detect and roll back issues in the update process, issues caused by the way it does things on top of Arch. Garuda is promising, maybe even more so than Manjaro ever was, but it still has some growing up to do before I could actually recommend it to a beginner. I personally think that if it wants to treat things from the AUR as first-class citizens, it needs to maintain its own repo containing built versions of the AUR things it considers essential, so it can ensure things like the GPG issue don’t come up. Maintaining their own repo won’t fix everything, but it should still increase update stability substantially.

The future of user friendly rolling release: Steam OS 3.0

Now to switch gears a little, steam os 3.0 is going to be interesting to look at once we have access to it. One thing that they recently talked about is that they will have an immutable root file system (not sure if that’s steam os 3 as a whole, or just the steam deck).

I think this is a great idea because what they’re doing is making the Steam Deck an appliance. To ensure there aren’t issues during updates and whatnot, they’re making sure that any core system configuration is done in a way they expect (like through Steam instead of /etc files). Sure, you could still do user-space configuration they don’t expect in $HOME/.config, but at no point will you ever have a system that can’t boot, because they’ve ensured the base OS works. If you have user-space issues it can boot into a system/safe user that doesn’t have any configuration applied other than what’s in the immutable file system.

It also prevents users from installing more programs to the root system which helps to avoid dependency hell like mentioned before. This means that when the steam deck updates it can be much more confident that some random program the user installed isn’t going to break the core system.

They also don’t intend this to prevent users from installing programs at all; who would want to use an OS they can’t install programs on? Instead the expectation is that non-core packages are installed through something like flatpak. The advantage is that all of a flatpak application’s dependencies are self-contained, so updates to the core system shouldn’t break these applications. In my experience, using things like flatpak for non-essential applications on a rolling-release distro greatly increases its stability during updates.
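A minimal sketch of that flatpak workflow, assuming the usual Flathub remote (the VLC app ID is just an example of a non-core application):

```shell
# add the Flathub remote if it isn't configured already
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# install and run an app; its runtime and libraries stay self-contained
flatpak install -y flathub org.videolan.VLC
flatpak run org.videolan.VLC

# flatpak updates are independent of core system updates
flatpak update -y
```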

My hope is also that valve does some kind of hold on packages in its core repo (somewhat similarly to what manjaro does). I suspect they do plan to do this with some kind of automated testing that tests the state of adding the next wave of releases to the core and if it succeeds then they would add those packages in thereby allowing steam decks to update to a known good state. However, I haven’t seen anything talking much about how they’re maintaining packages so I could just be projecting my personal desires here.

My closing thoughts on rolling release stability

So yeah, in my opinion a rolling release distro can be stable. If you’re running arch the best way I’ve found to maintain a stable system is to:

  1. Check the arch website for known issues before updating.
  2. Limit programs installed to the core os.
  3. Try to use flatpak for applications that aren’t related to core system functionality.
  4. Prefer flatpak over aur packages.

Now to be fair, I do not check the wiki before updating. I figure the likelihood of me updating while there’s an ongoing issue is pretty small, and even if there is an issue I’m experienced enough that it’s probably easy to fix after the fact. That, and I’m privileged enough that even if I have something critical to do I can just do it on another computer in the worst-case scenario.

I also realize that using flatpak defeats some of the purpose of a rolling-release distro, but you can usually get up-to-date flatpak builds of applications, and for me the rolling-release part matters more for updates to core system packages than for any individual application. Also, which apps I use flatpak for usually comes down to a judgement call about how likely I think the app is to be well maintained. So I definitely do still install things to the core OS that aren’t essential/core system functionality.

In the end, though, I do think that to have a stable system on Arch the user must know what they’re doing and manage the system properly. This comes down to the ideology of the distro itself, and I personally want Arch to continue operating like this. For most people, doing the things needed to maintain a stable system is probably too much of a burden for them to enjoy Arch, and that’s ok.

The thing is, I’m also really excited about the future of user friendly rolling release distros. I don’t think that the things that cause arch to not be a user friendly distro are insurmountable by arch based distros. I actually think that how arch operates means that in the future arch based distros will be just as popular as debian based distros once the tooling around managing an arch system improves. It’s just that creating tooling to manage an arch system is hard due to the flexibility of arch, so that tooling is still evolving and isn’t amazing yet.

I could even see the majority of consumer computing being done on rolling-release distros at some point in the future. I do think there will always be higher demand for Debian- and Red Hat-based distros in the enterprise though, due to being able to get support from companies like Red Hat and Canonical. The way those distros are maintained is also a much better fit for business ideology. Think of it like Windows Enterprise vs. Windows for everyone else: consumers generally prefer new features and will sacrifice 0.1% uptime for them, while a business could lose millions of dollars from that 0.1% uptime difference. So Windows Enterprise is months to years behind the rest of Windows.

TLDR

  • “Stable” distros like debian are more stable than rolling release for most users right now.
  • Epos Vox did nothing wrong, his issues were valid and caused by the distro.
  • The way Garuda supports packages that it seems to consider essential is flawed and will hopefully be improved.
  • Steam OS 3.0 sounds promising from stability and user friendly perspectives due to some fundamental design decisions even though it’s based on a rolling release distro.
1 Like

Interesting point on SteamOS 3.0, but then the question becomes: does QEMU work at all in a flatpak/snap? If it doesn’t, and the root directory is locked, wouldn’t it make more sense to put everything in /usr/local rather than /usr, like macOS does?

Flatpaks and Snaps that depend on libGLX also have ENORMOUS amounts of problems on Nvidia systems.

I don’t think qemu itself could work in a flatpak or snap, it’s too dependent on the system. Maybe it could work in an appimage or with sandboxing disabled on flatpak/snap, but even then it’s a bit of a stretch.

As for installing to /usr/local, that’s possibly something SteamOS 3 will allow you to do. We don’t really have a ton of details on it, so it’s hard to say. /usr/local is still a shared directory, so it may be that SteamOS 3 wants to completely isolate users and you’d have to install stuff like that to $HOME/.local.

I didn’t realize there were libGLX issues in flatpak, but that doesn’t surprise me, because Nvidia… Do you have any articles or anything talking about it? I’d like to read up on it and see what’s going on there.