Okay, I am just going to start by saying I am in no way a Linux expert, and I don't know how you'd even measure when you've reached that point, but I would like to preface this by saying: "I don't know everything, but I know what I do."
… Okay, let's proceed with the after-midnight rant, wine in hand.
I do not like Ubuntu in the enterprise, and I will make a bullet list of some of my qualms:
* The updates don't feel like they're curated nearly as well as they should be, given that a company is behind the distribution now.
* The Ubuntu repos don't seem to segment what is a feature update and what is a security update.
* Snap packages (I'm too drunk to explain, but if you know, you know).
* Netplan. I hate it and think it is overcomplicated, not because I don't know how to use it, but because I don't see the benefit of it in the enterprise.
I don't hate on corporate GNU/Linux distributions; I think Red Hat serves a big role in the enterprise, and I actually like using it, since RHEL still provides some useful technologies that are helping the platform overall.
However, I don't feel that Ubuntu Server provides anything other than a lack of stability.
Also, let me say that I WOULD run Ubuntu desktop, but I would never choose to run Ubuntu Server in the enterprise if I had the choice.
This is not me crapping on Ubuntu in the enterprise, as I think other options do need to exist, but I do wish it were a stable platform like RHEL or Debian. I'm interested in your thoughts, not necessarily a debate, so if you have reasons you choose Ubuntu in the enterprise, please by all means let me know and I may learn something. If it's just argumentative, though, keep it to yourself, please.
I think the big deal-breakers for me are snap and netplan as well, and yes, it's because I don't know how to use netplan. I don't want to learn netplan.
We already have two great* options for network management:

* systemd-networkd. Sure, bitch about feature creep all you want, but you can't deny it does a good job.
* NetworkManager. It's been around for ages, and I cannot find a single reason to dislike it: it's well supported in automation tools like Ansible, it's easy to configure programmatically, and it's been rock solid the whole time. (A rough side-by-side with netplan is sketched below.)
And if those are too much for you, roll your own.
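For anyone who hasn't had to touch all of them, here's a rough sketch of the same static-IP setup under each tool. The interface name, addresses, and file paths are made up for illustration; adjust to taste:

```
# netplan (Ubuntu): drop a YAML file in /etc/netplan/, then run `netplan apply`
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.1
      nameservers:
        addresses: [192.0.2.1]

# systemd-networkd: /etc/systemd/network/10-eno1.network,
# then `systemctl restart systemd-networkd`
[Match]
Name=eno1

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.1

# NetworkManager: one nmcli invocation, no file editing at all
nmcli con add type ethernet ifname eno1 con-name eno1 \
  ipv4.method manual ipv4.addresses 192.0.2.10/24 \
  ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.1
nmcli con up eno1
```

And remember, netplan isn't even its own backend; it just renders that YAML into systemd-networkd or NetworkManager configuration anyway.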
Seriously though, who wants to loopmount filesystems to launch kubectl? Any takers?
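If you've never actually looked, it's easy to see for yourself; every installed snap is a read-only squashfs image sitting on its own loop device:

```
# list the squashfs mounts under /snap (one per installed snap revision)
findmnt -t squashfs

# and the loop devices backing them
losetup -a
```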
As for Debian, complain about its sluggishness to adopt new things all you want, but it's one of, if not the, longest-lived Linux distros that isn't managed by a for-profit corporation. It's a really impressive project, and it's damn stable.
I've recently moved from Rocky Linux to Debian on my servers, and I'm very happy I did; everything just works.
That said, I can't recommend it for the desktop. Package availability and recency are a bit lacking for some desktop applications, and that's a problem for me.
I have Ubuntu Pro seats professionally, and I have been reducing the number in service since Trusty. New systems entering production are Debian Stable with backports.
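For anyone who hasn't set that up before, a minimal backports recipe looks roughly like this (bookworm used as the example codename; swap in whatever stable is when you read this):

```
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian bookworm-backports main

# backports sit at a low priority, so nothing is pulled from there
# unless you ask for it explicitly with -t
sudo apt update
sudo apt install -t bookworm-backports some-package
```

`some-package` is just a placeholder; the point is that the rest of the system stays plain stable.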
I am not a fan of snapd, and especially not a fan of how Canonical is handling the transition within the core system to snap-first. I don’t like seeing Canonical-pushed ads and news in my command line. A history of decisions like that has broken my trust.
In contrast, Debian is designed to be a good upstream, and I feel that’s what they are. That provides value to me and my organization, not the latest ultra-scale Kubernetes also-ran.
> yes, it's because I don't know how to use netplan. I don't want to learn netplan.
This is a very real factor for me as a Linux bittervet. I still have misgivings about systemd, but I have adopted it into my toolkit because it’s extremely relevant and does some awesome things that nothing else does.
I've begrudgingly embraced Wayland and PipeWire because they do some novel things well, too.
I am not against change and progress, but I think there’s something very very wrong if I need to learn a new fucking subsystem every LTS release or else things that work stop working.
It’s enough mental overhead that I’ve got OpenBSD-based skunkworks projects. Give me one good reason to do it, and I’ll rolling deploy Tux out of my misery… grumble
Testing does not receive security updates, and should not be used as a daily driver. If stable + backports isn’t current enough for you, sid is the recommended target.
I completely agree with you on NM; I think it's robust and stable. I'm also finally happy that someone isn't shitting on systemd. I think systemd is amazing and should be a standard, as I like having some level of similarity across distributions. I do use Debian on the desktop, but for my work I'm normally just working on documents or remoting into systems. I don't need the latest and greatest, and if I need something newer I can compile it from source.
Here's my analysis: the lack of security updates for testing is a distraction, because highlighting and prioritising security updates don't make sense when that edition of Debian is all about bug-fixing. That's bugs in the Linux kernel sense, where a bug is a bug even if it has security implications. An example would be the xz-utils package, which was replaced before I realised I had the exploited edition. If you trust the security channel for stable, you already trust the process which uploads fixes to testing, and some of those will have security implications and expedited fixes.
If you want newer editions of apps, Debian Testing is going through the churn of preparing for a freeze-and-release, so there are updates every day, and some will have security implications. If there's a flaw to Testing, it's that it can have flag-day-type breakage, where a large platform migration happens or a swathe of packages comes through and there's a loss of compatibility: the kind of unintended consequences that cause large crashes and recovery work. (My approach to this: don't update for a while after the release freeze has ended.)
I can live with these complications and have lost track of how many years and Debian releases I’ve done this. It might not be for you.
Feel free to make a thread for it; this thread is clearly titled.
Yes, but there are vast differences.
If that works for you, awesome. It doesn't resolve the package availability issues I ran into the last time I gave Debian a stab, so I just chose not to use it.
For the uninitiated, I always say to run Debian Testing, since it's the best of both worlds: a rolling release that also gets security updates. Just do not update during the stable transition.
If you want to be like me, run Debian Unstable/SID. It is like a rolling release with no dedicated security updates, since it is mostly pulling from upstream. It is much more stable than you would think, and I have been using SID as my daily driver since ~2007.
Unless this has changed in the last decade, it used to. And you could always use the stable branch to pin the security updates for critical patches and the versions that you need. But yeah, SID is much better; you just have to pay attention to the updates and upgrades in order to not break your system.
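For anyone who wants to try that mix, the usual mechanism is APT pinning. A rough sketch, with illustrative priorities and an arbitrary example package, not a recommendation:

```
# /etc/apt/sources.list — make both suites available
deb http://deb.debian.org/debian unstable main
deb http://deb.debian.org/debian stable main

# /etc/apt/preferences.d/track-unstable
# Follow unstable by default, keep stable around at a lower priority.
Package: *
Pin: release a=unstable
Pin-Priority: 700

Package: *
Pin: release a=stable
Pin-Priority: 200

# /etc/apt/preferences.d/pin-critical
# Hold individual packages you care about at the stable version,
# since stable is the branch with dedicated security uploads.
Package: openssh-server
Pin: release a=stable
Pin-Priority: 990
```

You can also grab a one-off stable version of something with `apt install some-package/stable`.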
It is based on the Debian Experimental branch, yes, but that is about where the similarities end. Canonical chooses to do config path customizations and a myriad of other things that make it hard to even use Ubuntu debs on other Debian-based systems. It can sometimes be a nightmare, and in general the Debian community recommends that you not mix and match the streams.
Roger that. I must have misremembered. Like I said, I have been using SID since 2007, so no big deal here. But yeah, SID is hella stable compared to the rolling releases of other distros, and you don't have to worry about the staging period right before a new stable release that can break testing.
I think we can all agree, Debian makes a better server distro than Ubuntu unless you need something specifically that Canonical offers.
Switched from Ubuntu to Debian on the desktop a couple of days ago and loving it. Depending on how you roll, I would do stable + backports for enterprise desktops, and testing for home desktops I do not personally use as a daily driver.
For testing, I would roll with testing and then, once testing becomes stable, stick with stable for three to six months before moving on to the next testing. The reason for this is that testing always has some shenanigans going on in the first few months, when they try out a few changes.
The last time I tried a testing desktop, I ended up right in the middle of major changes to PHP, where they finally cleaned up their PHP folders. This unfortunately led to a ton of breakage for two weeks, which was less than ideal since I was a PHP dev at the time. (This was around 2014.)
So… use the release codename (trixie is the current testing) rather than "testing", and change that to the next testing (forky comes after trixie) once three months have passed after the trixie release. Best way to stay sane. (See the sources sketch below.)
OOooooorrr… if you are okay with your system taking a nosedive every 24 months or so, go with unstable.
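Concretely, the codename advice just means writing trixie (or whatever applies when you read this) into your sources instead of the moving alias:

```
# /etc/apt/sources.list
# Pinned to the codename: you stay on trixie even after it becomes stable,
# and you only move to forky when you edit this line yourself.
deb http://deb.debian.org/debian trixie main

# The moving alias: silently rolls on to the next testing at release time.
# deb http://deb.debian.org/debian testing main
```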
An additional annoying thing Canonical does is updating the LTS release while you're installing an older LTS version (which you might need for REMnux compatibility, for example).
It happens during the "installing updates" portion of the installation. I was not aware that security updates included changing the release you were attempting to install to the latest one.
It’s very annoying finding that out after the system boots.