Hello, kindred spirit.
The number of packages installed, as above, is a useless metric to go by.
If you were to compare to Windows, go count the DLLs you have, because a lot of the Linux equivalents are split out as packages.
Windows just doesn’t split them out as packages, they’re all “part of windows”.
Ditto for macOS - there’s a huge number of “packages” that aren’t listed as packages because they’re included in the base OS.
Comparing install footprint in terms of disk space (or memory consumption), Linux is much, much smaller than either macOS or Windows. It does a lot less out of the box too; I don’t think any of the major platforms these days are particularly bloated.
Let’s say you strip everything to the bone. You might recover a gig or two of space. Except that the one time you want to do X, which needs package Y, you have to fuck around installing it rather than just doing your job.
If you’re chasing every gigabyte of disk space in 2019 you’re sweating the small stuff - there are far more relevant things to spend your time on.
Also, the irony is that a package that has more dependencies than another is probably “less bloated”.
Because rather than reinventing the wheel, it is reusing code from libraries that are shared with other applications.
I could (compiling my own shit from source) statically link everything on my system and drastically reduce the number of packages I need. Would that be less bloated?
No. Because I’d simply be including version X of every single library inside every single application that needs it, rather than every application on the system sharing a common install of it (pulled in via a dependency).
Come update time, rather than updating one package (the shared library file), I’d need to update every single application that statically linked it. And every executable would be far larger, because it carries copies of all those libraries inside it.
Which would you prefer? Updating one 60 KB library file, or 15 (or 20, 50, whatever) applications of 8-100 MB (or more) each that include it internally via static linking?
Canonical’s rush to embrace Snaps is an example of this, and something that I don’t think bodes well for Ubuntu (on the ‘bloat’ front). Every time I run df and see the list of /dev/loop mounts increasing, I shudder.
For desktop use, yeah, but for a production Docker image built on Alpine, then yeah, strip-mine that fucker.
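That approach might look like this minimal sketch (image tag, package, and file names here are illustrative assumptions, not from any real project):

```dockerfile
# Stripped-down production image: start from Alpine, add only what the
# service actually needs, nothing else.
FROM alpine:3.19
RUN apk add --no-cache python3        # --no-cache avoids keeping apk index data
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

The whole point is that nothing you might "one day" need is in the image; you rebuild it if requirements change, rather than keeping spare packages around.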
Had a couple more pressing problems (bad programming and an email server issue) come up and couldn’t get back over here for a couple of days, but wow, this discussion blew up. At least some of it seems useful.
However, I am still looking for some way to keep software files from being installed on the root partition. After I post this I am going to see if I can find a package manager that will make symbolic links where needed, but place most files in the location I choose. For how diverse the Linux community is, I can’t imagine I am the only person who has thought of this simple solution.
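A crude version of that works today without any package-manager support: move a tree onto the big partition and leave a symlink at the original path. This sketch demonstrates the idea in a scratch directory (the real paths - say /opt and a mounted data partition - are your call and generally need root):

```shell
#!/bin/sh
set -e
root=$(mktemp -d)                     # stands in for / in this demo
mkdir -p "$root/data/opt"             # where the files should really live
echo "some binary" > "$root/data/opt/tool"
ln -s "$root/data/opt" "$root/opt"    # anything using /opt follows the link
cat "$root/opt/tool"                  # prints "some binary"
```

The catch with doing this under a real package manager is that its database still records the original paths, so verification tools may complain, and some packages hard-code locations - which is presumably why a manager-native option would be nicer.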
Yeah, I am likely going to buy a UniFi Cloud Key, or set it up on something like a Raspberry Pi. Part of me hopes they include UniFi devices in the new UNMS. I have 3 Ubiquiti-based networks, and since I already rent a VPS, it would make managing all of them much easier.