OS Setup Between Machines

So I am fighting this all the time and it's pissing me off. I have a shit ton of machines that have gone unmanaged long enough that I don't remember how they're set up, they no longer work because they're so out of date that a simple "apt update" won't fix them, or I'm pissed at the current condition of a project I was working on.

I want to trim this up. I used to look at Linux as my home-base OS family in a way, but I don't really give a fuck anymore; with Amiga stuff in my life and having used a lot more OSX and F/O/BSD, nothing is really a favorite anymore.

My general requirements are:

  • The operating system needs to be compatible with XFS and NTFS, as all my drives use one or the other unless they're in a server (with exceptions)

  • Needs to have codecs built in and not bitch about “Wah Licenses” every time I just want to bloody play a game or draw

  • I need clang, non-negotiable

  • Gnome would be nice, XFCE preferred; if Windows or OSX, whatever lol

  • Needs to not hitch with “hotplugged*” mpcie devices in laptops

  • Needs to have the same general design language whether on a laptop, desktop, server, or remoted into (for example, my R510 has a GPU and does stuff for me; I can go to the machine itself or remote in and not have to go wh-WHAAO:IDN:DIAND and now all my icons are over here or some fucked-up shit [legit one of the problems I've had, but a long time ago; it kinda pushed me off RD tools and I never learned them])

  • Needs to not hitch over a GPU swap between halts, added drives, etc., whether USB or not

  • Needs to not get extremely slow with USB transfers (A problem I had that I could never really trace down)

*MEANING if I were to sleep a laptop, unplug a thing, and plug a new thing in, the system will just go "Oh… K" like it does with my GPD dock in Pop!_OS


ATM this is my operating-system shortlist; however, I have problems with each.


Linux

  • openSUSE - Package manager issues at times, source issues at others; unsure of the current condition of its desktop space. Haven't used it since, I wanna say, 2014/15

  • Ubuntu - It's Ubuntu. Really, that's the only problem with it now. I'm tired of Debian ports, but this is the better one.

  • Pop!_OS - No real complaints other than the USB transfer issues

  • Void Linux - My main dev distro. I wish I knew how to do a custom spin and a build server so I could use it better


Windows

  • 10 - I use this as the main desktop thing if I stream and shit. However, since I generally use older systems in terms of processors, I wonder if I should have something like Windows 7 on the gaming machines instead for CPU scheduling. Is there a difference between 7 and 10 for streaming or YouTube? Are there kernel mods like there are for older systems?

  • 2K - Used mostly on older systems, for the super-light kernel and directory systems. I wonder if I should trade it out for XP; however, for my uses there are very few differences or incompatibilities with my hardware, if any at all.

  • DOS 6.22 - Should this even be used? Are there better DOSes nowadays with better driver systems / built-in stuff that would go "oh, this is a Trident, Crystal 32, here you get sound"? I mostly play old games and run software I need for comms on a parallel or serial port. I also use this to run industrial sewing machines, but I wonder what something like FreeDOS would do, or if I could get something else off of WinWorld and get better results? Somehow?


OSX

  • 14 - I want this to be my new standard main boot on at least one desktop running OSX; however, I don't know much about it. Is it good? Does Siri work? Does it have snarky Siri, or did Siri contract ADD by 10.14 like on my iPhone? Will building my own packages be a pain in the ass?

  • 11 - The newest OS I can boot on my 3,1. I think I can get it working if I just install it, but 11 was like a DLC OS: you had to pay to get audio capture, and certain GPUs work or don't work. Is there a hacked system I could replace stock with and just use an RX 480? Will an HD 6770 work with 11 and not bitch?

  • 9 - I like this for its clean simplicity, and the systems in the background actually help with builds in MacPorts, especially on Core 2 systems. Again though, is there a hacked system I can just run with whatever GPU?

  • 6 - Can I get newer Disk Utility / service tools to run on here? I like the Core Duo machines as service boxes for flashing stuff, and I'd like to be able to do even more with them.


BSD

  • GhostBSD - Is there a desktop BSD I can replace this with that won't be 64-bit only? This USED to have a 32-bit version, and that's why I used it. It made setting up mail servers fast as hell for my high school and had some tools packed in that I didn't have to look for.

  • pfSense - I want a firewall OS I can run on a Mac Pro without having to build from FreeBSD packages with things more than likely missing or not wired up correctly. IDC what it's based on; I just need it to work on a Mac Pro 1,1


Any suggestions, HMU. I know Alpine is a good, fast micro Linux for stuff and for fast servers. But this is kinda just… how do I trim my needs down to a choice few, I guess? That way, when I go to set up a new machine, I have three or four choices instead of 27.


Consider:

  • Devuan (Debian w/o systemd)
  • Funtoo (Gentoo on the edge)

As mentioned, Devuan is Debian with everything systemd-related ripped out. It has SysVinit and OpenRC as replacements, and IIRC they're working on a third init system.

Funtoo is, more or less, a cutting-edge version of Gentoo and is source-based. It uses OpenRC as the standard init system, and GNOME 3 is the standard DE.

As for some of your "demands": you can't swap the valves in your car's engine without telling the motor-management thingy what you did, so why do you expect an OS not to give a fsck when you change key components in the machine it's running on? No OS is gonna accept that OOTB, but at least on Linux you can bake your own kernel that literally has support built in for every fr34kin' device out there. It'll be huge (expect 8GB or more), but everything you could possibly throw at it hardware-wise is supported :roll_eyes:


I'm srsly over apt and debs and dependency stew. I keep to AppImage and Flatpak on those systems.

It was mostly just "Can I do that and have the software not explode? It did it in this one configuration; could it be a common thing that I don't know about?"

Unless you get a proper Mac, it's better to just walk away. OSX on x86 will be killed off for good before 2025.

Other than that, I would say your list is painting you into a corner that can only be escaped by sitting down and building your own Linux / BSD. Good luck!


DOS 6.22 is still a good system, but with today's hardware and drive sizes it's a moot point.
DOS itself has a size limitation of 2 gigabytes (FAT16) for the partition you put it on, so unless you want to create a metric @$$load of partitions for DOS, you're wasting your time.
That being said, it is still a damn good base for many of the early development programs.

It's difficult to find many 32-bit distros that still maintain live and valid repos, as most mainstream Linux is 64-bit; even then, many 64-bit distros lose their repos if they're not utilized enough.

If you want Windows to run anything mission-critical, then follow this guideline:
Never put it online.
Individual updates for a machine can be selected from MS's update catalog and downloaded on a different machine; they can then be scanned before being manually transferred to the critical machine.

Finding a distro that will do everything you expect is damn near impossible, because the developers don't know what you yourself want or need.

Download a distro's builder app and play around a bit.
Build things the way you want and learn what you can and cannot do.

Old equipment!
That's a given! Maintaining a few old systems for media transfer, isolated testing, or to run specialty hardware is fine, but redundancy with old machines wastes valuable space and power.

NetBSD is considered the BSD that can run on anything, so if you don’t mind setting up your desktop, you could give it a try.

For firewalls, I am sticking with OpenBSD from now on. I don't like the hand-holding from pfSense and OPNsense. If you don't mind running Linux, I think you could get away with OpenWrt with iptables, but I've never tried it, so I can't recommend it.

I’m not sure what you mean by this. I am running the GUI-less version of Void for a few of my VMs and Void has many “server packages” in its main repo. I run a grafana+prometheus VM and a separate Samba server VM. Void also has Bacula, minio, Zabbix, the whole TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor) and I find it funny that it has nrpe, but not nagios itself. And many more server packages. Now, I’m not sure what server packages you run / need for a build server. It does have jenkins and gitea, but it doesn’t have the gitlab server, just gitlab-runner.

I wouldn’t run Alpine as a server for a lot of stuff, but that’s just personal preference. I’m running it as a router and firewall OS on a RPi 3 and I may run it in the future for other stuff, like secure servers, but my first choice on my personal servers is Void, because it’s fast, familiar and more general purpose.


That sounds like Linux I/O starvation, which only affected some systems and took years to be recognized. I'd start a big copy to USB and the rest of Linux would stall as if it were out of memory. Changing I/O schedulers was the solution; with SSDs the problem mostly went away because the default scheduler changed for them. The box I had this problem with died in 2016 and was mostly of about 2010 vintage, though the CPU was 2005-ish.
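For anyone hitting the same stall today, the active scheduler is easy to check and switch via sysfs. A minimal sketch, assuming a Linux box; which schedulers are listed depends on your kernel, and the device name in the switch example is a placeholder:

```shell
# List each block device's available i/o schedulers;
# the [bracketed] one is the active scheduler.
for q in /sys/block/*/queue/scheduler; do
  [ -e "$q" ] || continue                       # skip if the glob matched nothing
  dev=${q#/sys/block/}; dev=${dev%/queue/scheduler}
  printf '%s: %s\n' "$dev" "$(cat "$q")"
done

# Switching is a one-liner (needs root) and lasts until reboot:
#   echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
```

A change made this way isn't persistent; to keep it across reboots you'd set it from a udev rule or a kernel command-line option.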


While I like Void, it always takes me eight years to get a basic configuration where Steam installs and Garry's Mod won't crash, because "wah, those aren't FOSS enough."

I'd like to just repack an ISO for my own installs, or maybe find a way to do what I did with my Macs for a while: have a system-powered-off transfer mode and shlorp stuff in as needed.

Maybe I should just have all my servers be macs :confused:


Yeah that sounds about like my life

Once you configure your system as you like it, you can either use Clonezilla, or simply boot any Linux distro and use dd to clone your disk, piping its output into gzip / bzip2 to compress the image. When you want to restore, proceed in reverse: unzip the file into dd onto a disk, then use resize2fs (ext4) or xfs_growfs (XFS), or GParted, which most live distros ship (Ubuntu does; at least the live environment is useful just for this, I'll give them that), to expand the file system. I recommend you set things up in a VM with just around 32-64GB of storage, so you won't wait ages on dd to complete and you'll be able to dd the image onto any bigger-sized disk (because you won't be able to get it onto smaller ones directly).
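A minimal sketch of that clone-and-restore round trip, demoed on a small scratch file so nothing real gets overwritten; in practice the source would be a device like /dev/sdX, which is a placeholder here:

```shell
# Stand-in for the source disk: a 4 MiB scratch file.
dd if=/dev/zero of=demo.disk bs=1M count=4 status=none

# Clone: pipe dd's output through gzip into a compressed image.
dd if=demo.disk bs=1M status=none | gzip -c > disk.img.gz

# Restore: proceed in reverse, unzipping back into dd.
gzip -dc disk.img.gz | dd of=restored.disk bs=1M status=none

# After restoring to a real (bigger) disk you'd grow the filesystem, e.g.:
#   resize2fs /dev/sdX1         # ext4
#   xfs_growfs /mountpoint      # xfs (must be mounted)
cmp demo.disk restored.disk && echo "round trip OK"
```

One caveat worth knowing: gzip only compresses well if the disk's free space has been zeroed first; otherwise deleted-file garbage keeps the image close to full disk size.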

I can give you more hints about how to do that if you want to, or even make an easy-to-follow tutorial on disk backup and restore solution.

Well, I mean, you just solved my issue on that front. I would recommend that.


While dd is a thing and all, many have started using git as a dotfiles backup solution. This is potentially even more interesting if you're on something like Void.

The basic premise is this: while it's nice to have backups and everything, Linux is a moving target. The Linux you have today will be different from the Linux you install tomorrow, and while configs are in general pretty stable, old configs will break. A two-year-old dd image might not survive a pacman -Syu. With git dotfiles, you get automatic version history, and you can also go back down the tree every once in a while to look up or restore a specific dotfile.

You can include /etc/ configs using this technique too, and you could use git branches to keep track of individual configs for individual machines.
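The usual shape of the trick is a bare repo whose work tree is $HOME, so dotfiles are tracked in place without symlinks. A sketch run in a throwaway directory; the `config` wrapper name and the .dotfiles path are just conventions, not requirements:

```shell
# Demo in a scratch "home" so the real one stays untouched.
WT=$(mktemp -d)
DOT="$WT/.dotfiles"

# A git wrapper pinned to the bare repo + work tree
# (normally this is an alias in your shell rc).
config() {
  git -c user.name=demo -c user.email=demo@example.com \
      --git-dir="$DOT" --work-tree="$WT" "$@"
}

git init -q --bare "$DOT"
config config status.showUntrackedFiles no    # keep 'config status' readable

# Track a dotfile exactly where it lives.
echo 'set -o vi' > "$WT/.bashrc"
config add "$WT/.bashrc"
config commit -q -m "track .bashrc"
config log --oneline
```

From there, per-machine branches work with plain `config checkout -b laptop` and so on, since the wrapper is just git.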

Here is a tutorial if you want to learn more: How to store dotfiles | Atlassian Git Tutorial

Also, see the ArchWiki: Dotfiles - ArchWiki

And https://dotfiles.github.io/ gives a quick introduction on how to sync with GitHub. Not sure if you want to use GitHub explicitly for this, given the taint of Microsoft and cloud security, but the same ideas apply to pretty much any git server, be it local to your computer, on your company NAS, or somewhere on the vast digital oceans of the internet…


But it will survive an xbps-install -Su (actually two runs, because you first update the package manager, then the rest of the system).
https://www.michaelwashere.net/post/2017-09-24-upgrading-the-ancient/

Depends on your setup. If you use a WM with conf files, yeah, git will work; but if you use a big DE and also want to install proprietary software that's finicky to install and set up, you may want to just clone your disk over to other machines.

I could be using git, but all I actually need from my desktop is in my /home, and all I need is to keep a backup around. Depending on the usage, I set my machines up differently, with maybe the only thing in common being the sway conf if I use a desktop; otherwise they're all specialized for certain tasks without a lot of programs in common. I have a list of programs that I need in my home dir; if I ever need to switch, I just feed that file to the package manager. I do have five programs that I use on everything, but I've memorized them, so no need for a separate list.
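Keeping the list as a plain file makes that replay a one-liner. A sketch, where the pkglist.txt name and the install commands are assumptions; swap in your own package manager:

```shell
# A package list kept in the home dir, one name per line
# (demoed in a scratch dir here).
d=$(mktemp -d)
printf '%s\n' vim git rsync > "$d/pkglist.txt"

# On a fresh Void box the replay would look like (commented out here):
#   xargs -a "$HOME/pkglist.txt" sudo xbps-install -Sy
# apt equivalent:
#   xargs -a "$HOME/pkglist.txt" sudo apt-get install -y

cat "$d/pkglist.txt"
```

The nice part is the list stays human-editable and diffs cleanly if you do end up putting it in git later.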

Obviously, each individual has a preference and thus their own setup, which may or may not be highly portable to "the Linux of tomorrow," as you put it. Mine sure is, because I avoid proprietary software and complicated setups (for that matter, I don't even need an icon pack beyond what ships by default with things like Firefox). But not everyone's is. And if you are unable to script / automate your desktop setup, a disk-image clone of it works, and you don't have to redownload all your software.

For me, I realised I don't really care about most of my /home, and once I realised that, it was quite easy to partition things into dotconfig + rsync for archival stuff like photos, old work docs, etc.

As for dotfiles, having them in a git repository is a godsend, since I can experiment with stuff and go back to a stable config whenever I choose. As for different machines, not a problem - git branches exist for a reason, yo :slight_smile:

But whatever, I prefer the git version; a little more work to get it smooth, but well worth the extra learning in the end. :slight_smile:
