Just did; it runs on my ThinkPenguin NAS / hypervisor now. But my UPS shows 143W total power consumption (PC, monitors, two switches and the tpnas), which is very spicy. Shutting down the big monitor brings it down to 100W. Stopping the 2.5G switch, 85W. So the tpnas uses about 70W with Proxmox (it used to draw only about 40 at idle with Void). Stopping the VM shaves off another 10W.
Thankfully this is not going to be up 24x7.
PEBKAC. Well, kinda. Yesterday I updated the VM once I “finished” (there’s a lot more to do on my s6 VM) and got a kernel update, but the initramfs didn’t get regenerated. In my infinite wisdom, I had uninstalled bash (why keep it if I use ksh anyway?), so dracut failed to run (although the package manager didn’t report any dracut / initramfs errors, or maybe I just wasn’t paying attention; I was tired).
In a chroot, I tried to reinstall the kernel via the package manager, and I definitely didn’t see any error there. But the initramfs still wasn’t there. Running dracut manually gave “file not found.” Yet `which dracut` showed the file, so that couldn’t have been it. The kernel version I was trying to build the initramfs for was present (triple-checked). So I looked at dracut itself (it’s just a shell script) and noticed the shebang: `#!/bin/bash -p`.
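The misleading error is easy to reproduce without touching dracut: when a script’s interpreter is missing, the kernel returns “no such file” for the *script*, not the interpreter. A minimal sketch (the path and shebang here are made up for the demo):

```shell
# Create an executable script whose shebang points at a missing
# interpreter -- the same situation as dracut's "#!/bin/bash -p"
# shebang after bash has been uninstalled.
cat > /tmp/shebang-demo <<'EOF'
#!/no/such/interpreter
echo hello
EOF
chmod +x /tmp/shebang-demo

# Running it fails with a "not found"-style error that names the
# script, even though the script file itself clearly exists.
/tmp/shebang-demo || echo "failed even though the file exists"
```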
I had to troubleshoot my tpnas, as I hadn’t assigned some VLANs to it in the switch, and its network config was also bad (it used to work in Proxmox 7, idk what happened in 8… freaking Proxmox). Well, now I can just assign a VLAN per interface; I don’t need to connect it to a bridge attached to a VLAN interface on the main bridge. Man, am I glad Proxmox is only my actual lab and not my homeprod box.
So, that leads me to virt-manager. The VM was kernel panicking because it couldn’t find "unknown-block(0,0)". That’s because of the missing initramfs. Proxmox, with its noVNC console, showed me the early error message after GRUB booted the entry, but virt-manager got stuck at the TianoCore banner and never showed me the error the OS was giving. Proxmox uses the same OVMF / TianoCore UEFI virtual firmware for VMs, yet there the banner doesn’t even show up.
So, idk if this is a bug in the SPICE server virt-manager is using, or in the OVMF configuration shipped by the distro.
And yes, this was my mistake, I own it, but I wish the tools I’m using wouldn’t get in my way and would show me what the issue is. If I were booting bare metal, I’d likely have seen the kernel panic immediately.
I’ve converted chronyd and cronie-crond to execline longrun services. You can still use the runit sv run files written in shell, mostly without modifying them. You’d only need to change the ones that contain runit-specific code, e.g. scripts that call `sv check` to verify another service is up before executing the main daemon, but those are rare. Since I’m converting to s6 anyway, I might as well stick to what “upstream recommends.”
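For what it’s worth, the converted run scripts end up tiny. A sketch of what a chronyd longrun’s run file can look like in execline (flags and paths are illustrative, not my exact config):

```execline
#!/bin/execlineb -P
# Send stderr to stdout so the s6 logger catches everything.
fdmove -c 2 1
# -d keeps chronyd in the foreground so s6 can supervise it.
chronyd -d
```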
I was surprised that most tutorials and implementations of s6 I found online combine the s6-linux-init and s6-rc folders under /etc/s6. That’s not hard to do, but it requires that whenever you call s6-linux-init-maker to regenerate the init (basically almost never), you pass the parameters that override the default location where the program looks.
The default is /etc/s6-linux-init/current. When you generate a new init, you can specify a new location and symlink “current” to it (I do it in the same folder, under a different name). I think artix-s6 was using /etc/s6/current.
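Concretely, regenerating the init into a sibling directory and flipping the symlink boils down to something like this (directory names are made up; `-c` sets the basedir the generated scripts will look for at runtime):

```shell
# Generate a fresh init hierarchy next to the old one...
s6-linux-init-maker -c /etc/s6-linux-init/current /etc/s6-linux-init/init-v2
# ...then point "current" at it.
ln -sfn init-v2 /etc/s6-linux-init/current
```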
I think the old config duncaen wrote for void ages ago did too (I didn’t realize I was looking at a git commit from 2019!). Because of that, a lot of the work I would have needed to do was avoided, but I still had to troubleshoot execline scripts that were using antiquated utilities (s6 is an evolving project, and things like s6-linux-utils and s6-portable-utils keep getting updated). The `s6-test` tool was deprecated at some point, and Laurent Bercot recommends using `eltest` from execline instead.
Well, I didn’t have the time to check `eltest` for compatibility and the parameters / flags it accepts, so I just used `test` (basically removing the “s6-” prefix).
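In a service script, the change is a one-token swap; e.g. a hypothetical check in an execline script (the path is an example):

```execline
#!/bin/execlineb -P
# Old, deprecated form:       if { s6-test -d /run/dhcpcd }
# Upstream-recommended form:  if { eltest -d /run/dhcpcd }
# What I used (plain POSIX test; the flags I need are identical):
if { test -d /run/dhcpcd }
echo "dhcpcd runtime dir present"
```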
I’m still working on understanding some s6 concepts, in particular the dependencies. IDK what’s going on, but the shutdown sequence looks wrong.
For example, mount-rw is a oneshot that just runs “mount -o rw,remount /” and has no “down” script, so it doesn’t really matter that it gets stopped early. But if it had one that undid the remount, services like dhcpcd-eth0-log would fail miserably, because the rootfs wouldn’t be writable anymore (and having the script do actual unmounting is not recommended anyway; s6 takes care of unmounting filesystems during the shutdown sequence).
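For context, mount-rw is about as small as s6-rc oneshots get. A sketch of its source definition directory (layout per s6-rc-compile’s source format; the dependency name is an example, and older s6-rc versions use a flat `dependencies` file instead of `dependencies.d/`):

```
/etc/s6-rc/source/mount-rw/
├── type               # contains: oneshot
├── up                 # execline line: mount -o rw,remount /
└── dependencies.d/    # one empty file per dependency, e.g. mount-procfs
```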
That means dependencies might not be properly respected. I have put dependencies on bundles, expecting that the services in a bundle would all inherit the dependency, but now that I look at this, I don’t think that’s how it works. That’s what testing’s for! But boy, will it be a ROYAL PITA to configure dependencies manually for all services. I was sold on the whole premise of bundles and was expecting that I could just name a bundle as a dependency and everything would wait for it.
Looking through the config, it’s also possible I simply forgot to add some dependencies.
Ugh, I need to debug all dependency files… that’s what testing’s for, that’s what testing’s for, that’s what testing’s for, that’s what testing’s for…
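One thing that should save some eyeballing of source directories: the compiled database can be queried directly, so I can check what s6-rc-compile actually recorded (service names here are examples from above):

```shell
# Direct dependencies of one service, as compiled:
s6-rc-db dependencies dhcpcd-eth0-log
# The full transitive closure:
s6-rc-db all-dependencies dhcpcd-eth0-log
# Reverse direction: everything that depends on mount-rw
s6-rc-db -d dependencies mount-rw
```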