Looking at the dependencies, it’s the ZFS group that’s to blame. They recently updated zfs-dkms to require Linux 5.15.999 or older (i.e., nothing newer than the 5.15 series), after my system had already been running 5.16 for several weeks. So any update operation was trying to remove zfs-dkms and everything that depends on it.
Libvirt depends on libvirt-storage-zfs, which depends on the zpool command, so that’s why it was trying to uninstall as well.
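If you want to trace the chain yourself, rpm and dnf can show it; something like this (package names are from my box and may differ on yours — on Fedora the libvirt piece is actually named libvirt-daemon-driver-storage-zfs):

    # show the kernel cap that zfs-dkms declares
    rpm -q --requires zfs-dkms | grep -i kernel

    # show what the libvirt ZFS storage driver needs (zpool et al.)
    dnf repoquery --requires libvirt-daemon-driver-storage-zfs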
The fix was to manually download and install the kernel-5.15.18-200 RPMs and then exclude kernel-* in dnf.conf going forward.
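A minimal sketch of that fix, assuming the 5.15.18-200 RPMs have already been downloaded into the current directory (exact file names vary by release and arch):

    # reinstall the last kernel the zfs-dkms dependency still allows
    sudo dnf install ./kernel-*5.15.18-200*.rpm

    # /etc/dnf/dnf.conf -- stop dnf from pulling a newer kernel back in
    [main]
    exclude=kernel-*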
And this shit right here is why ZFS on Linux is a second-class citizen (one that I will never use in production), whereas on FreeBSD (or any of the Illumos/OpenSolaris derivatives) it is not.
So many people argued with me about this, and I was like, “Hey, I love GNU/Linux as much as you do, but use the right tool for the job.” There is a reason why I run BSD, Solaris, and GNU/Linux in my home. BSD/Solaris is great for networking and storage, bar none, and those tools are first-class citizens in that ecosystem.
Now, if ZFS targets LTS Linux kernels, then it would be wise to stick to LTS if you value your data. If ZoL pulls this type of stunt on LTS systems too, well, BSD and Solaris are not that hard to learn.
The utility creates a plain-text file with the RPM version string and places it in /etc/yum/pluginconf.d/versionlock.list. This list can be managed by SaltStack, Ansible, or other build tools.
On every update it tells you how many packages are being held back by the versionlock, which is clearer than just pinning the version in the main conf file.
This is one of the mechanisms we use to control kernel updates at work.
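For reference, a rough sketch of that workflow with yum-plugin-versionlock (the lock entry shown is illustrative; the plugin writes the real epoch:name-version-release of whatever is actually installed):

    # install the plugin and lock the currently installed kernel packages
    sudo yum install yum-plugin-versionlock
    sudo yum versionlock add 'kernel-*'

    # /etc/yum/pluginconf.d/versionlock.list then contains entries like:
    # 0:kernel-5.15.18-200.*

    # review or remove locks later
    yum versionlock list
    sudo yum versionlock delete 'kernel-*'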
Didn’t ZFS have some major breakage starting with the 5.16 kernel that hasn’t been fully patched yet?
That’s been my understanding. Too bad, too, since there’s finally been an update to the ALX WOL patches, which tbqh shouldn’t even be necessary: the ALX WOL bug was always implementation-specific, non-critical, and in many cases not even related to or triggered by WOL.