I’m currently in a bit of a struggle.
I have to keep around 15 Debian servers and 2 Windows servers up-to-date.
So, I went to the WWW and searched for some software.
I mainly kept searching for Linux “patch management” software, because it would be a relief if I only had to patch 2 servers manually instead of 17.
Puppet (moderately expensive: about $1,500 for 12 nodes)
Chef
Ansible (fucking expensive: something like $10,000 for 100 nodes)
SaltStack
Those seem to be the “big players”, at least to me.
First I tried Puppet, and TBH the CLI-only configuration makes me a bit uncomfortable.
At work we use ConnectWise Automate, so I’m used to a graphical interface, FYI.
So I thought I’d just ask all of you: what software do you use to keep your servers up-to-date, and how?
Maybe someone has a solution I haven’t found yet.
All servers are virtualized on Proxmox VE, FYI. Idk if that’s relevant.
I use Chocolatey for the Windows machines I admin. The client itself is just a package manager, although their business edition ($16/node/year, 100-node minimum order) does have a central management application, and the Chocolatey client also integrates with Puppet and Chef at a minimum, and probably with the others you mentioned as well.
I’m mostly just running machines for myself and family, so it’s not a big deal to SSH in and run an update through the package manager (apt, choco, pacman).
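For reference, the “SSH in and update” route is literally just the stock package-manager commands, something like:

```sh
# Debian / Ubuntu
sudo apt update && sudo apt full-upgrade -y

# Arch
sudo pacman -Syu

# Windows with Chocolatey (from an elevated prompt)
choco upgrade all -y
```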
Webmin is available for all *nix-like OSes, but AFAIK not for Windows. It has a Cluster entry in the menu, but as I only run one server ATM I haven’t used that functionality. It’s GUI-based and has a Debian package (available directly from the Webmin repo). So essentially it’s free, but I don’t know about limitations or functionality that would require a paid version; I reckon you could find that on the Webmin pages somewhere.
Eh? Just either cron it, or pssh into everything every now and then. Or even better, set up a cron job on a centralized server that pssh’es into all the rest every now and then to update.
Backstory: I’m too dumb to use Ansible, so I made scripts that sequentially SSH’ed into different servers to install or configure stuff; later on I found out about pssh and couldn’t stop using it.
Bonus: Windows 10 1809+ and Server 2019 have an OpenSSH server (which you need to install via PowerShell and whose service you enable at startup in Task Manager / services.msc), through which you can update via either the PSWindowsUpdate module or the old wuauclt. Just make 2 scripts, one for Debian, one for Windows.
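Very rough sketch of what those two scripts plus a central cron could look like; host lists, hostnames and paths here are placeholders, and it assumes the PSWindowsUpdate module is already installed on the Windows boxes:

```sh
# /root/debian-hosts.txt contains one host per line

# Debian fleet, upgraded in parallel via pssh
pssh -h /root/debian-hosts.txt -l root -t 0 -i \
  "apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade"

# Windows box over its OpenSSH server (hostname is made up)
ssh Administrator@win-srv-01 \
  'powershell -Command "Import-Module PSWindowsUpdate; Install-WindowsUpdate -AcceptAll -AutoReboot"'

# Centralize it: crontab entry on the management server, e.g. Sundays at 03:00
# 0 3 * * 0 /usr/local/sbin/patch-all.sh >> /var/log/patch-all.log 2>&1
```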
Ansible is what I use regularly and, like vlycop stated, it’s totally free. I’m assuming you’re referring to Tower’s pricing. Tower is just a GUI/central management node (or two) that helps you run Ansible (CLI) and keep track of Ansible runs. It has a bunch of useful features, but we ended up not purchasing a license for it at my last sysadmin job since we didn’t really need it (and we experimented with AWX but it also seemed to not really take off).
Salt is also really great, but as far as I know it doesn’t really have a GUI either, and the learning curve is a bit steep. I would use this if you needed an agent-based config management solution (e.g. regularly keeping servers’ configs/software up to date). Vanilla Ansible does not use an agent, so if you do need regular automatic updates, then you either have to roll your own solution or use Tower/AWX/etc.
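To make the agentless part concrete: without Tower/AWX, an ad-hoc patch run from the control node is a one-liner. The “debian” group name and inventory path below are just examples:

```sh
# Upgrade every host in the "debian" inventory group
ansible debian -i /etc/ansible/hosts -b -m apt -a "update_cache=yes upgrade=dist"

# Afterwards, see which hosts still want a reboot
ansible debian -i /etc/ansible/hosts -b -m shell \
  -a "test -f /var/run/reboot-required && echo reboot-required || true"
```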
Speaking of Proxmox, I think the only publicly available/actively maintained “runbook” in any of the config management solutions you listed for deploying PVE is actually one done in Ansible. That may be of interest. (I will also note that I’m the primary maintainer for it, search for “proxmox ansible” on github.)
At work, we use Spacewalk to centralize the update deployment so we don’t have a billion servers hitting external mirrors. You can use it to deploy updates, but the UI is kinda terrible IMO and the OSS project was killed off earlier this year, so I wouldn’t recommend it. We then use SaltStack to do updates en masse, but use the Errata reporting feature in Spacewalk to scrape our environment and see if we have missed any patches anywhere.
In the professional space it looks like most people go with Red Hat Satellite; however, I know of no enterprise that uses Debian, so the enterprise-grade patch management tools don’t focus on that family of distros as much.
I also use SaltStack at home, in combination with dnf-automatic, a package that is basically a glorified systemd timer for installing security updates so I don’t have to mess with it. Then every month or two I just run sudo salt '*' pkg.upgrade on my salt-master at home and all my machines get updated.
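For the curious, that home setup boils down to a couple of commands; dnf-automatic’s exact timer names vary a bit between distro versions, so double-check on yours:

```sh
# On each RHEL/Fedora-family box: unattended security updates
sudo dnf install -y dnf-automatic
sudo systemctl enable --now dnf-automatic.timer   # behaviour is configured in /etc/dnf/automatic.conf

# And the occasional full pass from the salt-master
sudo salt '*' test.ping       # check that all minions answer
sudo salt '*' pkg.upgrade     # upgrade packages everywhere
```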
If you’re asking how to automate patching in your work environment, then you have to find what works for you. For prod, fully automating updates blind-fire is still a terrible idea. You want at least one host to pilot new updates, and after they’ve baked for a while, bring the rest up to that patch level, with your most critical hosts (like DNS) going last.
The issue with cron is that there is no way to audit this until after the fact. So you should use systemd timers instead, because you can hook the service into an alert monitor that tells you if it ever fails, letting you fix things before the event fires and you miss your window. On legacy systems this might not be an option, so a common hack is to have cron also send an email once the first part of the script is done, so you have a record for auditing. Then, if you don’t get your daily or weekly email, you know there was a problem.
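The cron-plus-email hack looks roughly like this; addresses, paths and the mail setup are placeholders (you need a working MTA or mailx on the host):

```sh
# /etc/cron.d/patching -- illustrative only
MAILTO=ops@example.com
0 3 * * 0  root  /usr/local/sbin/patch-all.sh

# and inside patch-all.sh, right after the first phase completes:
#   echo "patch run reached phase 1 on $(hostname) at $(date)" | mail -s "patch audit" ops@example.com
# If the weekly mail never shows up, the job didn't fire (or died early) and you can investigate.
```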
Furthermore, all patching events should be communicated to the appropriate teams, and the patching itself coordinated within expected downtime windows so as not to catch any clients off guard.
Just something I want to add: I noticed someone mentioned Spacewalk. If you can get something like Spacewalk, or Pulp (roughly what AWX is to Tower, but for Red Hat Satellite), or even a local repository set up, it would be a great idea.
It would increase security and mean you could control which patches you install on hosts to meet any regulations/criteria/processes you have.
My advice would always be to do it manually first until you know how you want to automate it.
Using something like AWX is good, but beware: you will need to keep it updated; you can’t run it a couple of versions behind and still get patches.
That’s something that always held me back from deploying AWX in small environments.
15 hosts is pretty small, and to be fair you could probably manage it just as easily without AWX, running plain Ansible from an Ansible control node (free).
You could also probably find some playbooks/roles online, change a few variables, and run them against your hosts, which would make it very easy.
Just create a scheduled task, set serial: 1, and you’d be off and going.
(Setting serial: 1 ensures only one host gets patched/rebooted at a time, so your service doesn’t go down if it’s load balanced.)
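A minimal sketch of such a playbook plus a scheduled run, assuming a “debian” inventory group (names, paths and the schedule are illustrative, not a drop-in):

```sh
# Write a minimal patch playbook; "debian" is an example inventory group name.
cat > /etc/ansible/patch-debian.yml <<'EOF'
---
- hosts: debian
  become: true
  serial: 1        # one host at a time, so a load-balanced service stays up
  tasks:
    - name: Upgrade all packages
      apt:
        update_cache: yes
        upgrade: dist

    - name: Check whether a reboot is required
      stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot if required
      reboot:
      when: reboot_flag.stat.exists
EOF

# The "scheduled task": cron on the control node, e.g. Sundays at 03:00
# 0 3 * * 0 ansible-playbook /etc/ansible/patch-debian.yml >> /var/log/ansible-patch.log 2>&1
```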
AWX isn’t repository software, if that’s what you’re referring to; it’s the upstream of Ansible Tower. I’m not referring to how you patch the machines, but rather how you patch AWX itself.
It just means that, like Fedora, you won’t be able to stay on an older version (say, for an additional 6–8 months) and still receive security patches, whatever requirements your organization may have.
Concerning patching, you’d still have to set up a repository, and could use something like Pulp (Pulp | software repository management, pulpproject.org) for that.