I have been working on this and other Ansible collections for about a year now. They have undergone lots of chaotic changes as I learned how to use the platform. Now that the chaos has begun to settle and I am feeling more confident in my Ansible skills, it’s time to get serious about documenting what I have and pushing it to GitHub.
While I won’t be able to work on this an hour a day as required by the Devember guidelines, I will be making a concerted effort on this project through the end of January. My hope is to at least average an hour per day.
I had begun work on this around Devember of last year. At that time, I had a Site collection that I have since broken out into host, network and services collections. My focus for this year will be my inventory role and host collection as the others are still undergoing major changes.
Initial commit of the inventory role has been pushed.
This generates inventory boilerplate (host_vars and group_vars). Optionally (enabled by default), it will set up a git repo in your inventory directory, commit changes to it automatically, and provide a handler for committing changes.
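A minimal, hypothetical invocation might look like this (the role name is taken from later in this post; the play itself is just an illustrative sketch, not from the repo):

```yaml
# Sketch: generate host_vars/group_vars boilerplate and, by default,
# initialize a git repo in the inventory directory.
- name: Bootstrap inventory boilerplate
  hosts: all
  gather_facts: false
  roles:
    - role: o0_o.inventory
```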
Some handy default values for comments and operating systems are provided as well.
The initial commit to the host collection coming soon…
This includes my lookup plugin `first_found_by_host_attributes`, which searches the given paths for task, vars or template files that match the host system, from most to least specific. For example, it can find a vars file that matches the exact version of a distribution, or fall back to one that matches any SSH host. It extends the builtin `first_found` plugin.
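Since it extends the builtin `first_found` plugin, usage presumably looks similar. A hedged sketch (the plugin name is from the repo; whether it accepts `paths` and `skip` exactly like `first_found` is my assumption):

```yaml
# Sketch: load the most specific vars file matching this host's attributes.
# Parameter names mirror the builtin first_found plugin; the custom plugin
# may differ.
- name: Include the best-matching vars file
  ansible.builtin.include_vars:
    file: "{{ lookup('o0_o.host.first_found_by_host_attributes', paths=['vars'], skip=True) }}"
```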
Also included is my connection role, which will attempt to brute force a connection based on common default administrator names and possible IP addresses found in the inventory and/or a local Vagrantfile. It supports both standard SSH hosts and RouterOS hosts. Once a valid connection is established, the relevant connection variables are written to the host’s host_vars file automatically. The role does not attempt to brute force any passwords; it assumes your SSH key is already on the system.
o0_o.host.privilege_escalation is pushed. It will automatically detect and configure Ansible’s become method for you, including the proper SELinux configuration for sudo. `ansible_become_method` is written to the host’s host_vars file at the end of the role unless the result is undefined, as with RouterOS (which has no become method) or when you’re connecting as root.
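As a sketch of the net effect, after the connection and privilege escalation roles have both run, a host’s host_vars file might contain something like this (hypothetical host name and values, not actual role output):

```yaml
# host_vars/web01.yml — illustrative result only
ansible_host: 192.168.56.10
ansible_user: vagrant
ansible_become_method: sudo
```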
There are some dependencies on other roles that aren’t up yet, but as-is, it will run if you don’t gather facts.
o0_o.host.time is pushed. It syncs the host’s time with localhost. This is useful for virtual machines that have been suspended and come back online with huge time drift that isn’t immediately corrected by NTP. Different time zones on the remote and local hosts are supported, except on RouterOS, which has a kludgy clock CLI, so I’ve just decided to require GMT in that case.
SSH hosts are limited to the raw module because huge time drift can break package managers, which in turn prevents Python from being installed. Speaking of which, the next role will be o0_o.host.python_interpreter, which will install Python on a remote host that doesn’t have it. After that, we can finally gather facts.
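For illustration, the kind of raw-module task this implies might look like the following. This is my own sketch of the approach, not the role’s actual implementation:

```yaml
# Sketch only: push the controller's UTC time to a host that may lack
# Python, using raw so no interpreter is required on the remote side.
- name: Sync remote clock from the controller
  ansible.builtin.raw: "date -u -s '{{ now(utc=true, fmt='%Y-%m-%d %H:%M:%S') }}'"
  become: true
```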
Sorry for the delay here. After looking at my existing Python interpreter role, it was clear that I needed to refactor out a package manager role. I am actively working on this and hope to have it pushed by the end of the year. That said, package manager configuration is relatively complex, so you might not see any commits for a while.
- `connection` is a dependency of `privilege_escalation`, defined in meta/main.yml
- `inventory` is included in the `connection` role
- `privilege_escalation` begins to run after its `connection` dependency
- `privilege_escalation` defines `ansible_become_method` (supports sudo or doas), but needs `facts`, `software_management`, `python_interpreter` and `mandatory_access_control` to configure sudo
- `software_management` needs the time to be accurate or repository certificates will fail to verify
- The capabilities of `facts`, `time` and `software_management` are limited while running without a Python interpreter, so `python_interpreter` runs them again as soon as Python is installed and `ansible_python_interpreter` is defined
Because of how interdependent these roles are, running any one of privilege_escalation, software_management, time or mandatory_access_control results in the same final host state for SSH hosts. Since much of this isn’t applicable to network devices, I use time to bring all host types to my first configuration milestone, where Python, privilege escalation, system time, package managers and mandatory access control are all fully configured. This will work even in sandboxed or air-gapped networks that rely on local mirrors (local repositories would need to be defined in the inventory).
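In other words, reaching the milestone should be a single short play. A hypothetical sketch (the FQCN is my assumption from the collection layout described above):

```yaml
# Illustrative: the dependency chain pulls in connection, inventory,
# python_interpreter, software_management, etc. automatically.
- name: First configuration milestone
  hosts: all
  gather_facts: false  # the roles gather facts themselves once Python exists
  roles:
    - role: o0_o.host.time
```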
Note that AppArmor on Arch and Raspbian requires bootloader configuration and a restart. I have not implemented that yet. Currently in those cases, AppArmor is left disabled.
Ok, so use `ansible-galaxy collection install o0_o.host --pre` until a stable release exists.
Also, I’m now realizing that it will fail almost immediately: the o0_o.inventory role tries to scrape the collection version from galaxy.yml for comment headers, but Ansible Galaxy doesn’t actually distribute galaxy.yml, since it only exists to provide metadata when importing the collection into Galaxy.
If anyone cares to try, the galaxy.yml file is in the repo.
I’ll probably just take out the version scraping altogether. I thought it was nice to have it in the comment headers though.
Alpha 4 is up, which adds the users role. On first run, it will dump a configurable users dictionary into each host’s host_vars file. You can also simply supply something like this:
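Something along these lines — the key names below are my assumptions based on the description, not the role’s documented schema:

```yaml
# Hypothetical minimal users dictionary; key names are illustrative.
users:
  my_admin:
    admin: true
    ssh_authorized_keys:
      - "ssh-ed25519 AAAA... me@workstation"
```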
This will set up my_admin with passwordless admin privileges and your SSH key. Note that it would also delete any non-system users except the current Ansible user. To preserve the current user state, run the role without supplying a users dictionary, then edit what shows up in the inventory.
Here is an example of what the users dictionary looks like after an initial run on a Debian Vagrant VM.
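Roughly like this — an illustrative reconstruction on my part; the field names and values are assumptions, not the role’s actual output:

```yaml
# Hypothetical dump for the default Vagrant user on Debian.
users:
  vagrant:
    uid: 1000
    group: vagrant
    groups:
      - sudo
    home: /home/vagrant
    shell: /bin/bash
    ssh_authorized_keys:
      - "ssh-ed25519 AAAA... vagrant"
```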