oO.o's Ansible Host Collection (Formerly Devember 2022)

oO.o’s Host Collection for Ansible

I have been working on this and other Ansible collections for about a year now. They have undergone lots of chaotic changes as I have learned the platform. Now that the chaos has begun to settle and I am feeling more confident in my Ansible skills, it’s time to get serious about documenting what I have and pushing it to GitHub.

Devember 2022

While I won’t be able to work on this an hour a day as required by the Devember guidelines, I will be making a concerted effort on this project through the end of January. My hope is to at least average an hour per day.

I had begun work on this around Devember of last year. At that time, I had a Site collection that I have since broken out into host, network and services collections. My focus for this year will be my inventory role and host collection as the others are still undergoing major changes.

Inventory Role

Host Collection

4 Likes

Initial commit of the inventory role has been pushed.

This generates inventory boilerplate (host_vars and group_vars). Optionally (on by default), it will also set up a git repo in your inventory directory, automatically commit changes to it, and provide a handler for committing changes.

Some handy default values for comments and operating systems are provided as well.
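To give an idea of how it slots into a play, here is a minimal usage sketch. Only the role name comes from this thread; any options would need to be checked against the role’s actual defaults.

```yaml
# Hypothetical usage sketch — the role name is real, but this assumes
# it is applied like any other role; option names are not shown because
# they would be guesses.
- name: Generate host_vars/group_vars boilerplate for the inventory
  hosts: all
  roles:
    - role: o0_o.inventory
```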

The initial commit to the host collection coming soon…

1 Like

Initial commit to the host collection is up.

This includes my lookup plugin first_found_by_host_attributes, which searches the path for task, vars, or template files that match the host system, from most to least specific. For example, it can find a vars file that matches the exact version of the distribution, or fall back to one that merely matches any SSH host. It extends the builtin first_found plugin.
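As a rough illustration, a task using it might look like the following. The argument style here is an assumption based on the builtin first_found plugin that it extends; the vars/ layout is hypothetical.

```yaml
# Sketch only: argument shape assumed to mirror ansible.builtin.first_found,
# which this plugin extends. Candidate file names are presumably derived
# from host attributes (exact distro version down to "any SSH host").
- name: Load the most specific vars file available for this host
  ansible.builtin.include_vars:
    file: "{{ lookup('o0_o.host.first_found_by_host_attributes', paths=['vars']) }}"
```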

Also included is my connection role, which will (sort of) try to brute force a connection based on common default administrator names, possible IP addresses in the inventory, and/or a local Vagrantfile. It supports both standard SSH hosts and RouterOS hosts. Once a valid connection is established, the relevant connection variables are written to the host’s host_vars file automatically. The role does not attempt to brute force any passwords; it assumes your SSH key is already on the system.
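After a successful probe, the written host_vars end up along these lines. The exact keys are an assumption based on Ansible’s standard connection variables; the values are made up for illustration.

```yaml
# Illustrative host_vars written by the connection role after a
# successful probe. Keys assumed from standard Ansible connection
# variables; values are examples only.
ansible_host: 192.168.121.10
ansible_user: vagrant
ansible_port: 22
```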

1 Like

o0_o.host.privilege_escalation is pushed. It will automatically detect and configure Ansible’s become method for you, including the SELinux configuration that sudo requires. ansible_become_method is written to the host’s host_vars file at the end of the role, unless the result is undefined, as with RouterOS (which has no become method) or when connecting as root.
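Concretely, the host_vars entry it leaves behind looks something like this (value depends on what was detected on the host; nothing is written for RouterOS or root connections):

```yaml
# Illustrative host_vars entry written at the end of the role;
# "sudo" is just one possible detected value.
ansible_become_method: sudo
```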

There are some dependencies in other roles that aren’t up yet, but as-is, it will run if you don’t gather facts.

1 Like

o0_o.host.time is pushed. It syncs the host’s time with localhost. This is useful for virtual machines that have been suspended and come back online with huge time drifts that aren’t immediately corrected by NTP. It allows different time zones on the remote and local hosts, except on RouterOS, which has a kludgy clock CLI, so I’ve decided to simply require GMT in that case.

SSH hosts are limited to the raw module because huge time drifts can break package managers, which in turn prevents Python from being installed. Speaking of which, the next role will be o0_o.host.python_interpreter, which installs Python on a remote host that doesn’t have it. After that, we can finally gather facts.
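The raw-module approach described above can be sketched roughly like this. The exact command is an assumption (GNU date syntax shown); the pipe lookup runs on localhost, which is how the controller’s clock reaches the remote host without Python.

```yaml
# Sketch of a raw-module time sync: lookup('pipe', ...) executes on the
# controller, so the remote clock is set from localhost's time.
# Assumes GNU date on the remote side; not the role's actual tasks.
- name: Set remote clock from the controller (works without Python)
  ansible.builtin.raw: >-
    date -u -s '{{ lookup('pipe', 'date -u +%Y-%m-%dT%H:%M:%S') }}'
```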

1 Like

Sorry for the delay here. After looking at my existing Python interpreter role, it was clear that I needed to refactor out a package manager role. I am actively working on this and hope to have it pushed by end of year. That said, package manager configuration is relatively complex so you might not see any commits for a while :frowning:

1 Like

@SgtAwesomesauce does this make your eyes bleed?

3 Likes

Fuck you for showing this to me.

4 Likes

Lol. I was able to generalize package manager and repository configuration, but at what cost?

3 Likes


I’m attempting to offload a lot of it into vars and defaults, and then of course comment it thoroughly.

I think that’s as good as it can get without ridiculous levels of hand-holding.

Pushed a few commits tonight. Things are working in my testing environment. Still some cleanup to do, but feels good to reach a milestone.

Particularly proud of the work done with package managers in my software management role. It went through many revisions, including the documentation which I found challenging to write.

1 Like

Added a feature called “Role call” which tracks how the roles are executed and prints the dependency tree at the end of the play.

Here is a Rocky Linux VM that was provisioned without Python installed. There is one call to o0_o.host.privilege_escalation.

ok: [rocky8.hq.example.com] => {
    "role_call": [
        "o0_o.host.connection",
        "  o0_o.inventory",
        "o0_o.host.privilege_escalation",
        "  o0_o.host.facts",
        "  o0_o.host.software_management",
        "    o0_o.host.time",
        "  o0_o.host.python_interpreter",
        "    o0_o.host.facts",
        "    o0_o.host.time",
        "    o0_o.host.software_management",
        "  o0_o.host.mandatory_access_control"
    ]
}
  1. connection is a dependency of privilege_escalation, declared in meta/main.yml

  2. inventory is included in the connection role

  3. privilege_escalation begins to run after its connection dependency

  4. privilege_escalation defines ansible_become_method (supports sudo or doas), but needs facts, software_management, python_interpreter and mandatory_access_control to configure sudo

  5. software_management needs time to be accurate or repo certificates will fail to verify

  6. facts, time and software_management are limited while running without a Python interpreter, so python_interpreter runs them again as soon as Python is installed and ansible_python_interpreter is defined.

Because these roles are so interdependent, running any one of privilege_escalation, software_management, time or mandatory_access_control will result in the same final host state for SSH hosts. Since much of this isn’t applicable to network devices, I use time to bring all host types to my first configuration milestone, where Python, privilege escalation, system time, package managers and mandatory access control are all fully configured. This works even in sandboxed or air-gapped networks that rely on local mirrors (local repositories would need to be defined in the inventory).
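So reaching the milestone can be as small as the play below. gather_facts is off because, as noted earlier, the roles run without gathered facts and collect what they need themselves.

```yaml
# Minimal sketch: a play that only calls the time role still converges
# SSH hosts to the full milestone state via the dependency chain above.
- name: Bring hosts to the first configuration milestone
  hosts: all
  gather_facts: false  # the roles gather what they need themselves
  roles:
    - role: o0_o.host.time
```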

Note that AppArmor on Arch and Raspbian requires bootloader configuration and a restart. I have not implemented that yet. Currently in those cases, AppArmor is left disabled.

1 Like

And it’s up!

Try it out! Let me know if any of the documentation needs clarification.


Related question, why is reStructuredText ever used over markdown?

1 Like

I believe it’s for auto-magic linking to other modules/classes/etc, though maybe that’s more a sphinx thing than reStructuredText itself…

It was also around before markdown IIRC, not to mention Ansible is python and sphinx/rst used a lot in python.

Were you looking for a serious answer? :joy: :joy: :joy:

1 Like

Yeah I know… wish they’d switch over but I’m sure that would be a lot of work.

1 Like

Womp

Ok, so use ansible-galaxy collection install o0_o.host --pre until a stable release exists.

Also, I’m now realizing that it will fail almost immediately: the o0_o.inventory role tries to scrape the collection version from galaxy.yml for comment headers, but Ansible Galaxy doesn’t actually distribute galaxy.yml, since it only exists to provide metadata when importing the collection into Galaxy.

If anyone cares to try, the galaxy.yml file is in the repo.

I’ll probably just take out the version scraping altogether. I thought it was nice to have it in the comment headers though.

1 Like

Alpha 3 is up. If anyone wants to test it out, please do!

ansible-galaxy collection install o0_o.host --pre

Once it’s installed and you have a basic inventory, run ansible-playbook -i path/to/inventory o0_o.host.m1
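If you don’t have one yet, a “basic inventory” can be as small as the sketch below. The file layout and host name are illustrative, not requirements.

```yaml
# inventory/hosts.yml — a minimal example inventory; the host name is
# just an example (it matches the Rocky VM shown earlier in the thread).
all:
  hosts:
    rocky8.hq.example.com:
```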

1 Like

Alpha 4 is up, which adds the users role. On first run, it will dump a configurable users dictionary into each host’s host_vars file. You can also simply supply something like this:

users:
  my_admin:
    adm: true
    lock: true

This will set up my_admin with passwordless admin privileges and your SSH key. Note that it would also delete any non-system users except the current Ansible user. To preserve the current user state, run the role without supplying a users dictionary and then edit what shows up in the inventory.

Here is an example of what the users dictionary looks like after an initial run on a Debian Vagrant VM.

# BEGIN ANSIBLE MANAGED BLOCK: Local users (Editable)
users:
  root:
    gecos: root
    group: root
    groups:
    - root
    home: /root
    shell: /bin/bash
    ssh:
      auth: []
      id: []
    uid: '0'
  vagrant:
    adm: true
    gecos: vagrant,,,
    group: vagrant
    groups:
    - audio
    - cdrom
    - dip
    - floppy
    - netdev
    - plugdev
    - vagrant
    - video
    home: /home/vagrant
    shell: /bin/bash
    ssh:
      auth:
      - pub: AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ==
        type: rsa
      id: []
    uid: '1000'
# END ANSIBLE MANAGED BLOCK: Local users (Editable)

Support for RouterOS here was a little rocky, but it works. Of course there is much less to configure for RouterOS users, but administration and authorized SSH keys are handled.

1 Like