Hello everyone, I’m back to discuss Ansible and Ansible Execution Environments. In my previous blog, we talked about why and how execution environments are critical for a successful Ansible implementation. I hope my guide was easy to follow, but as you may have noticed, the process requires a significant amount of manual effort to keep the containers updated.
Now imagine you’re using multiple execution environments for different purposes. You might have one for your Windows deployments, another for your network devices, and yet another for use in air-gapped environments. You need to keep all these environments up to date, and you must be able to test them before making them available to your colleagues.
The solution to this problem is… Ansible! I’ll show you how we can use the Red Hat CoP (Community of Practice) collections to automate the building of our execution environments. I won’t be showing you a full working pipeline; instead, I’ll focus on demonstrating only the building stage of the execution environment (EE).
As an SRE/DevOps engineer, I want to build a pipeline that automatically manages the lifecycle of my execution environments.
We can install the collection (infra.ee_utilities, which ships the ee_builder role used below) with this command:
ansible-galaxy collection install infra.ee_utilities
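If you build from a pipeline, it is usually cleaner to pin the collection in a requirements file instead of installing it ad hoc. Here is a minimal sketch, assuming a collections/requirements.yml in your repository (the path and the version constraint are just examples):

# collections/requirements.yml -- example path and version pin
collections:
  - name: infra.ee_utilities
    version: ">=3.0.0"   # pin to whatever release you have tested

# Then, in the pipeline:
#   ansible-galaxy collection install -r collections/requirements.yml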
This is a slightly modified Ansible Playbook that I use on my lab machines, and it should be easy to implement in your pipelines.
When writing your own playbook, the structure of the ee_list variable must align with the contents of the execution-environment.yml file, which was discussed in my previous blog.
---
- name: Playbook to create custom EE
  hosts: localhost
  gather_facts: false
  vars:
    # For controller configuration definition
    ee_builder_dir_clean: true
    builder_dir: .
    ee_update_base_images: false
    ee_base_image: quay.io/rockylinux/rockylinux:9
    ee_container_policy: ignore_all
    ee_pull_collections_from_hub: false
    ee_image_push: false
    ee_verbosity: 3
    ee_list:
      - name: custom_ee
        alt_name: Custom EE
        tag: 1-11-21-2
        dependencies:
          python_interpreter:
            package_system: python3.9
            python_path: /usr/bin/python3.9
          ansible_core:
            package_pip: ansible-core
          ansible_runner:
            package_pip: ansible-runner
          system:
            - python-requests
            - python-pyyaml
            - git
            - python3-devel
          python:
            - pytz # for schedule_rrule lookup plugin
            - python-dateutil>=2.7.0 # schedule_rrule
            - awxkit # for import and export modules
          galaxy:
            collections:
              - community.general
              - ansible.posix
              - ansible.utils
        options:
          package_manager_path: /usr/bin/dnf
        images:
          base_image:
            name: quay.io/rockylinux/rockylinux:9
  roles:
    - infra.ee_utilities.ee_builder
...
This Ansible role (infra.ee_utilities.ee_builder) can also automatically push the new container image to a container registry of your choice. Please refer to the documentation to explore all available options.
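As a rough sketch, the push could be enabled in the vars section along these lines. I am taking the ee_registry_* variable names from the role's documentation, so double-check them against the version you install; the registry URL and the credential lookups are placeholders that your pipeline would inject:

# Hypothetical push configuration -- verify the variable names against the
# infra.ee_utilities.ee_builder documentation for your installed version.
ee_image_push: true                                               # overrides the false value above
ee_registry_dest: registry.example.com/ansible-ee                 # placeholder registry/namespace
ee_registry_username: "{{ lookup('env', 'REGISTRY_USERNAME') }}"  # injected by your pipeline
ee_registry_password: "{{ lookup('env', 'REGISTRY_PASSWORD') }}"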
As you can see from the output of the podman images command, my execution environment was built successfully (localhost/custom_ee).
REPOSITORY                                    TAG        IMAGE ID      CREATED             SIZE
localhost/custom_ee                           1-11-21-2  8f40474a32f3  About a minute ago  453 MB
ghcr.io/ansible/community-ansible-dev-tools   latest     f6df2ac37aae  3 weeks ago         1.3 GB
quay.io/rockylinux/rockylinux                 9          3c8f8ff398c0  3 months ago        237 MB
Now all you need to do is build the rest of the pipeline 🙂
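If you want a starting point, here is a minimal sketch of what just the build stage could look like in GitLab CI. The image, the file names, and the runner requirements are assumptions on my side, and any CI system that can run ansible-playbook will work just as well:

# .gitlab-ci.yml -- minimal sketch of the build stage only (names are assumptions)
stages:
  - build

build-ee:
  stage: build
  # Use any runner/image that provides ansible-core, ansible-builder and podman;
  # building container images in CI typically needs a privileged or rootless-podman setup.
  image: ghcr.io/ansible/community-ansible-dev-tools:latest
  script:
    - ansible-galaxy collection install -r collections/requirements.yml
    - ansible-playbook create_custom_ee.yml   # the playbook shown above, file name is an example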
Did you find this article interesting? Are you an “under the hood” kind of person? We’re really big on automation and we’re always looking for people in a similar vein to fill roles like this one as well as other roles here at Würth Phoenix.