If you regularly follow this blog, you know that we automatically generate and test a NetEye ISO every night. Starting from that ISO, we can also automatically provision a NetEye virtual machine in a VMware vSphere infrastructure. One might then wonder: “Since I already have these two ingredients available, could I also automatically provision a whole NetEye 4 cluster?”
Well, the answer is: now you can.
As a brief recap, the NetEye 4 clustering service is based on the RedHat 7 HA Clustering technologies.
Configuring a NetEye 4 cluster, however, is not an easy task. The procedure consists of multiple steps that must be executed in the right order to arrive at a fully working cluster at the end. As you can imagine, when these steps are done manually, it is very time-consuming and subject to errors.
As you have surely discovered in our previous #devops posts, we do love Ansible. In this case, we also decided to use Ansible for the purpose of provisioning and configuring our clusters.
Logically, we can organize our Ansible playbook into the following substeps: provisioning of the machines, pre-configuration of the nodes, cluster setup, and finally the secure install.
If you’ve already provisioned your machines, you can skip the first step. Otherwise, we rely on the vSphere provisioner that we described in our previous blog post, so we can add these lines to our playbook:
- name: provision instances for the cluster
  import_playbook: provisioning.yml
to import the playbook responsible for provisioning the machines that will compose the cluster.
Let’s assume we have provisioned the machines for our cluster. Now we must start configuring them. The next step is their pre-configuration, which consists of setting up passwords, IP addresses, network interfaces, SSH access, hostnames, etc.:
# sets prerequisites
- name: setup prerequisites for cluster setup
  import_playbook: cluster_prerequisites.yml
  tags:
    - prerequisites
One potential problem here is that, starting from a clean NetEye ISO, the system will ask the user to replace the default password with a more robust one. How can we interact with the system, change the password successfully, and continue with our cluster configuration, all without ever leaving Ansible?
Luckily, among the plethora of Ansible modules, we also find expect. As you can imagine, this module executes a command and responds to any prompts it produces. We can thus add a task to our playbook like the following:
# change the default password
- name: change default password
  delegate_to: localhost
  become: no
  expect:
    command: ssh <the user>@<our_host>
    timeout: 10
    responses:
      "password:":
        - <the old password>
        - <the old password>
        - <the new password>
        - <the new password>
      <expected_regex>: exit
  register: status
  changed_when: "status.stdout is defined and
    'authentication tokens updated successfully' in
    status.stdout"
With this Ansible task, we can set a new password for the user and let Ansible use it to connect to the machines during the rest of the configuration.
The next thing to do is to retrieve an unused IP from the range of available IPs to use as the externally facing cluster IP. Here another problem immediately arises: how do we avoid IP clashes, especially in testing environments where clusters are frequently created and destroyed, the IP range is limited, and you cannot resort to DHCP for IP assignment?
We solved this problem by implementing a small API written with Flask that keeps track of the state of each IP in the available range in real time and answers the IP requests coming from our Ansible runs.
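From the Ansible side, a request to such an API could look like the following minimal sketch; the URL, endpoint, and JSON fields are placeholders, not the real interface of our internal service:

# Hypothetical sketch: reserve a free cluster IP from our internal Flask API.
# The URL and payload fields are illustrative placeholders.
- name: request a free cluster IP from the IP reservation API
  delegate_to: localhost
  become: no
  uri:
    url: "http://ip-reservation.example.local/api/v1/reserve"
    method: POST
    body_format: json
    body:
      requester: "{{ inventory_hostname }}"
    return_content: yes
  register: ip_reservation

- name: store the reserved IP as the cluster IP
  set_fact:
    cluster_ip: "{{ ip_reservation.json.ip }}"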
Having obtained a free cluster IP from our API, we can now configure a secondary network interface on our machines to ensure inter-node communication, exchange SSH keys between the nodes, update the hostnames and the /etc/hosts file, and install any combination of our optional NetEye modules (Log management/SIEM, SLM, vSphereDB, etc.), so that we can easily test any software permutation in a cluster environment.
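To give a flavor of this pre-configuration step, keeping /etc/hosts aligned across all nodes can be done with a task along these lines; the cluster_nodes group and the cluster_internal_ip variable are illustrative names used only for this sketch:

# Hypothetical sketch: make every node resolvable by name on every other node.
# 'cluster_nodes' and 'cluster_internal_ip' are illustrative names.
- name: add all cluster nodes to /etc/hosts
  lineinfile:
    path: /etc/hosts
    regexp: "{{ hostvars[item].inventory_hostname }}$"
    line: "{{ hostvars[item].cluster_internal_ip }} {{ hostvars[item].inventory_hostname }}"
  loop: "{{ groups['cluster_nodes'] }}"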
Once the machines have been successfully pre-configured, we can proceed to the cluster setup. To do so, we add:
# sets the cluster nodes
- name: setup cluster nodes
  import_playbook: cluster_setup.yml
  tags:
    - setup
to our main playbook. The cluster setup updates the basic cluster configuration based on a template that the user can modify to suit her/his needs, configures fencing at the node level, and runs the scripts responsible for actually configuring the cluster and its resources. We decided to reuse the existing cluster_base_setup.pl and cluster_service_setup.pl scripts that are currently shipped with NetEye, rather than rewriting everything from scratch.
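As a rough illustration of this step, rendering the user-editable configuration and invoking one of the shipped scripts could look something like the sketch below; the template name, destination path, and arguments variable are placeholders, since the exact invocation depends on the environment:

# Hypothetical sketch: render the user-editable cluster configuration
# and invoke the setup script shipped with NetEye.
# Template name, destination path and arguments are placeholders.
- name: render the cluster configuration from its template
  template:
    src: templates/cluster.conf.j2
    dest: /root/cluster.conf
  run_once: yes

- name: run the base cluster setup script
  command: "cluster_base_setup.pl {{ cluster_base_setup_args }}"
  run_once: yes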
At this point, the final step is to perform the NetEye installation on the various nodes. This can also be done via Ansible, by adding the following import to our playbook:
# performs secure install
- name: secure install
  import_playbook: secure_install.yml
  tags:
    - secure_install
In the secure_install.yml playbook, we put all the nodes in standby except one. We then execute the autosetup scripts to perform the software configuration needed before first use, and finally take the nodes out of standby. Et voilà! Our automatically provisioned NetEye 4 cluster is ready and fully working.
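To give an idea of what that playbook does, here is a heavily simplified, hypothetical sketch of the standby / autosetup / unstandby sequence; the group name and the autosetup command variable are placeholders, not the actual contents of secure_install.yml:

# Hypothetical sketch of the standby / autosetup / unstandby sequence.
# 'cluster_nodes' and 'neteye_autosetup_command' are illustrative placeholders.
- name: put all nodes except the first one in standby
  command: "pcs cluster standby {{ item }}"
  run_once: yes
  loop: "{{ groups['cluster_nodes'][1:] }}"

- name: run the autosetup on the remaining active node
  command: "{{ neteye_autosetup_command }}"
  run_once: yes

- name: bring the standby nodes back online
  command: "pcs cluster unstandby {{ item }}"
  run_once: yes
  loop: "{{ groups['cluster_nodes'][1:] }}"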