Automatic provisioning of infrastructure is becoming a must-have capability for teams that want to automate their entire provisioning and deployment process, and the Wuerth Phoenix team is no exception.
Tools like HashiCorp’s Terraform can help teams reach this goal by letting them write IaC (Infrastructure as Code) to define, provision, and manage infrastructure. This makes the provisioning process automatic, repeatable, and more reliable than performing all the operations manually. It also offers a representation of the state of the infrastructure that can be stored in version control systems, and that is readable by almost anyone without much hassle.
To install Terraform and start playing with it, you can follow the instructions here. To verify that Terraform is correctly installed, type the following command on your Linux box:
# terraform version
Terraform v0.11.13
If you see output similar to that above, you can be sure that Terraform is installed and ready to provision your infrastructure.
Before starting with the actual code, it is worth mentioning that Terraform supports multiple infrastructure providers (AWS, Google Cloud, Microsoft Azure, VMware vSphere, and many more), which gives you a lot of freedom when deploying your products. In this post, I will use vSphere as a running example, since we work with VMware products here at Wuerth Phoenix. However, feel free to adapt the code to any provider you would like to try; just make sure that you have a valid account for that provider.
Having looked at the prerequisites for writing Terraform code, we can now start with the code itself. Terraform code is written in a declarative language called HCL (HashiCorp Configuration Language), which means that you, as a user, declare what infrastructure resources you want to deploy, and Terraform then handles the underlying details for you.
Let’s start with a simple but common scenario. Imagine you want to clone a virtual machine template that you have in your vCenter, in order to run some operations on it.
First of all, since we decided to work with our vSphere product, we can declare our provider as in the following (simplified) code:
provider "vsphere" {
# credentials for accessing our vsphere server
user = "${var.vcenter_user}"
password = "${var.vcenter_password}"
vsphere_server = "${var.vsphere_server}"
# version of the provider plugin we want to target
version = "~> 1.9"
}
This tells Terraform to use the vSphere provider with the credentials specified in a few variables (indicated with var), and to use a specific version of the provider (~> 1.9 means any 1.x release from 1.9 onward). Pinning the provider version is particularly useful when running Terraform in production, to ensure that you don’t update the provider to versions that you have not yet tested. We can save the code snippet above in a file called provider.tf.
To declare our variables, we can use another .tf file that we call, in this case, variables.tf. This file will contain all the variables that we will use across our Terraform code. Variables can be declared as shown in the following:
...
variable "vcenter_user" {
type = "string"
description = "The user to access vSphere."
}
variable "vcenter_password" {
type = "string"
description = "The password for the current user."
}
variable "vsphere_server" {
type = "string"
description = "The vSphere server."
}
...
Note that we did not specify any default value for our variables, because we want to let the user specify the values in a different place in our code, as we will see later on in this post.
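In our setup, those values will arrive through the module block we write in main.tf later on. It is worth knowing, though, that Terraform can also read variable values from a terraform.tfvars file placed next to your configuration, from -var flags on the command line, or from TF_VAR_<name> environment variables (handy for secrets such as TF_VAR_vcenter_password). A minimal sketch of a terraform.tfvars file, with placeholder values:
vcenter_user     = "jdoe"
vcenter_password = "a-secret-password"
vsphere_server   = "vcenter.example.com"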
Before creating our new resource (the new cloned virtual machine), we need to declare a few data sources in order to reference some existing resources in vSphere that we must pass to the new resource we want to create.
...
data "vsphere_datacenter" "dc" {
name = "${var.datacenter}"
}
data "vsphere_resource_pool" "pool" {
count = "${var.resource_pool != "" ? 1 : 0}"
name = "${var.resource_pool}"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
data "vsphere_datastore" "ds" {
name = "${var.datastore}"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
...
The above (simplified) example declares some data sources. Terraform uses the arguments we supply in each data block to look up these resources, without having to create them. In our case, the data sources provide the information related to our datacenter, resource pool, datastore, and so on. We’ll save this snippet in a file that we call datasources.tf.
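The resource we are about to declare will also reference a data source called template (you will see data.vsphere_virtual_machine.template in the next snippet). A minimal sketch of how such a data source could look in datasources.tf, assuming a hypothetical variable template_name that holds the name of the template in vSphere:
data "vsphere_virtual_machine" "template" {
  # look up an existing template by name; nothing is created
  name          = "${var.template_name}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}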
Finally, it’s time to declare our new resource:
resource "vsphere_virtual_machine" "instance" {
...
num_cpus = "${var.num_cpus}"
memory = "${var.memory}"
...
network_interface {
...
}
disk {
label = "disk0"
size = "${var.disk_size != "" ? var.disk_size : data.vsphere_virtual_machine.template.disks.0.size}"
}
clone {
template_uuid = "${data.vsphere_virtual_machine.template.id}"
customize {
linux_options {
host_name = "${var.vm_name_suffix}${count.index}"
domain = "${var.domain_name}"
}
}
}
}
In the snippet above, we declare a new resource and instruct Terraform to clone one of the existing templates that we have in our vSphere. We can also specify a few attributes via the customize block (specifically, we define the host name and the domain name of the new machine). We save this code in a new file that we call resource.tf.
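One detail worth pointing out: host_name interpolates count.index, which implies that the elided part of the resource declares the count meta-argument. A hedged sketch, assuming a hypothetical variable instance_count:
resource "vsphere_virtual_machine" "instance" {
  # create this many identical clones; count.index is 0, 1, 2, ...
  count = "${var.instance_count}"
  ...
}
With count set, Terraform creates that many copies of the resource, and count.index gives each clone a unique host name.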
At this point, we have all the ingredients necessary to run our code and create the new resource. First, though, we wrap the code we have just written into a new Terraform module called vsphere_virtual_machine. We define a new main.tf file and put some code in it, as in the following:
module "vsphere_virtual_machine" {
source = "<the folder in which to find the code of the module>"
version = "<version of the module>"
datacenter = "<the name of our datacenter>"
datastore = "<the name of our datastore>"
resource_pool = “<the name of the resource pool>"
vcenter_user = "<the user>"
vcenter_password = "<the password>!"
vsphere_server = "<the server address>"
disk_size = "<disk size"
memory = "<the memory>"
num_cpus = "<the amount of cpu>"
….
}
This file specifies the values of the variables we need to run our code (vSphere variables, VM variables, and credentials). As you can see, there is also a variable called source that points to the location in which we have stored our Terraform code.
Assume that you have a folder structure in your working directory as indicated below:
terraform_project/
    main.tf
    module_src/
        datasources.tf
        provider.tf
        resource.tf
        variables.tf
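The original module keeps things minimal, but you will often want to read back attributes of the freshly cloned VM, such as its IP address. A sketch of an outputs.tf that could be added to module_src (the output name here is our own choice, not part of the original code):
output "default_ip_address" {
  # default_ip_address is a computed attribute of the vsphere_virtual_machine resource
  value = "${vsphere_virtual_machine.instance.*.default_ip_address}"
}
A matching output block in main.tf that reads module.vsphere_virtual_machine.default_ip_address would then make the address visible via terraform output.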
With the help of a terminal, navigate to the root folder of your Terraform project (terraform_project in the example above) and run the command:
# terraform init
The command performs the setup of the local environment, downloads the code of the provider we want to use to create our resources, and initializes it. If terraform init runs without generating errors, we can now type:
# terraform plan
This command outputs all the actions Terraform will take when performing its actual tasks. Running this command before making changes to any environment is a good way to check your code and avoid making potentially drastic changes to your infrastructure. You can also save the current plan by using the -out flag.
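For example (the plan file name is arbitrary):
# terraform plan -out=my.plan
# terraform apply my.plan
Applying a saved plan file guarantees that Terraform executes exactly the actions that were previewed.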
If the sanity check performed with the terraform plan command runs fine, we can finally execute our Terraform code and clone our virtual machine. To execute the code, we can simply type:
# terraform apply
Be careful when running this command, since it can actually break your infrastructure!
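Note that, by default, terraform apply shows the plan once more and waits for interactive confirmation before touching anything. In unattended pipelines this prompt can be skipped with the -auto-approve flag (use it with care):
# terraform apply -auto-approve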
What happens here is that Terraform reads our code and translates it into API calls to our vSphere. Assuming the run completes with no errors, we now have a new virtual machine on our vSphere, based on a template of our choice and with a set of new custom attributes!
At this point, we can start playing with our new machine by installing NetEye and configuring it as desired (for instance, using an Ansible playbook).
Suppose, however, that during the configuration of our VM we accidentally ran a wrong command, compromising the state of the machine. At this point, we should eliminate the VM to free resources on our vSphere. Terraform can also help us here; we just run
# terraform destroy
to destroy our broken VM with practically zero pain.
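If the same Terraform state also manages resources you want to keep, the destruction can be limited with the -target flag (shown here against our module, as an example):
# terraform destroy -target=module.vsphere_virtual_machine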