Introduction to Terraform
Overview
Let us start with a quick definition from Wikipedia:
Terraform is an infrastructure as code software by HashiCorp. It allows users to define a datacenter infrastructure in a high-level configuration language, from which it can create an execution plan to build the infrastructure […]. Wikipedia
Building up infrastructure with Terraform can happen in many environments. One of them is Azure. In this lab you are going to explore the Terraform provider for Azure.
This lab does not provide copy-and-paste-ready code. Instead, you have to find the solution yourself using the Terraform documentation. There are also plenty of hints to help you solve the challenges.
Preparation
This lab assumes that you have a resource group assigned to you. If not, please create a resource group before you start the exercise.
Make use of the Terraform Azure Provider Documentation to solve the challenges.
Challenge 1: Get familiar with the Terraform Loop
- Create a Storage Account via Terraform in your resource group (a minimal sketch follows after this list). Hint: look at the `azurerm_storage_account` resource documentation.
- Add a tag to your deployment and issue a new deployment.
- Detect configuration drift by modifying the tag of your storage account in the Azure portal and re-running the Terraform deployment. Hint: look at the `terraform plan` output to see the drift.
- Update the resource in Azure with Terraform to revert the configuration drift.
- Destroy the created resource with Terraform. Hint: the `terraform destroy` command.
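If you get stuck, here is a minimal sketch of what such a configuration could look like. It is not a ready-made solution: the provider setup is the standard azurerm boilerplate, and the account name, resource group, location and tag values are placeholders you need to replace with your own.

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

# Placeholder values only: the storage account name must be globally unique,
# lowercase and 3-24 characters long.
resource "azurerm_storage_account" "lab" {
  name                     = "mylabstorage12345"
  resource_group_name      = "my-resource-group"   # the resource group assigned to you
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "lab"   # change this tag (here or in the portal) to experiment with drift
  }
}
```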
Challenge 2: Introduce Variables, create resources with dependencies and use Data Sources
- Provide the name of the resource group that you want to deploy your storage into as a variable to the terraform deployment. Hints:
- Terraform Variables.
- Terraform reads all configuration files within a directory, so you can split your code across multiple files. Best practice for variables: create a file like `variables.tf` and put your variable declarations there.
- Create a container (comparable to a sub-folder) inside the storage account. Think about how you reference the storage account. Just by name (a simple string)? What would be the consequence? Is there a better way?
- Most likely you hard-coded the location by setting the field as a string in the storage account properties. Coincidentally, this is perhaps the location of the resource group? If so, great; this is a best practice. But how can you make it even more transparent? Can't you just reference your already existing resource group, similar to the reference you used between the storage account and the storage container? Yes, you can! With Data Sources. Hint: Resource Group Data Source. (A sketch illustrating this follows after this list.)
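As a reference, here is a sketch of how the variable, the data source and the implicit resource references could fit together; all names are placeholders and the storage account name still has to be globally unique.

```hcl
# Sketch only: variable, resource and container names are placeholders.
variable "resource_group_name" {
  type        = string
  description = "Name of the resource group to deploy into"
}

data "azurerm_resource_group" "lab" {
  name = var.resource_group_name
}

resource "azurerm_storage_account" "lab" {
  name                     = "mylabstorage12345"
  resource_group_name      = data.azurerm_resource_group.lab.name
  location                 = data.azurerm_resource_group.lab.location   # reuse the existing group's location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "content" {
  name                  = "content"
  storage_account_name  = azurerm_storage_account.lab.name   # implicit dependency instead of a plain string
  container_access_type = "private"
}
```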
Challenge 3: Use Terraform Utility Functions and generate Output
Think about a typical challenge with storage accounts and other multi-tenant resources: getting a unique name. The reason: the name becomes part of a publicly listed hostname, and hostnames have to be unique. How can you achieve that with Terraform?
- Create the storage account with the same approach as in the first challenge, or just continue working in your existing sources. Take the input variable for the storage account name as a prefix and concatenate a (pseudo-)unique suffix (a sketch follows after this list). Hints:
- Look into locals to introduce locally calculated values.
- Maybe hashing something that is unique can help? Look into interpolations.
- Or, instead of hashing, maybe a random value generator can help? Random Provider. Think about the advantages and disadvantages of random vs. hash.
- Generate a (sensitive) output that returns the storage account's connection string. Hints:
- Use the `terraform output` command to print the information as JSON (hint: the `-json` flag). Interesting, right? Although we won't do anything with that JSON now, it gives you an idea of how this output can be fed into other tools or systems.
- Destroy everything and come back to a state where no resource is located in your Resource Group.
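One possible shape of the naming and output pieces, assuming a `storage_prefix` input variable and the `azurerm_storage_account.lab` resource from the previous sketch (both are assumptions, not part of the lab's given code):

```hcl
# Sketch only: assumes var.storage_prefix and azurerm_storage_account.lab exist in your config.
resource "random_string" "suffix" {
  length  = 8
  upper   = false
  special = false
}

locals {
  # prefix from the input variable plus a (pseudo-)unique suffix;
  # hashing something unique, e.g. substr(sha256(data.azurerm_resource_group.lab.id), 0, 8),
  # would be a deterministic alternative
  storage_account_name = "${var.storage_prefix}${random_string.suffix.result}"
}

output "storage_connection_string" {
  value     = azurerm_storage_account.lab.primary_connection_string
  sensitive = true
}
```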
Challenge 4: Combine Multiple Resources to build a VM
You will deploy a Virtual Machine in this challenge. As you might know, an Azure VM consists of multiple elements. The great thing about Terraform is that you can build things up incrementally, so in each step feel free to deploy the intermediary state by running `terraform apply`. When you look at the Terraform documentation you can basically get a copy-and-paste-ready solution. Feel free to copy over the resource config, but do it step by step to gain an understanding of what is actually happening. (A rough sketch of the full resource chain is also included after the step list below.)
- Create a fresh new folder for this task.
- Start by setting up a virtual network with a subnet. You can choose any private IP range.
- Next, create a Public IP address and Network Interface Card (NIC) resource. Make sure the NIC is registered in the previously created subnet.
- Create a Linux VM resource, linking it to the NIC. Make sure to use a small VM size (1 vCPU, e.g. `Standard_D1_v2`).
- Test connecting to the VM via the configured username/password or SSH key.
- Finally, destroy everything. Re-creating all resources would now be as simple as going through a new plan/apply cycle. That is, you should now be in a state where no resource is located in your Resource Group.
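For orientation, here is one possible shape of the full resource chain, reusing the resource group data source from the Challenge 2 sketch (`data.azurerm_resource_group.lab`); all names, the image and the credentials are placeholders, and exact argument names can differ between azurerm provider versions.

```hcl
# Sketch only: every name and credential below is a placeholder.
resource "azurerm_virtual_network" "lab" {
  name                = "lab-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = data.azurerm_resource_group.lab.location
  resource_group_name = data.azurerm_resource_group.lab.name
}

resource "azurerm_subnet" "lab" {
  name                 = "default"
  resource_group_name  = data.azurerm_resource_group.lab.name
  virtual_network_name = azurerm_virtual_network.lab.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "lab" {
  name                = "lab-pip"
  location            = data.azurerm_resource_group.lab.location
  resource_group_name = data.azurerm_resource_group.lab.name
  allocation_method   = "Static"
}

resource "azurerm_network_interface" "lab" {
  name                = "lab-nic"
  location            = data.azurerm_resource_group.lab.location
  resource_group_name = data.azurerm_resource_group.lab.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.lab.id          # registers the NIC in the subnet
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.lab.id
  }
}

resource "azurerm_linux_virtual_machine" "lab" {
  name                            = "lab-vm"
  location                        = data.azurerm_resource_group.lab.location
  resource_group_name             = data.azurerm_resource_group.lab.name
  size                            = "Standard_D1_v2"
  admin_username                  = "labadmin"
  admin_password                  = "ChangeMe-123!"                # lab only; prefer an SSH key
  disable_password_authentication = false
  network_interface_ids           = [azurerm_network_interface.lab.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```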
Challenge 5: Doing deployments in Cookie Cutter Style
Imagine you need to deploy an application that is used world-wide on Azure. For example, you want to have a frontend available in the US, Europe and Asia Pacific. And tomorrow you might need another instance in South Africa. In the best sense of the programming paradigm Don't Repeat Yourself, it is strongly discouraged to copy and paste three or four versions of your Azure resources. Instead, you should take only the locations (for example) as an input. (A sketch of this approach follows at the end of the challenge.)
- Create a fresh new folder for this task.
- Copy the following snippet to your `main.tf` for a start:

```hcl
variable "locations" {
  type    = list(string)
  default = ["westeurope", "westus"]
}

output "locations" {
  value = var.locations
}
```
- Now use Terraform's `count` to deploy a Web Application (and its required App Service Plan) in each of those listed regions. You can use the following config for the SKU to use free tier resources:

```hcl
sku {
  tier = "Free"
  size = "F1"
}
```
- Now add a new region to the list, either by tweaking the default value or by overriding the variable on the Terraform command line. Hint: you can get a list of Azure datacenter names via `az account list-locations --query '[].name'`. A new instance should be created.
- Destroy everything.
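One possible sketch of the count-based deployment, using the classic `azurerm_app_service_plan` / `azurerm_app_service` resources that match the `sku` block above (newer azurerm versions use `azurerm_service_plan` / `azurerm_linux_web_app` instead); the resource group variable and all names are placeholders.

```hcl
# Sketch only: assumes var.locations from the snippet above plus a resource group variable.
variable "resource_group_name" {
  type = string
}

resource "azurerm_app_service_plan" "frontend" {
  count               = length(var.locations)
  name                = "asp-frontend-${var.locations[count.index]}"
  location            = var.locations[count.index]
  resource_group_name = var.resource_group_name

  sku {
    tier = "Free"
    size = "F1"
  }
}

resource "azurerm_app_service" "frontend" {
  count               = length(var.locations)
  name                = "webapp-frontend-${var.locations[count.index]}"   # must be globally unique
  location            = var.locations[count.index]
  resource_group_name = var.resource_group_name
  app_service_plan_id = azurerm_app_service_plan.frontend[count.index].id
}
```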