Here’s an interesting use case I came across that I thought would also be a valuable resource to share, hence this post.
I’ve worked on Terraform and am aware of the solution it provides to IaC principles. I’ve also worked extensively on Azure DevOps and know that Azure Pipelines is good at orchestration. So I thought to myself: let me bring these two together to provision a relatively simple resource like an Amazon S3 bucket. The Terraform configuration files are versioned and stored in GitHub, and Azure DevOps applies them to provision the resource (an S3 bucket) in AWS. It is the tool integration that I am focusing on here.
In this post, I list the steps I followed to integrate three incredible tools to achieve the objectives of IaC: infrastructure defined as code that is versioned in a source control repository, automated deployment, a repeatable process, and a consistent environment state.
The tools used are Microsoft Azure DevOps, HashiCorp Terraform, and Amazon S3. I am assuming that the reader is aware of what these tools are. However, just so we are all on the same page: Terraform is an IaC tool that lets us write infrastructure configuration files to manage various cloud services; its configuration language is declarative, and its operations are idempotent. Azure DevOps is Microsoft’s solution for the software development process that aids collaboration, traceability, and visibility using components like Azure Boards (work items), Azure Repos (code repository), Azure Pipelines (build and deploy), Azure Artifacts (package management), and Azure Test Plans, along with a plug-and-play marketplace to integrate a large number of third-party tools like Docker, Terraform, etc. And lastly, Amazon S3 is a storage service from the AWS cloud provider to store and retrieve artifacts such as files and folders.
Before we begin with the automation process, there are a few prerequisites we need to have. These are:
- AWS IAM user credentials (access key and secret access key) with permission to create a bucket in an AWS account
- an existing Amazon S3 bucket to store the remote state
- a GitHub repo containing the Terraform configuration files
- an Azure DevOps project
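For context, the Terraform configuration in the GitHub repo could look roughly like the following minimal sketch. The bucket name and variable names mirror what appears later in the post, but the exact file layout and contents of my repository are an assumption here:

```hcl
# main.tf -- minimal illustrative sketch, not the exact repo contents
terraform {
  # Backend details (bucket, key, region) are supplied by the
  # Azure DevOps init task, so the block can stay empty here.
  backend "s3" {}
}

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

variable "region" {}
variable "access_key" {}
variable "secret_key" {}

resource "aws_s3_bucket" "demo" {
  bucket = "terraform-bucket-17097"
}
```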
Step 1: Authenticate Azure DevOps to the GitHub repo where the Terraform configuration files are stored.
On the Azure DevOps portal, click on the gear icon at the bottom left corner. That launches the project settings page. In the list, find Service connections under Pipelines.
Click on the “New service connection” button (top right corner) and search for GitHub in connection types.
Select GitHub and click on Next. That loads a new pane requesting details to grant authorization. Click on Authorize to provide credentials to the GitHub repo.
Step 2: Install Terraform extension from the Azure DevOps marketplace for the Azure DevOps team project.
This extension adds the Terraform tasks and service connection types that we will use later in pipelines.
Step 3: Authenticate Azure DevOps to AWS for Terraform service connection.
Navigate back to the project settings to create another service connection (as in Step 1), and this time search for Terraform.
Note: the “AWS for Terraform” option won’t be visible if the Terraform extension from Step 2 is not installed for the project.
Click on “AWS for Terraform” and then on Next.
That loads a new pane requesting the IAM details (access key and secret access key) that Terraform will use to provision AWS resources. Provide the values from the prerequisites. The region value is the location where the state file of the Terraform configuration will be stored. Provide a name for the service connection and save it.
Step 4: Create a release definition
In this step, we create a new release pipeline. From the Pipelines menu on the left, click on Releases. This launches a new page to select a template. I prefer to begin with an Empty job. Then, under the Agent job, click on + to add tasks. This opens a new pane on the right. Enter Terraform to search.
There are two tasks available, and we use both.
Step 4a: Select Terraform tool installer and add that step to the Agent job.
Then select the Terraform task. Once both are added, select the first task (Install Terraform 0.12.3) to close the task pane on the right.
The screen looks as below.
I was working with version 0.13.5 of Terraform, and hence I updated the version field accordingly.
Step 4b: Terraform init
The first command is “terraform init,” so I selected that command from the drop-down. I was using the AWS provider in my code, and therefore I chose AWS from the Provider list. The Configuration directory is the location where this task runs, so I provided the path to the Terraform configuration files in my repository.
Further down, we are asked for the AWS backend configuration details: the AWS connection to use, the bucket to store the terraform.tfstate file, and the key, which is the relative path to the terraform.tfstate file inside the bucket. Recall that in Step 3 we created an AWS for Terraform connection. We provide the name of that connection, along with the name of the pre-existing bucket from the prerequisites.
I also updated the display name of the task from Terraform : aws to Terraform : init.
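For comparison, running the same init locally would mean writing out the equivalent backend configuration that the task supplies. The bucket below is the pre-existing state bucket from the prerequisites; the key path and region are illustrative assumptions:

```hcl
# Equivalent fully-written backend block (illustrative values)
terraform {
  backend "s3" {
    bucket = "skundu-terraform-remote-state" # pre-existing state bucket
    key    = "demo/terraform.tfstate"        # assumed relative path inside the bucket
    region = "us-east-1"                     # assumed region for the state bucket
  }
}
```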
Step 4c: Terraform plan
If you’ve used Terraform, you know that after init (initialize), we plan and then apply the plan. I replicated that by adding two more Terraform tasks. For the next task, update the display name to Terraform : plan and select the plan command from the drop-down. Populate the Configuration directory with the path to the Terraform configuration files. Under Additional command arguments, I pass some variables.
These are the variables that are referenced in the Terraform configuration files: region, access_key, and secret_key. I am also storing the plan output in a demo.tfplan file, which is used in the next step (terraform apply).
These variables, particularly the secret_key, should not be added to the repository. I declared them in the Terraform configuration files and pass the values in from Azure DevOps. The variables can be stored either as a variable group in a library or as variables in the release definition; I have stored them in the release definition itself.
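A sketch of how those declarations might look in the configuration; the variable names match the post, but the descriptions are my own additions:

```hcl
# variables.tf -- declarations only; values are injected by the release pipeline
variable "region" {
  description = "AWS region in which to create the bucket"
}

variable "access_key" {
  description = "AWS access key ID (release variable, kept secret)"
}

variable "secret_key" {
  description = "AWS secret access key (release variable, kept secret)"
}
```

The Additional command arguments field would then contain something like `-var region=$(region) -var access_key=$(access_key) -var secret_key=$(secret_key) -out demo.tfplan`, where `$(...)` is the Azure DevOps release-variable syntax.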
Step 4d: Terraform apply
Next, we populate the last Terraform task. This is the apply step. The command is “validate and apply.” I also pass an additional command argument, demo.tfplan, which is the output of the plan command that we ran in the previous step.
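Taken together, the three tasks mirror the standard local Terraform workflow. With the same arguments (backend values and region here are illustrative, as above), the equivalent commands would be roughly:

```shell
# Initialize with a partial backend config, as the init task does
terraform init \
  -backend-config="bucket=skundu-terraform-remote-state" \
  -backend-config="key=demo/terraform.tfstate"

# Plan, injecting the variables and saving the plan to a file
terraform plan -var "region=us-east-1" -out demo.tfplan

# Apply exactly the saved plan
terraform apply demo.tfplan
```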
And that brings us to the end of the release definition, and we can save it. I ran the release definition manually and got a green build (succeeded).
Navigating inside the logs, I got a detailed picture of the steps that Azure DevOps executed.
And on my AWS console, I could see that Azure DevOps created an S3 bucket named terraform-bucket-17097 using the Terraform configuration. There is also the skundu-terraform-remote-state bucket that I created earlier (prerequisite), where the terraform.tfstate file for the newly created resource (the S3 bucket terraform-bucket-17097) is stored.
Note: ignore the other bucket (terraform-bucket-25676). That is not relevant to this blog.
Now, I can run the Azure Pipeline multiple times, and in the end, I will have the same bucket: idempotency. I have the resource creation automated using Azure DevOps, and the process is repeatable. I have the infrastructure provisioning process defined as code that is stored in source control. I have the credentials secured. This process ensures that the state of the environment (here, just an S3 bucket) is consistent. Although I have the release definition set to manual for this demo, we can change that to be auto-triggered with each check-in (with appropriate guard-rails where required). These are the benefits of infrastructure as code.
I hope you found the post informative. Please do reach out if there are any questions I can answer.
GitHub Code Repository: LearnTerraform/Get-Started-AWS-009-CI-CD-AzureDevops/