My objective was to use Terraform to provision resources across environments in AWS. Following a typical CI/CD model, the idea was to build once and deploy many times. Since there is nothing to compile in Terraform, I wanted to make sure the same Terraform code was applied across all environments. So I thought, let's run terraform init and terraform validate inside a CI build and package the result into an artifact. The package would also contain the .terraform folder, where the versioned modules and provider plugins referenced by the Terraform configuration are stored. By including the .terraform folder, I ensured that the configuration was the same across all environments. (There are pros and cons to including that folder in the build artifact.) Then, on the CD side, I would run terraform plan and terraform apply on the same package. I ideated on that approach in my previous note and listed its limitations. The solution was to use terraform workspace.
There is already a good amount of information available on terraform workspace, so I will not discuss the concept itself here. In this note (part one of two), I explore the idea of integrating terraform workspace with Azure DevOps pipelines. Part two is a detailed walkthrough of the automation process. If you are new to terraform workspace, I suggest reading this article and familiarizing yourself with the workspace commands.
Here is another interesting note on workspace internals.
Let me expand on the use case I was working on. I wanted to provision an AWS VPC and subnets for three environments: Dev, Test, and Prod, using the same Terraform configuration files for all of them. These environments belonged to different AWS accounts, so Dev, Test, and Prod were each a separate AWS account. I also used Azure DevOps pipelines for automation (covered in part two; I will provide the URL here once that is ready).
In all my previous Terraform projects, I did not need cross-account access in AWS. The idea in this project was to automate Terraform using the credentials of a user in a separate AWS account (let's call it the Automation account) and provision resources in the Dev, Test, and Prod AWS accounts. AWS IAM supports this use case by creating a trust relationship between two accounts. I have a detailed note on that at Creating IAM assume-role relationship between two AWS accounts.
I have committed the code into my GitHub repository –Working-with-Terraform-workspace-and-AWS.
While navigating through the code, there are a few concepts I want to mention.
Using the terraform.workspace value:
As noted in the previous links on terraform workspace, per the HashiCorp docs, every initialized working directory has at least one workspace (if you haven't created other workspaces, it is a single workspace named default). Hence, right after initialization, the value of terraform.workspace is `default`. Type terraform workspace list to check which workspace is selected; the selected workspace has a * against its name. If other workspaces already exist, you can select a specific one before you proceed with managing resources. The value of terraform.workspace is whatever we last set with terraform workspace select $(workspacename) or terraform workspace new $(workspacename). At any given time, only one workspace is active in a working directory.
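To make this concrete, here is a minimal sketch of interpolating terraform.workspace into a configuration. The resource name, CIDR, and tag names are illustrative assumptions, not the actual project code:

```hcl
# Hypothetical example: tag resources with the selected workspace name,
# so the same configuration yields distinct, identifiable resources per
# environment. The CIDR and names below are placeholders.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name        = "vpc-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```

With the Dev workspace selected, the VPC above would be tagged `vpc-Dev`; with Test, `vpc-Test`; and so on.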
The map variable type and the associated lookup() function:
In the variables.tf source file there are three variables of type = map. These hold key-value pairs. The lookup() function passes a key to the variable and receives the corresponding value in return. In this case, we pass the value of terraform.workspace as the key, so each variable holds values specific to Dev, Test, and Prod. For this reason, we create three separate workspaces named Dev, Test, and Prod.
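As an illustration of this pattern (the variable name and CIDR values here are assumptions, not the exact contents of the project's variables.tf):

```hcl
# A map variable keyed by workspace name. Each workspace gets its own
# value; the names and CIDRs below are placeholders.
variable "vpc_cidr" {
  type = map(string)
  default = {
    Dev  = "10.10.0.0/16"
    Test = "10.20.0.0/16"
    Prod = "10.30.0.0/16"
  }
}

# lookup() returns the value whose key matches the current workspace,
# e.g. "10.20.0.0/16" when the Test workspace is selected.
locals {
  selected_cidr = lookup(var.vpc_cidr, terraform.workspace)
}
```

Because the workspace name is the map key, selecting a workspace automatically selects the matching set of values with no code change.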
I haven't seen a better article than the one below for explaining this concept. I highly recommend going through it to understand how to use AWS assume-role to provision resources in multiple AWS accounts.
In this project, in the provider.tf source file, you will see that I select the role_arn based on the selected workspace. That way, I can map roles from separate accounts to separate environments. For example, the user in the Automation account (the trusted account) has permission to assume a role in one of the trusting accounts (Dev, Test, or Prod) depending on the value of the key (terraform.workspace) we pass. I then used the Automation account user's access key and secret key to provision resources across the three AWS accounts.
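A sketch of what that provider.tf pattern can look like; the account IDs, role name, region, and variable names below are placeholders, not the actual values from the repository:

```hcl
# Map each workspace to a role ARN in the corresponding environment
# account. All ARNs below are fabricated placeholders.
variable "role_arns" {
  type = map(string)
  default = {
    Dev  = "arn:aws:iam::111111111111:role/terraform-provision"
    Test = "arn:aws:iam::222222222222:role/terraform-provision"
    Prod = "arn:aws:iam::333333333333:role/terraform-provision"
  }
}

# The Automation account's credentials (supplied outside this file)
# assume the role mapped to the selected workspace.
provider "aws" {
  region = "us-east-1" # placeholder region

  assume_role {
    role_arn = lookup(var.role_arns, terraform.workspace)
  }
}
```

The effect is that switching workspaces also switches which AWS account Terraform provisions into, while the caller's credentials stay the same.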
If you want to give this a try, please go through the steps below to work with the code.
Step 1: Create a Trust relationship between multiple AWS accounts.
In this project, we have three environment accounts and one Automation account. Create a trust relationship between the Automation account user and the three environment accounts. I have a separate note on how to create that at –create trust relationship between AWS accounts.
Step 2: Create an S3 bucket and a DynamoDB table in the Automation account to store the remote state.
If you are new to this, please go through my note on the same at –create terraform pre-requisites.
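For orientation, a minimal backend.tf sketch under these assumptions: the DynamoDB table name is a placeholder, and the bucket, key, region, and credentials are deliberately left out of the file because they are supplied via -backend-config flags at init time (a partial backend configuration):

```hcl
# Partial S3 backend configuration. The bucket, key, region, and
# credentials are passed on the command line during `terraform init`;
# the table name below is a placeholder.
terraform {
  backend "s3" {
    dynamodb_table = "terraform-state-lock" # placeholder; used for state locking
    encrypt        = true
  }
}
```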
Step 3: Initialize
After you get a copy of the code, populate the variables correctly and then initialize the code with the command as shown below.
```shell
terraform init \
  -backend-config="bucket=$(remote-state-bucket-name)" \
  -backend-config="key=tf/terraform.tfstate" \
  -backend-config="region=$(region)" \
  -backend-config="access_key=$(access_key)" \
  -backend-config="secret_key=$(secret_key)" \
  -no-color
```
Step 4: Check workspace
Run terraform workspace list to identify the current workspace. There is an asterisk (*) next to the current workspace.
Step 5: Create a new workspace
Run terraform workspace new Dev to create a new Dev workspace, or select an already-created workspace using terraform workspace select $(workspacename).
Step 6: Run Terraform trio
Run terraform validate -> plan -> apply to see how the resources are provisioned in the Dev AWS account.
To provision resources in the Test and Prod accounts, we must first change the workspace from Dev to Test or Prod. The steps below provision the resources in the Test account.
Step 7: Create a new Test workspace and select it
terraform workspace new Test
This shifts us into the Test workspace. To confirm, type terraform workspace list and see where the asterisk (*) is.
Step 8: Run Terraform trio
Run terraform validate -> plan -> apply to see how the resources are provisioned in the Test AWS account.
Similarly, you can run the same sequence of commands to provision resources in the Prod AWS account.
The S3 bucket where the state file is stored now has a different structure: there is an extra env:/ level that separates the state of one workspace from another.
In the above configuration, the S3 bucket name in backend.tf was set to "terraform-project4-vpc-peering".
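Assuming the key tf/terraform.tfstate from the init command earlier, the bucket layout would look roughly like this (env: is the S3 backend's default workspace key prefix):

```
terraform-project4-vpc-peering/
├── tf/terraform.tfstate                  # default workspace
└── env:/
    ├── Dev/tf/terraform.tfstate          # Dev workspace
    ├── Test/tf/terraform.tfstate         # Test workspace
    └── Prod/tf/terraform.tfstate         # Prod workspace
```

Each workspace thus writes to its own state object, which is what keeps the environments from interfering with each other.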
Resources created in the Dev account do not conflict with those in the Test or Prod accounts, and vice versa. Using the same Terraform configuration, we can now create and manage resources in multiple AWS accounts. Stated differently, Dev and Test account resources do not get destroyed when we provision resources in the Prod account.
Also, using the AWS assume-role feature with trust relationships across multiple AWS accounts means we do not need separate AWS credentials for different accounts; we work with only one set of credentials.
I hope you found this note useful, and let me know if there are any questions or suggestions.
If you are interested in reading about working with terraform workspace in an automated scenario, check my next note –CI/CD of Terraform workspace with YAML based Azure Pipelines.