Download an Amazon S3 bucket into an Amazon EC2 instance in 5 steps using user data and Terraform

I required a few files and folders on an Amazon EC2 instance as part of the provisioning process. So, the objective was to upload these files and folders to an Amazon S3 bucket and download them onto the Amazon EC2 instance with the help of a user data script and Terraform.
Note: As a reader of this note, I assume you are familiar with Amazon S3 and Amazon EC2 and their usage. If you want to know more, please refer to AWS-Docs on Amazon S3 and Amazon EC2.

I broke this use case down into five smaller ones. The end objective was an Amazon EC2 instance with specific files and folders on it. Amazon S3 is a storage solution from AWS, so that is where I stored the files and folders. As part of provisioning the Amazon EC2 instance, my approach was to download the files and folders from Amazon S3 using the AWS CLI (or the AWS.Tools module for PowerShell). Provisioning an Amazon EC2 instance is done through a user data script, so I needed one, and I needed the AWS tooling installed on the instance. The Amazon EC2 instance also required permissions to access the Amazon S3 bucket. Putting all of this together, the sequence of steps was as follows:
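For context, the Terraform snippets sketched throughout this note assume a standard AWS provider setup along these lines; the region and version constraint are placeholders, not the repository's actual configuration:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  # Assumption: use whichever region holds the bucket and the instance.
  region = "us-east-1"
}
```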

Step 1: Provision an Amazon S3 bucket and store files and folders required by the Amazon EC2 instance
The Amazon S3 bucket was already created for this specific use case, so I uploaded the files stored in the local repository (the files folder).
[Image: files from the local files folder uploaded to the Amazon S3 bucket]
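As a sketch, uploading a local files folder with Terraform can look like the following; the bucket resource and its name are assumptions, since in my case the bucket already existed:

```hcl
resource "aws_s3_bucket" "files" {
  bucket = "ec2-provisioning-files-demo" # assumption: any globally unique name
}

# Upload everything under the local "files" folder, preserving relative paths.
resource "aws_s3_object" "files" {
  for_each = fileset("${path.module}/files", "**")

  bucket = aws_s3_bucket.files.id
  key    = each.value
  source = "${path.module}/files/${each.value}"
  etag   = filemd5("${path.module}/files/${each.value}")
}
```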
Step 2: Provision an Amazon EC2 instance and have a user data script ready
When taking this approach, there are multiple factors to consider: should the script run multiple times or only once? What sequence of steps should be executed via user data? How large is the user data script? I covered these separately in Working with Amazon EC2 user data and Terraform.
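A minimal sketch of wiring a PowerShell user data script into the instance, where the AMI lookup and instance type are assumptions:

```hcl
# Assumption: a recent Windows Server AMI; any Windows AMI works the same way.
data "aws_ami" "windows" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["Windows_Server-2022-English-Full-Base-*"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.windows.id
  instance_type = "t3.medium"

  # Windows user data runs under PowerShell when wrapped in <powershell> tags.
  user_data = <<-EOF
    <powershell>
    # Provisioning steps from Steps 4 and 5 go here.
    </powershell>
  EOF
}
```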

Step 3: Attach an IAM role to the Amazon EC2 instance
The Amazon EC2 instance required a set of permissions to communicate with the Amazon S3 bucket. I described that in detail in Attach IAM role to Amazon EC2 instance using Terraform.
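A minimal sketch of that wiring, assuming read-only access to the single bucket from Step 1:

```hcl
# Role that EC2 instances are allowed to assume.
resource "aws_iam_role" "ec2_s3_read" {
  name = "ec2-s3-read"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Assumption: read-only access scoped to the one bucket is enough here.
resource "aws_iam_role_policy" "s3_read" {
  name = "s3-read"
  role = aws_iam_role.ec2_s3_read.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:ListBucket"]
      Resource = [aws_s3_bucket.files.arn, "${aws_s3_bucket.files.arn}/*"]
    }]
  })
}

# The profile is what actually gets attached to the instance.
resource "aws_iam_instance_profile" "ec2_s3_read" {
  name = "ec2-s3-read"
  role = aws_iam_role.ec2_s3_read.name
}
```

The instance then picks up these permissions through its iam_instance_profile argument, pointing at the profile above.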

Step 4: Install AWS CLI on the Amazon EC2
Since the underlying instance was running Windows, I could achieve this objective in two ways: install the AWS.Tools module for PowerShell, or install the AWS CLI. I have two separate notes on these that you may find useful: Install AWS.Tools module and Install AWS CLI.
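For the AWS.Tools route, the install fragment inside the user data script can be as small as this; picking only the AWS.Tools.S3 module is my assumption, since that is all Step 5 needs:

```hcl
locals {
  # PowerShell fragment for the user data script: install the S3 module
  # from the PowerShell Gallery so Copy-S3Object is available later.
  install_aws_tools = <<-EOT
    Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
    Install-Module -Name AWS.Tools.S3 -Force -AllowClobber
  EOT
}
```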

Step 5: Call the Copy-S3Object command
This is the final step, and I completed it with a Copy-S3Object call in the user data script.
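A hypothetical version of that call, with the bucket name, keys, and local paths as placeholders; the -KeyPrefix and -LocalFolder variant pulls down a whole folder:

```hcl
locals {
  # PowerShell fragment for the user data script: download the files
  # uploaded in Step 1 onto the instance.
  download_files = <<-EOT
    # Single file
    Copy-S3Object -BucketName "ec2-provisioning-files-demo" -Key "app.config" -LocalFile "C:\app\app.config"

    # Whole folder (every object under the prefix)
    Copy-S3Object -BucketName "ec2-provisioning-files-demo" -KeyPrefix "scripts" -LocalFolder "C:\app\scripts"
  EOT
}
```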

I have a working copy of the Terraform code in my repository: add-s3-access.

Another critical aspect of working with a user data script is clear logging, which comes in handy when dealing with error conditions while building the logic. It also helps to identify where the user data script (PowerShell) and the logging information are stored. I covered these considerations in the code version stored in the GitHub repository.
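One simple way to get that logging is a PowerShell transcript around the whole script; this is a sketch, and the transcript path is an assumption rather than the repository's exact choice:

```hcl
locals {
  # PowerShell fragment: Start-Transcript captures everything the user
  # data script writes, so failures are easy to trace afterwards.
  logged_user_data = <<-EOT
    New-Item -ItemType Directory -Path "C:\logs" -Force | Out-Null
    Start-Transcript -Path "C:\logs\userdata-transcript.txt" -Append
    # ... install the module, download the files ...
    Stop-Transcript
  EOT
}
```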
[Image: log file from the user data execution]
Comparing the user data script and the log file allows you to understand the logic and process flow. A few minutes after running the terraform apply command, I logged into the Amazon EC2 instance and verified that the two files from the Amazon S3 bucket were available on the instance.

I hope this note was useful. Let me know if you have any questions or suggestions.
