Reusable EC2 Instances Using Terraform Modules | by Michael Cassidy | Jul, 2022

A guide to mastering EC2 restructuring

In this project, we will go through a simple restructuring of an EC2 Instance template forked from GitHub. We will recreate the code using modules to make it more reusable and adaptable to a development or production environment. We will also go through how to remotely store our Terraform state file in Terraform Cloud to ensure the state data can be shared between all members of a team.

I will be using Visual Studio Code as my IDE, but feel free to use Cloud9 or another IDE of your choice. The steps in this tutorial will differ slightly if you choose to use a different IDE.

Prerequisites:

  • AWS Account
  • Proper Permissions for your user
  • GitHub Account
  • GitHub personal access token
  • Terraform installed on your IDE
  • AWS CLI installed and configured on your IDE
  • Terraform Cloud Account

If you would like to fork the repo from the following link, now is the time to do so. If not, simply copy the main.tf file and paste it into your IDE, and skip the next few steps:

https://github.com/Michael-Cassidy-88/terraformec2

Next, back in your IDE, you will need to clone the repo. For VS Code, simply click on the Source Control icon, and select “Clone Repository.”

Copy the repo URL from GitHub as shown below, then paste it into VS Code to clone the repository.

I suggest creating a new branch before proceeding. Do this by clicking on the branch/checkout icon in the bottom left corner of VS Code, and then create a new branch:

Let’s check out the code!

As you can see, we are limited in the usage of this code. We can only create one EC2 instance, in one region, using one AMI, and one instance type. We will use pieces of this file to create a new filesystem. First, let’s make a few new folders and files.
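For reference, a single-file configuration of this kind typically looks something like the sketch below. The region, AMI ID, and instance type are placeholders; the actual contents of the forked repo may differ:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # hard-coded region
}

resource "aws_instance" "ec2" {
  ami           = "ami-0123456789abcdef0" # hard-coded, placeholder AMI ID
  instance_type = "t2.micro"              # hard-coded instance type
}
```

Everything is hard-coded, which is exactly what the restructuring below will fix.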

Within your current working directory, make two folders: one called modules, and the other terraform.

The terraform folder will hold our root configuration. Inside it, we will add the following nested folders. The folder names serve as specific values that we will read into our code, so that different teams can work on different projects while reusing the same code. Just rename the folders to suit your needs:

terraform/dev/us-east-1/t2.micro/0-ec2-deployment

It may look complex at first glance, but as you will soon see, the folders show that we will be in a development environment (dev), in the us-east-1 region, and utilizing the t2.micro instance type. The final folder will be the folder where all the root files reside.

In the modules folder, we will keep it less complex. Simply add one more folder that is a replica of the 0-ec2-deployment folder. When you have completed the folder layout, it should look like this:

Create a .gitignore file in your AWS-Instance directory. This way, when we push our code back to GitHub, the extra files that we don’t need, or don’t want to share, will not be sent.

Next, we will create the following files in the terraform deployment directory: main.tf and backends.tf.

In the modules deployment directory, make files called main.tf and variables.tf. The final filesystem will look like this:
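Putting the folder and file steps above together (with AWS-Instance as the working directory from the clone), the layout looks like this sketch:

```
AWS-Instance/
├── .gitignore
├── modules/
│   └── 0-ec2-deployment/
│       ├── main.tf
│       └── variables.tf
└── terraform/
    └── dev/
        └── us-east-1/
            └── t2.micro/
                └── 0-ec2-deployment/
                    ├── main.tf
                    └── backends.tf
```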

In order to connect to Terraform Cloud so that we are able to store our Terraform state file remotely, we will create a new workspace, and then log in from our IDE. To do this, head over to your Terraform Cloud organization. Create an organization if you do not have one. In your organization, create a new workspace like the following:

We will choose a CLI-driven workflow so that we can execute our Terraform commands from our IDE.

Name your workspace, and create it:

On the next page, we will copy the information given to us in the example code, and paste it into the backends.tf file in our IDE:
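The example code Terraform Cloud shows for a CLI-driven workspace is a small terraform block pointing at your organization and workspace. A sketch with placeholder names (substitute your own organization and workspace):

```hcl
terraform {
  cloud {
    organization = "your-org-name" # placeholder: your Terraform Cloud org

    workspaces {
      name = "your-workspace-name" # placeholder: the workspace you just created
    }
  }
}
```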

Before we forget, let’s enter terraform login in our terminal. If you haven’t already, change directories into the terraform root deployment folder. The terminal will prompt you to confirm that you want to proceed.

Upon entering yes, you will be brought to a “Create API Token” screen where you will create an API token, and receive the credentials you need to log in to Terraform Cloud from your IDE.

Make sure to save the token somewhere safe.

Once you paste in your token to the IDE terminal, you should see the following display:

Let’s get to the code. We have the backends.tf file ready to go. Let’s go into the main.tf file in our terraform root deployment directory. Enter the following code:

The provider block lets Terraform know we are using AWS. The locals block captures our current working directory, and the values that follow are assigned according to the names of the corresponding folders.

Since we are in the 0-ec2-deployment folder (index 0), if we step back one folder (index 1), then we arrive at the instance type (t2.micro). You can see that as we step back into the folders, we access the information we define in the locals block.

The only value we might change in this configuration is the number of instances we want to launch. I have set mine to 2.
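Based on the index descriptions above, the root main.tf might look like the following sketch. Reversing the split path so that index 0 is the current folder, and the exact module source path, are my assumptions:

```hcl
provider "aws" {
  region = local.region
}

locals {
  # Reverse the path segments so index 0 is the current folder (0-ec2-deployment)
  cwd           = reverse(split("/", path.cwd))
  instance_type = local.cwd[1] # t2.micro
  region        = local.cwd[2] # us-east-1
  environment   = local.cwd[3] # dev (available for tagging)
}

module "ec2" {
  source         = "../../../../../modules/0-ec2-deployment"
  instance_type  = local.instance_type
  instance_count = 2
}
```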

Now that the root configuration is complete, let’s head over to the modules. Let’s start with variables.tf. Within variables.tf, enter the following code:

Here we are defining variables that we will pass to the main.tf module file from the main.tf root file.
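A sketch of those variable definitions, matching the values passed in from the root module (names are assumptions):

```hcl
variable "instance_type" {
  description = "EC2 instance type, derived from the folder name"
  type        = string
}

variable "instance_count" {
  description = "Number of EC2 instances to launch"
  type        = number
  default     = 1
}
```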

Next, in the modules main.tf file, enter the following code:

Here we will find our AWS instance resource. The data block uses the AWS Systems Manager Parameter Store to look up the latest Amazon Linux 2 AMI. AMI IDs change over time, and we want to keep our Terraform code reusable.

We can tag our EC2 instances so that they show the count index plus 1 when we later observe them in the AWS console.
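A sketch of the module’s main.tf, assuming the public SSM parameter path for the latest Amazon Linux 2 AMI:

```hcl
# Look up the latest Amazon Linux 2 AMI so the ID is never hard-coded
data "aws_ssm_parameter" "al2_ami" {
  name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

resource "aws_instance" "ec2" {
  count         = var.instance_count
  ami           = data.aws_ssm_parameter.al2_ami.value
  instance_type = var.instance_type

  tags = {
    # count.index is zero-based, so add 1 for human-friendly names
    Name = "Terraform_EC2-${count.index + 1}"
  }
}
```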

Let’s finally run our Terraform commands and see what happens. Run the terraform init command:

Next, run terraform plan, then terraform apply:

Let’s take a look at Terraform Cloud. If you go into the workspace you created, you should see the two instances and AMI in the overview section:

If you go into “States,” there you will find the Terraform state file stored remotely:

"version": 4
"terraform_version": "1.2.2"
…

You will not see the state file in your local filesystem.

In the AWS console, check out the running EC2 instances. You should see the tags we defined for each instance, as shown below:

Terraform_EC2-1 Terraform_EC2-2

Now, let’s terraform destroy it all so we don’t end up being charged for our running instances. Notice that the Terraform state file persists in Terraform Cloud.

Once the destruction is complete, we can push our code back to GitHub.

Here is the .gitignore file contents for the file we created earlier:
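A sketch of the contents, based on the common community .gitignore template for Terraform (since state lives in Terraform Cloud, local state files can safely be ignored):

```
# Local .terraform directories
**/.terraform/*

# State files (state is stored remotely in Terraform Cloud)
*.tfstate
*.tfstate.*

# Crash logs
crash.log

# Variable files that may contain sensitive values
*.tfvars

# CLI configuration files
.terraformrc
terraform.rc
```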

Click on the Source Control icon on the left-hand side in VS Code, stage the changes, and enter a message to let the GitHub repo know that you are making changes:

Now press Ctrl+Enter to commit your changes. Finally, in the VS Code terminal, enter the following command:

git push --set-upstream origin <CURRENT_BRANCH>

The branch I created was Terraform_Branch, so I will use that in the command. You may need to enter your GitHub username and personal access token for the push to succeed; you can create a token in the Developer settings under your profile settings.

You should see a “Compare & pull request” prompt back at your GitHub repo:

Merge the pull request to your main branch and you should see the changes made in your repo:

Congratulations. We have created reusable Terraform code in a logical filesystem to launch simple EC2 instances, all while storing our Terraform state file remotely in Terraform Cloud.

Thank you for reading.
