Docker: Build a Custom NGINX Image and Push It to AWS ECR and GitLab | by Michael Cassidy | Jun, 2022

This project walks through how to build a custom NGINX Docker image in AWS Cloud9 using a Dockerfile and a docker-compose.yml file. We will push that image to AWS Elastic Container Registry (ECR), push it to GitLab as well, and finally set up a CI/CD pipeline in GitLab.


  • AWS Account with IAM permissions
  • AWS Cloud9 IDE with AWS CLI installed
  • GitLab account

First, log in to your AWS console and go to the Cloud9 service. Create a new environment. I will name mine Docker NGINX. For the platform, select the Ubuntu server. Leave the rest of the settings default. Create the environment.

Search the web for your IP address and copy it down. Go back to the AWS console, and locate the EC2 Service. Select the Cloud9 instance that is running, and then click on your security groups.

Next, we are going to edit the inbound rules. Add a rule allowing all traffic from your IP address, selecting the CIDR block with the /32 suffix so only your address is allowed. Save the rules.

Back in the Cloud9 environment, we are going to set up our filesystem. Let’s start with the Dockerfile. Right click on the first environment folder, and create a new folder, called Docker. Within that folder, create a new file. The file must be named Dockerfile. You should see a little Docker whale next to the file now.

Let’s continue creating the files by adding an index.html file, which we will use to customize our image to display the date and time the container was created. Finally, add a docker-compose.yml file so we can easily create an NGINX image.

Change directories into the Docker folder. Docker, the open-source containerization platform, comes installed on AWS Cloud9. To see the version, type:

docker --version

Next, we will install docker-compose. To do this, type:

sudo apt install docker-compose

Once that is installed, go to the docker-compose file. We will write the following to create an NGINX image in a Docker container:
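The compose file appeared as a screenshot in the original; a minimal sketch consistent with the later `curl localhost:8080` step (host port 8080 mapped to NGINX's port 80; the service name is an assumption) would look like this:

```yaml
# docker-compose.yml — a sketch; the original file was shown only as
# an image, so the service name and version are assumptions.
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```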

Let’s see if we can create and start the container. We will use the “-d” flag to run the container in detached mode:

docker-compose up -d

It should take less than a minute for your container to be up and running. To verify that the container is running:

docker ps

Your screen should look something like this:

Now let’s curl the local host:

curl localhost:8080

You should see the default NGINX welcome page (“Welcome to nginx!”) in return. We are going to take that HTML content and modify it to create a custom NGINX web server:

Open up the index.html file. I will enter the following, specifically noting the date and time of the container creation to be displayed:
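The file contents were shown as a screenshot in the original; a simple stand-in that displays the container's creation date and time (the exact wording and date are placeholders) could be:

```html
<!-- index.html — hypothetical content; the original was shown only
     as an image. Replace the date and time with your own. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Custom NGINX Server</title>
  </head>
  <body>
    <h1>Welcome to my custom NGINX web server!</h1>
    <p>This container was created on: <!-- your date and time here --></p>
  </body>
</html>
```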

Now we will go over to the Dockerfile. We will add the following:

  • Nginx looks in the /usr/share/nginx/html directory inside of the container for files to serve. For this reason, we will copy the index.html file to /usr/share/nginx/html.
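The Dockerfile itself appeared as a screenshot; given the bullet above, a sketch that matches it would be:

```dockerfile
# Dockerfile — a minimal sketch; the original was shown as an image.
# Start from the official NGINX base image.
FROM nginx:latest

# Overwrite the default page with our custom index.html, since NGINX
# serves files from /usr/share/nginx/html inside the container.
COPY index.html /usr/share/nginx/html/index.html
```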

We will run:

docker-compose down

This will remove the NGINX container. We will edit a few files before we start it up again. Cloud9 comes with some Docker images preinstalled. If you type:

docker images

You will see we have some unnecessary images for this project.

Let’s delete all the images to get a fresh start for our custom NGINX image. The following command will delete them all:

docker rmi -f $(docker images -q)

We have a clean slate. Let’s run docker-compose again, but before we do, edit the docker-compose file to look like this:
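The edited file was a screenshot in the original; a sketch, assuming the Dockerfile sits in the same directory as docker-compose.yml:

```yaml
# docker-compose.yml — "image" replaced with "build" so Compose
# builds from the local Dockerfile (a sketch; the original was an image).
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:80"
```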

The “image” key is now replaced with “build.” This acts as our docker build command, telling Compose to build from the Dockerfile. Make sure you are in the directory that contains all of your files before running docker-compose.

Run the following command to create and run the new docker image:

docker-compose up --build -d

We can take a look at the running containers and images:

Now, instead of the curl localhost command, let’s visit the container using our Cloud9 IP address. To do this, locate the share button in Cloud9 and copy the IP address:

In a new tab, type the address into the address bar, using the IP you copied and the port we mapped:

http://<your-cloud9-ip>:8080
This should take you to your container:

Let’s push our new image to AWS ECR. Find the Elastic Container Registry service in AWS. Create a repository. We will leave the visibility settings as private. Name the repository:

Leave the rest of the settings as is and create it. Once the repository is created, click on “View push commands.”

Back in Cloud9, if you don’t already have the AWS CLI installed, type the following:

pip3 install awscli

You may need to configure the AWS CLI as well with:

aws configure

Next, input the following from the push commands page, command #1 (substitute your own AWS account ID):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com

You should receive a “Login Succeeded” message:

Next, we will enter command #3. We can skip #2 since we already built our image:

docker tag docker_nginx:latest <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/<repository-name>:latest

Finally, we will push the image to AWS ECR:

docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/<repository-name>:latest

Your screen should look like this:

If we go back to the AWS ECR, click on your repository, and you should see your image:

Now that we have our container registered with AWS, why not push it to GitLab, where we can run the Docker image in a pipeline?

Log in to your GitLab account, or create one. Click on “New Project.” You might have to create a group before you can create a project; my group is called michael_docker. Name your project, and unselect “Initialize repository with a README.”

Before we do anything else, we are going to create an SSH key pair for our GitLab profile. To do this, in Cloud9, type the following command:

ssh-keygen -t rsa -b 2048 -C "<comment>"

Accept the suggested filename and directory after the output. Next, you will be prompted to enter a passphrase.

To view your public SSH key (id_rsa.pub, assuming you accepted the default filename), type:

cat /home/ubuntu/.ssh/id_rsa.pub

To view the private SSH key:

cat /home/ubuntu/.ssh/id_rsa

Go back to GitLab, locate your username icon in the upper right-hand corner of the page, and click on “Edit profile”. Next, click on “SSH Keys” in the left-hand column. Paste in the public SSH key you generated, keep the private key somewhere safe, and add the key.

Next, click on your new project, and go to the left-hand column. Hover over
“Packages & Registries” and click on “Container Registry.”

You can see some commands to log in to the registry. First, let’s test the SSH connection to GitLab from Cloud9:

ssh -T git@gitlab.com

Enter the passphrase you created with your SSH Keys. Your screen should look like this:

Now we will enter the first command on the container registry page:

docker login registry.gitlab.com

Enter your username as it appears in GitLab, then your GitLab password.

I was in the environment directory; make sure to switch back to the Docker folder if you aren’t already there, and enter the second command from the registry page (substituting your own group and project names):

docker build -t registry.gitlab.com/<group>/<project> .

If we run docker images, we will see all the images we’ve created so far:

Finally, we will push the image to our GitLab registry:

docker push registry.gitlab.com/<group>/<project>

If we head back to GitLab, and refresh the Container Registry, we will see our image is there:

Since we have come this far, let’s just get it over with and set up a CI/CD pipeline in GitLab.

According to GitLab:

GitLab CI/CD is a tool for software development using the continuous methodologies.

Pipelines are the top-level component of continuous integration, delivery, and deployment.

Here is the .gitlab-ci.yml file that you need to create the CI/CD Pipeline:
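The file appeared as a screenshot in the original; a minimal sketch that builds and pushes the image using GitLab's predefined CI variables (the stage and job names here are assumptions, not the author's exact pipeline) might look like:

```yaml
# .gitlab-ci.yml — a hypothetical sketch; the original file was shown
# only as an image. CI_REGISTRY* variables are predefined by GitLab CI.
stages:
  - build

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind          # Docker-in-Docker, so the job can run docker
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
```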

Back in Cloud9, create a new file under your Docker folder, called .gitlab-ci.yml. Paste the code above into your Cloud9 file:

Next, enter the global user commands from GitLab:
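These were shown as a screenshot in the original; the global user commands GitLab suggests are the standard git config pair (substitute your own name and email):

```shell
# Set the identity git attaches to your commits
# (placeholder values — use your GitLab name and email).
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```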

Next, enter the commands below:
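The exact commands were a screenshot in the original; the usual sequence GitLab suggests for pushing an existing folder looks like this (the remote URL is a placeholder for your own group and project):

```shell
# Initialize the repo, point it at your GitLab project (placeholder
# URL), and push everything — including .gitlab-ci.yml, which
# triggers the pipeline.
git init
git remote add origin git@gitlab.com:<group>/<project>.git
git add .
git commit -m "Initial commit"
git push -u origin main
```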

If we refresh our GitLab repository, we should see the pipeline being built, then ultimately passed!

If we open up the pipeline, you will see it building, until it succeeds.

You may remove your containers and images from Cloud9. Use the following commands to do so:

docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -q)

Congratulations. You created a custom Docker Image and pushed it to AWS ECR and GitLab. You also built a CI/CD pipeline in GitLab.
