How To Manage Multiple Docker Containers at Scale | by Matt Bentley | May, 2022

A guide for managing Docker containers at scale, including development, continuous integration, environment promotion, and DevSecOps


With the rise of containers and container orchestrators, Docker has become a must-have skill for all modern-day programmers. This article is for anyone who builds custom container images from their application code, whether you are using Kubernetes as a container orchestrator or bare metal servers.

Here I will be focusing on how lots of different container images can be managed throughout the whole development and release cycle. We will cover:

  • Building multiple images at once
  • Running a set of images for testing purposes
  • Executing commands on multiple images. I will show how this can be used for DevSecOps processes such as image scanning
  • Promoting images from one registry to another during the release process

I’ll assume that you have some knowledge of Docker and Docker Compose.

If you just want to jump straight to the good stuff, here is a reference GitHub repository for the techniques explained in this article.

Before we get started, here are some goals and principles this article is trying to meet:

Fully automated

Our processes for managing container images should be completely automated.


The beauty of containers is that they can run across lots of different hosting environments. The process for managing images should be applicable no matter what technologies you are using for CI/CD.

Ideally, we want to make as few changes as possible to our scripts and infrastructure/pipelines-as-code when we are adding new images/services to our solutions.

Build Once

One of the most important benefits of containers is that they run consistently across different hosting environments. We should take full advantage of this by building our images only once and promoting those images through each environment when we release. This ensures that the image deployed to production is the same image that has already been tested in all lower environments.

Container Image Promotion

A sample project is provided to help demonstrate some of the techniques in this article. The project consists of the following applications:

  • .NET Blazor Webassembly web application
  • .NET Worker agent console application
  • SQL Server database

The application consists of a simple Web UI for retrieving random weather forecasts from a SQL Server database. The agent service updates the forecasts every 10 seconds and performs the initial database schema migration when started.

A sample Azure DevOps pipeline is provided; however, these techniques can be used with any CI/CD process, as they are all command-line driven.

My team has found that a docker-compose.yml file is a perfect place to declare additions and changes for your images. It is well understood and can be used by many container management tools such as Docker Desktop, Docker Engine, and Podman.

Even with a fully automated CI/CD process in place, new services and their associated image names must be specified somewhere and your project’s docker-compose.yml is a pretty good place for that.

Docker Compose profiles are a relatively new addition, making it much easier to work with different configurations at each point in the development cycle. Running docker build, run and pull/push commands through Docker Compose is fairly well understood; however, a docker-compose.yml file and its associated profiles can also be used to run additional actions using custom bash scripts.

The sample project has the following profiles, which can be used to get different activities running quickly:

  • dev: Used for local development. Local resources such as a database or message bus can quickly be spun up.
  • test: Test all services for the project together.
  • ci: Used to build and push custom images from application code in the continuous integration process. This could be split into more granular profiles if you wanted to run the builds across different jobs in parallel.
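To make the profiles concrete, here is a minimal sketch of what such a compose file might look like. The service and image names are hypothetical stand-ins; the real file lives in the reference repository:

```yaml
services:
  web:
    image: myregistry.example.com/weather-web:latest   # hypothetical registry
    build: ./src/Web
    profiles: ["ci", "test"]
  agent:
    image: myregistry.example.com/weather-agent:latest
    build: ./src/Agent
    profiles: ["ci", "test"]
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest  # pulled, never built
    profiles: ["dev", "test"]
```

Note how the database appears only in the dev and test profiles: it is a stock image, so the ci profile skips it entirely.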

A profile can be used to build a selection of images for your project. The following command will build all of the services for the ci profile:

docker-compose --profile ci build

This can be used in your Continuous Integration (CI) process to build all of your images. As services are added to your docker-compose.yml file, they will automatically be picked up in the CI build without requiring any changes to your CI code.

I have found Docker Compose profiles useful for splitting long-running image builds from the rest of the services. Generally, web application image builds that pull in lots of JavaScript npm libraries can take a long time, so it is best to give them separate profiles that can run in parallel.

Building Profiles in Parallel in a Continuous Integration Process
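One way to sketch that parallel split, assuming the ci profile were divided into hypothetical ci-web and ci-services profiles. The commands are echoed (dry run) so the flow is visible without a Docker daemon; drop the run wrapper to execute for real in a CI job:

```shell
# Hypothetical split of the ci profile so the slow web image build
# runs alongside the rest as background jobs in the CI agent's shell.
run() { echo "+ $*"; }   # dry run; replace echo with "$@" to execute

run docker-compose --profile ci-web build &
run docker-compose --profile ci-services build &
wait   # the CI step finishes only after both builds have completed
```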

Profiles can also be used to run a selection of services for your project. The following command will run all of the services required to test the sample application locally using the test profile:

docker-compose --profile test up

The services can be stopped and containers removed by running:

docker-compose --profile test down

Local Development

For developing code, a different set of services may be needed. I often find it useful to have a dev profile for running local resources such as a database or message bus. The following command will run only the database for local development:

docker-compose --profile dev up

Data Persistence

The docker-compose.yml file provided creates a volume so that data in the database will persist when the database container is stopped and deleted. To stop the database and delete the data volume, the following can be used:

docker-compose --profile test down -v

Now that our services are already specified in our docker-compose.yml file, we can go even further by running custom actions against their associated images.

The script can be used to extract image names from a Docker Compose file based on a profile or image filters. The following command prints the image names from the ci profile:

./pipelines/scripts/ -p ci

Note: your docker-compose files must have Unix line endings to work with the provided bash scripts. Use VS Code or dos2unix to convert from Windows to Unix line endings if required. The scripts should be run from a bash terminal; if you are using Windows, try Git Bash.
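If dos2unix is not to hand, a one-line sed will also strip the carriage returns. The sketch below demonstrates it against a throwaway sample file rather than a real compose file:

```shell
# Strip Windows carriage returns so the bash scripts can parse the file;
# equivalent to dos2unix. /tmp/sample-compose.yml is a throwaway demo file.
printf 'services:\r\n  web:\r\n' > /tmp/sample-compose.yml
sed -i 's/\r$//' /tmp/sample-compose.yml
```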

The output from the above command, along with the additional options for filtering which images are extracted, is shown below:

The main logic of the script is shown below. Two arrays are created: one for the images and one for their associated profiles. A few different filters are then applied to the image names and profiles based on the parameters provided.

Main Logic from
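As a rough, hedged sketch of that logic (not the actual script): the function below parses a simply formatted compose file, assuming each service declares image: before profiles: and keeps its profile list on one line. The image names in the demo file are hypothetical:

```shell
# Sketch of the extraction logic: build parallel arrays of images and
# profiles, then print only the images matching the wanted profile.
extract_images() {
  local file="$1" wanted="$2" img="" imgs=() profs=() i
  while IFS= read -r line; do
    case "$line" in
      *"image:"*)    img="${line#*image: }" ;;                 # remember the image
      *"profiles:"*) imgs+=("$img"); profs+=("${line#*profiles: }") ;;
    esac
  done < "$file"
  for i in "${!imgs[@]}"; do
    # keep only images whose profile list mentions the wanted profile
    [[ "${profs[$i]}" == *"$wanted"* ]] && echo "${imgs[$i]}"
  done
}

# Demo against a minimal compose file (hypothetical image names):
cat > /tmp/demo-compose.yml <<'EOF'
services:
  web:
    image: registry.example.com/weather-web:latest
    profiles: ["ci", "test"]
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    profiles: ["dev", "test"]
EOF
extract_images /tmp/demo-compose.yml ci   # prints only the web image
```

The real script is more robust than this; the point is simply that a compose file doubles as a machine-readable inventory of your images.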

Now that we have a nice way of retrieving our images by a particular profile, we can chain this to additional scripts.

The previous script helps us with our first two goals; this next script allows us to build our images once and promote them through our different environments as we release.

The following process is performed before deploying code to each environment:

  • Pull images to promote from the previous environment
  • Use docker tag to change the registry name on the images to the promotion registry
  • Push the promoted images to the current environment registry
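The three steps above can be sketched for a single image as follows. The registry and image names are hypothetical, and the docker commands are echoed (dry run) so the flow is visible without a Docker daemon:

```shell
# Hedged sketch of the pull / tag / push promotion flow for one image.
SRC_REGISTRY="devregistry.example.com"    # assumed source registry
DST_REGISTRY="prodregistry.example.com"   # assumed target registry
IMAGE="weather-web"
TAG="1.0.0"

run() { echo "docker $*"; }   # swap for 'docker "$@"' to execute for real

run pull "$SRC_REGISTRY/$IMAGE:$TAG"
run tag  "$SRC_REGISTRY/$IMAGE:$TAG" "$DST_REGISTRY/$IMAGE:$TAG"
run push "$DST_REGISTRY/$IMAGE:$TAG"
```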

The promotion script uses the extraction script to gather the images required for promotion and then uses docker tag to change the registry name of the images locally. The following command will promote images from the ci profile with the 1.0.0 image tag, changing the registry name from the source registry to the target registry:

./pipelines/scripts/ -p ci -t 1.0.0 -r -u

This script can be used in a Continuous Deployment (CD) process, as shown below:

Promoting Images When Deploying From a Continuous Deployment Process

The main logic of the script is shown below. The images extracted from the docker-compose.yml file are looped over, and the registry name and tag are replaced with the required values.

Main Logic from
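A hedged sketch of that loop, using bash parameter expansion to rewrite the registry and tag on each image name (the names and registries are hypothetical, and docker commands are echoed rather than executed):

```shell
# Sketch of the promotion loop: rewrite registry and tag on each extracted
# image, then tag and push. Echo stands in for the real docker invocations.
src_registry="devregistry.example.com"
dst_registry="prodregistry.example.com"
new_tag="1.0.0"

images=(
  "$src_registry/weather-web:latest"
  "$src_registry/weather-agent:latest"
)

for image in "${images[@]}"; do
  name="${image#"$src_registry"/}"   # strip the source registry prefix
  name="${name%%:*}"                 # strip the old tag
  promoted="$dst_registry/$name:$new_tag"
  echo "docker tag $image $promoted"
  echo "docker push $promoted"
done
```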

Now that we have a nice way to extract groups of images for our project, we can run custom commands against them for processes such as DevSecOps. Generally, most security and automation tools are CLI-based, which makes it easy to chain them to the script.

The script runs a custom command against the images extracted from a docker-compose.yml file. The command you provide must contain @image; this placeholder is replaced with the name of each extracted image.

The following command shows an example of running a container scan using Dive on each of the images in the ci profile with the 1.0.0 tag. Dive is used to scan images for wasted space; however, it could be swapped for any other container scanning or automation tool:

./pipelines/scripts/ -r -p ci -t 1.0.0 -c "dive @image"

The output from Dive scans in a Continuous Integration process is shown below:

Running Image Scans From a Continuous Integration Process

The main logic of the script is shown below. The extracted images from the docker-compose.yml file are looped over, and the provided command is executed against each of them.

Main Logic from
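That substitution can be sketched in a few lines of bash, using global string replacement to swap @image for each image name (image names here are hypothetical, and the command is echoed instead of executed):

```shell
# Sketch of the exec logic: every occurrence of @image in the command
# template is replaced with an extracted image name.
template='dive @image'
images=(
  "registry.example.com/weather-web:1.0.0"
  "registry.example.com/weather-agent:1.0.0"
)

for img in "${images[@]}"; do
  cmd="${template//@image/$img}"   # bash global substring replacement
  echo "$cmd"                      # the real script would execute the command
done
```

Because the template is just a string, any CLI-based tool slots in: swap the template for a Trivy, Grype, or custom automation command and nothing else changes.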

The scripts provided can be used in any CI/CD process as they are command-line based. An example of an Azure DevOps CI/CD pipeline can be found in azure-pipelines.yml.

Sample Azure DevOps CI/CD Pipeline

The Build stage is responsible for building, scanning, and pushing the images. The images are originally pushed to a Development registry, and the following deployment artifact is produced:

Deployment Artifact

The provided scripts and docker-compose.yml file are used in the Deploy stages to promote and release images. If you are using a container orchestrator such as Kubernetes, then your deployment manifest files or Helm charts should also be added to your deployment artifact.
