Spring Boot — Continuous Deployment on Kubernetes With ArgoCD and GitHub Actions | by Zalán Toth | Jun, 2022

Deployment made easy

Nowadays DevOps, GitOps, and Continuous Deployment are hot topics. Sometimes they seem like magic, but most of it is actually quite simple, and everybody should adopt it. Automated pipelines give us safety and save lots of time. All the tools we use in this article are free.

In this article, we are gonna create a very basic pipeline using GitHub Actions, triggered by every push to the master branch. It will test, version, and build the project. Normally we would run the tests on the pull request open event so no untested code can be merged into master, but for the sake of simplicity I'll omit that in this article.

We are gonna set up ArgoCD in our Kubernetes cluster (I'm gonna use the one provided by Docker Desktop). ArgoCD will monitor our deployment GitHub repository and deploy every change into the cluster, using DockerHub as the source of the images.

Delivery Pipeline

First, we have to create two repositories on GitHub. The names are up to you.

I’m gonna call them continuous-delivery-application and continuous-delivery-manifests respectively.

The first one is a Spring Boot project with a REST endpoint and a Unit Test. It can be easily generated by Spring Initializr.

After unzipping the project, open it in your favorite IDE (for example IntelliJ).

Let’s create a REST endpoint that returns the list of User objects. We are gonna use Kotlin coroutines for the reactive endpoint.
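The original gist is not reproduced here, so here is a minimal, dependency-free sketch of the idea. The class and field names and the sample users are my assumptions, not the original code, and the Spring wiring is only indicated in a comment so the snippet stays self-contained:

```kotlin
// Hypothetical names -- the original gist used its own.
data class User(val id: Long, val name: String)

class UserService {
    // In the article's reactive version this would return a Flow<User>;
    // a plain List keeps the sketch dependency-free.
    fun findAll(): List<User> = listOf(
        User(1, "Ada"),
        User(2, "Grace"),
    )
}

// In the real project the endpoint would be wired with Spring's coRouter DSL,
// roughly:
//   coRouter {
//       GET("/api/v1/user") { ok().bodyValueAndAwait(service.findAll()) }
//   }

fun main() {
    println(UserService().findAll())
}
```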

The router emits two users and serializes them into JSON format. You can test the endpoint by calling the following URL:

http://localhost:8080/api/v1/user

Then add a simple unit test for the service:
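Since the gist is missing, here is a dependency-free stand-in for that test; in the real project you would use JUnit 5's @Test and assertEquals instead of check(), and the service shown here is a hypothetical placeholder:

```kotlin
// Hypothetical service under test (placeholder for the real @Service bean).
data class User(val id: Long, val name: String)

class UserService {
    fun findAll(): List<User> = listOf(User(1, "Ada"), User(2, "Grace"))
}

// In the project this would be a JUnit 5 test method annotated with @Test.
fun `findAll returns two users`() {
    val users = UserService().findAll()
    check(users.size == 2) { "expected two users, got ${users.size}" }
    check(users.all { it.name.isNotBlank() }) { "every user needs a name" }
}

fun main() {
    `findAll returns two users`()
    println("test passed")
}
```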

Now we can add the first CI step to our project. Let’s create a new directory in the root folder of the project.

The name has to be .github and in this folder, we have to create another directory called workflows.

This is the mandatory naming convention for Actions defined by GitHub.

In the workflows folder, we create a new file called push-to-master.yml. This is our deployment description manifest.

Every push to the master branch will trigger the workflow. The job runs the unit tests. The write permission is not necessary for this job yet, but it will be for the next one.

This job runs on Ubuntu and in the first step it checks out the project from the repository.

After this, it installs JDK 17 and finally calls Gradle's test task to execute the unit test we defined earlier.
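The workflow gist itself is not reproduced here; a sketch of what it might look like (the job name, step names, and action versions are my assumptions):

```yaml
# .github/workflows/push-to-master.yml (sketch)
name: push-to-master

on:
  push:
    branches:
      - master

permissions:
  contents: write   # not needed for testing, but required by the release job later

jobs:
  run_unit_test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Run unit tests
        run: ./gradlew test
```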

Before pushing the project, make gradlew executable using the following command:

git update-index --chmod=+x gradlew

As we are going to set up semantic-release, which uses Conventional Commits, let's start all commit messages with 'feat: ' followed by the commit message. I will explain why later.

After the project has been pushed to GitHub, we can check the pipeline on the repository's Actions page.

Successful unit test

Next, we are going to add the semantic-release plugin. This will automatically update the project version.

SemRel uses Conventional Commits tags to determine the next version. We use feat, which bumps the minor version, and fix, which bumps the patch version.

There are many other tags, please check the documentation.

First of all, we must create gradle.properties file in the root folder and add the following line:

version=0.0.1

And update the version in the build.gradle.kts file to:

version = project.findProperty("version")!!

SemRel is a Node.js tool, so we have to create another file called package.json and add the necessary semantic-release plugins to it.

The plugins will update the version in the gradle.properties file and commit it back to the master along with the automatically generated changelog.
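The package.json gist is missing here; a sketch of what such a configuration might look like. The exact plugin set and version ranges are my assumptions (in particular, gradle-semantic-release-plugin is one community plugin that bumps the version in gradle.properties; the original may have used a different mechanism):

```json
{
  "name": "continuous-delivery-application",
  "private": true,
  "devDependencies": {
    "semantic-release": "^19.0.0",
    "@semantic-release/changelog": "^6.0.0",
    "@semantic-release/git": "^10.0.0",
    "gradle-semantic-release-plugin": "^1.0.0"
  },
  "release": {
    "branches": ["master"],
    "plugins": [
      "@semantic-release/commit-analyzer",
      "@semantic-release/release-notes-generator",
      ["@semantic-release/changelog", { "changelogFile": "changelog.md" }],
      "gradle-semantic-release-plugin",
      ["@semantic-release/git", {
        "assets": ["gradle.properties", "changelog.md"]
      }]
    ]
  }
}
```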

Before we add the new job to the workflow let’s run the npm install command.

This will generate package-lock.json file.

Extend the push-to-master.yml manifest with the new job.

This job checks out the master branch and runs the semantic-release step. The git plugin needs write access to our repository.

Actions automatically injects a token into the workflow, so we can pick it up and add it as an environment variable using the secrets.GITHUB_TOKEN template variable.

The needs attribute ensures that the run_unit_test job finishes before this job starts. Without it, the two would run concurrently.
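The job described above might look roughly like this (job and step names and action versions are my assumptions, matching the hypothetical run_unit_test job name):

```yaml
  semantic_release:
    needs: run_unit_test      # wait for the tests before releasing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0      # semantic-release needs the full history and tags
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
      - run: npm ci
      - run: npx semantic-release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```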

Push the modified files and let the workflow run (do not forget the fix/feat prefix and colon in the commit message). When the pipeline has finished, we can pull the master branch and check the gradle.properties file as well as the changelog.md.

Now we have two jobs run
Auto changelog generation

We use Jib for building a Docker container from the application.
Put the plugin and its configuration into the build.gradle.kts file.

We use the Amazon Corretto Java 17 base image. For the image name prefix, use your own account name.

To push it, we have to provide the DockerHub credentials. We will inject them as environment variables using GitHub Actions secrets. The image tag gets its value from the project version, which is calculated by the semantic-release plugin.

The container exposes port 8080 for traffic and port 9000 for management, where the liveness and readiness probes live. The mainClass has to be the fully qualified name of the class containing the main function; in the case of Kotlin, we have to append Kt to it, as Kotlin generates the main class under that name. The jvmFlags setting helps the JVM make better use of the container's memory.
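The Jib configuration gist is missing; a sketch of what it might look like in build.gradle.kts. The account name, package name, plugin version, and JVM flag are my assumptions:

```kotlin
// build.gradle.kts (sketch; replace "myaccount" with your DockerHub account)
plugins {
    // ...existing Spring Boot and Kotlin plugins...
    id("com.google.cloud.tools.jib") version "3.2.1"
}

jib {
    from {
        image = "amazoncorretto:17"                       // base image
    }
    to {
        image = "myaccount/continuous-delivery-application"
        tags = setOf(project.version.toString())          // tag = semantic version
        auth {
            // Injected by the pipeline via GitHub Actions secrets
            username = System.getenv("DOCKERHUB_USERNAME")
            password = System.getenv("DOCKERHUB_PASSWORD")
        }
    }
    container {
        ports = listOf("8080", "9000")                    // traffic + management
        mainClass = "com.example.demo.DemoApplicationKt"  // note the Kt suffix
        jvmFlags = listOf("-XX:MaxRAMPercentage=75.0")    // respect container memory
    }
}
```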

Next, we have to modify the application.yml or application.properties file to enable graceful shutdown and the health probes on the management port. This will be necessary on Kubernetes.
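The corresponding gist is not shown; the relevant Spring Boot properties would look roughly like this in application.yml:

```yaml
server:
  shutdown: graceful          # finish in-flight requests before stopping

management:
  server:
    port: 9000                # separate management port for the probes
  endpoint:
    health:
      probes:
        enabled: true         # expose /actuator/health/liveness and /readiness
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true
```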

We can put the release job into the workflow file:
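A sketch of that job (job names, action versions, and secret names are my assumptions; the secret names must match the GitHub Action Secrets created below):

```yaml
  release:
    needs: semantic_release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: master         # pick up the version bump committed by semantic-release
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Build and push image with Jib
        run: ./gradlew jib
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
```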

Semantic Release step

As you can see, it is similar to the test job. Jib will create the image and push it to DockerHub, but as I said, we have to provide our credentials. You can do that on the Settings tab.

Select the Secrets/Actions from the left menu and use the New repository secret button.

After it is done we can push the modifications. When the pipeline finishes the correctly versioned image should be on DockerHub.

The second project holds the Kustomize manifests for the Kubernetes deployment. Kustomize is a native configuration management tool for Kubernetes. Its purpose is similar to Helm's, but it is template-free, and kubectl includes it by default. It can group a bunch of resources and deploy them together.

Kustomize defines base templates and environment patches. Let's create two folders in this project's root: the first one is called base and the second one overlays.

We are gonna create a simple Deployment for the application and a NodePort service for accessibility and load balancing.

Add deployment.yml to the base directory.

Deployment

This will create a Pod from the image. We also set the health probes and the resource limits.
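The deployment gist is missing; a sketch of what base/deployment.yml might contain. The resource names, image prefix, probe paths, and resource limits are my assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: continuous-delivery-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: continuous-delivery-application
  template:
    metadata:
      labels:
        app: continuous-delivery-application
    spec:
      containers:
        - name: app
          image: myaccount/continuous-delivery-application:0.0.1
          ports:
            - containerPort: 8080   # traffic
            - containerPort: 9000   # management
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 9000
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 9000
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```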

We create node-port.yaml with the following content, so the application will be reachable by other Pods within the cluster at this address; it also binds the service to port 30001 on localhost. In a real application you should create a ClusterIP service and an Ingress controller instead.
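The service gist is not reproduced; a sketch of node-port.yaml matching the hypothetical labels used above (names are my assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: continuous-delivery-application
spec:
  type: NodePort
  selector:
    app: continuous-delivery-application
  ports:
    - port: 8080          # in-cluster port
      targetPort: 8080    # container port
      nodePort: 30001     # exposed on localhost:30001 by Docker Desktop
```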

Nodeport

Kustomize uses kustomization.yaml for its operations. We add the resources and also the base image. We are going to modify the newTag parameter from our pipeline; this way we can update the image version on every master push.

base/kustomization.yaml
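A sketch of base/kustomization.yaml (the image name is my assumption and must match the one configured in Jib):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - node-port.yaml
images:
  - name: myaccount/continuous-delivery-application
    newTag: 0.0.1   # updated by the pipeline via `kustomize edit set image`
```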

Next create a new directory called production under the overlays folder and add another kustomization.yaml file within it. This will be our environment.
We won't use any patches right now, but we add a common label to every manifest and link the base kustomization in this file.

overlays/production/kustomization.yaml
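A sketch of overlays/production/kustomization.yaml (the label key/value is my assumption):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  env: production   # applied to every manifest in this overlay
resources:
  - ../../base      # pull in the base deployment and service
```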

Now we can push it into the second repository.

After the repository has been pushed, go back to the pipeline file and define the last job.

First, it checks out the application project's master branch and reads the current version from the gradle.properties file. We set it as an environment variable by appending it to the file GitHub Actions exposes via $GITHUB_ENV.

In the following step, it checks out the second project. The pipeline needs our personal access token, because by default Actions only has permission for the current repository. I'm gonna show how you can generate one in the next section.

In the last step, we use the kustomize command to update the image version, then commit and push the change back to the repository.
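Putting those three steps together, the job might look roughly like this (job names, the PAT secret name, repository owner, and git identity are my assumptions; kustomize comes preinstalled on GitHub-hosted Ubuntu runners):

```yaml
  update_manifests:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: master
      - name: Read current version
        run: echo "VERSION=$(grep '^version=' gradle.properties | cut -d'=' -f2)" >> $GITHUB_ENV
      - uses: actions/checkout@v3
        with:
          repository: myaccount/continuous-delivery-manifests
          token: ${{ secrets.PAT }}   # default token cannot push to another repo
          path: manifests
      - name: Bump image tag and push
        run: |
          cd manifests/base
          kustomize edit set image myaccount/continuous-delivery-application:${{ env.VERSION }}
          cd ..
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git commit -am "chore: bump image to ${{ env.VERSION }}"
          git push
```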

After a push (with a feat or fix prefix) and a finished pipeline run, the project version should be the same in both repositories.

All steps are done

We can generate PAT under the profile settings (not project settings). Select the Developer Settings from the menu and click on the Personal Access Tokens. You can generate a new token using the button.

As we need to push to this repository from the pipeline select the repo checkbox and add a note.

Click on the generate button on the bottom of the page.
Copy and save the token somewhere, as you cannot retrieve it again. Then create a new repository secret as we did before, using this value; the name has to be the same one we used in the pipeline step (PAT).

Setup ArgoCD

I assume you have a running Kubernetes cluster (Docker Desktop with Kubernetes enabled is the easiest way to get one).

To install ArgoCD, let's run the following commands. The first one creates a new namespace, and the second one deploys Argo.

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

I’m not gonna create Ingress controller now so just use the following command to forward Argo UI from the cluster to your local machine.

kubectl port-forward svc/argocd-server -n argocd 8011:443

Go to https://localhost:8011 and log in. The default username is admin.
To retrieve the default password use the following command in the terminal:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Before we create the application, we have to add our repository to Argo. Let's click on the settings (cog) button and choose the Repositories menu item.
I'm gonna select the connect repo using SSH option, but you can also log in to the repo through HTTPS or an app.

Add the name, repository url and your private ssh key then click on the connect button.

Connect Repository

Go back to the home page and let’s click on the new app button.

Fill in the application name field, you can use any string there. The project is the default.

Select automatic as sync policy and add the git repository’s URL. The path is overlays/production.

Next, we have to set the cluster URL, which is the local cluster's address. In the case of Docker Desktop, it is https://kubernetes.default.svc, and we are gonna deploy the application into the default namespace.

Click on the create button.

On the home page, you will see the application. Wait until it becomes healthy.

When the application is up and running, open http://localhost:30001/api/v1/user and check the result.

Clicking on the Application’s title we can check the state of the deployment.

Application’s state

Now let's change the names in the application project and push the change to the master branch. After the build has finished, ArgoCD will pick up the change from the manifest repository and deploy the new application version.

Sometimes it takes up to 5 minutes depending on the build time so be patient (the default sync period is 3 minutes).

Deploy new version automatically

There are many steps we can add to this pipeline like static code analysis, uploading test coverage to Codecov, etc.

The source code is available on GitHub:
