Build a Todo App Using a Microservices Architecture and Use Auth Service to Protect Its Routes

Use auth service to verify the JWT token and protect your other microservices (Lab01 — part 2)


In part one I showed how to build an auth microservice using Django, protecting its routes with a decorator that verifies whether the user is authorized to interact with its views. If you missed it, take a look here.

Now it’s finally time to move on with the implementation of our to-do app development.

As mentioned in the first part, here I will show how I developed the todo microservice and how I used the auth service to protect its routes.

Lastly, I will show how I connected all microservices using docker-compose.

You can find the GitHub project here.

The articles are organized as follows:

Part 1:

  • The big picture
  • Auth service

Part 2:

  • To-do service
  • Run all services with docker-compose

Follow me along and let’s dive in for this final part!

Sketch made with Excalidraw

As the sketch above shows, we want the auth service to tell us whether the current user is authorized (200) to use the to-do service or not (401).

To do so, we want each request to be sent with the Authorization field in the header, carrying the Bearer token generated by the auth service. Otherwise, the request will be rejected with a 401 status code.

The service was implemented with Flask, a lightweight Python web framework that lets you develop web applications easily.

Let’s have a look at the project structure.

To-do service project structure

As with the authorization service, I created bash scripts and a Makefile to automate tedious processes.
You can also use them to speed up the whole process and be ready to play.

.flaskenv lets you define a set of environment variables that Flask picks up when you run the application.

Additionally, it allows you to categorize the type of environment between development, testing, and production. Each type will trigger a different setup of the application based on the src/config.py file as I will show you further.

In my case, I'm using a SQLite DB in the development environment and switching to a PostgreSQL DB in production mode with Docker.

Flask environment variables
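As a reference, a .flaskenv for this setup could look roughly like the following; the exact values are illustrative, not copied from the repository.

# .flaskenv — read automatically by `flask run` (via python-dotenv)
FLASK_APP=runner.py
FLASK_ENV=development
FLASK_RUN_PORT=5000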

The Python file runner.py will be launched when you start the application with the command flask run (Flask determines which file to execute from the FLASK_APP parameter) or with python runner.py.
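Assuming the app is built by a create_app() factory inside src/app.py (a common pattern, sketched later in this article), a minimal runner.py could be as short as this:

# runner.py — entry point referenced by FLASK_APP
from src.app import create_app

app = create_app()

if __name__ == '__main__':
    # used when launching with `python runner.py`; `flask run` imports `app` instead
    app.run(host='0.0.0.0', port=5000)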

Now, let's look inside the src folder, where the core logic and interactions that describe our to-do service live, and go through the most interesting files.

Files and folders inside the src folder

app.py — It contains all configurations, all extensions, and all routes registered inside the app.

Compared to many other app.py examples you can find, I think this structure is more flexible and organized: everything has its place, whether it is an extension or a new route.

As you will see, the same approach keeps core actions (services) separate from route definitions (blueprints). This lets me keep all routes small, clean, and maintainable.
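To give an idea of that structure, a stripped-down app.py following this approach could look like the sketch below. The blueprint name todo_bp and the way the config class is resolved are assumptions made for illustration, not the exact code from the repository.

# src/app.py — application factory: config, extensions, blueprints
import os

from flask import Flask

from src.extensions import db
from src.blueprints.todo import todo_bp


def create_app():
    app = Flask(__name__)

    # load the config class that matches the current environment,
    # e.g. src.config.DevelopmentConfig when FLASK_ENV=development
    env = os.getenv('FLASK_ENV', 'development')
    app.config.from_object(f'src.config.{env.capitalize()}Config')

    # extensions
    db.init_app(app)

    # routes
    app.register_blueprint(todo_bp, url_prefix='/api/v1/todo')

    return app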

If you are interested in a detailed look at how I usually structure my Flask projects, let me know in the comment section.

extensions.py

In this file, I keep all the extensions I need in my Flask app. This way I can easily import the DB in my blueprints or service files, for example, without going through app.py. I believe it's easier and clearer.
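For reference, such an extensions.py can be as small as this, here assuming Flask-SQLAlchemy (which matches the SQLite/PostgreSQL setup described above):

# src/extensions.py — single place where Flask extensions are instantiated
from flask_sqlalchemy import SQLAlchemy

# instantiated here and bound to the app later with db.init_app(app),
# so blueprints and services can import it without touching app.py
db = SQLAlchemy()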

config.py

Here, we can find the configurations I discussed before. They are dynamically called based on the environment.

As we can see, when we run the app in the development environment, a local todo.db will be created, and the same goes for testing. Things change when the app runs in the production environment (our Docker container in this case), where a connection with a PostgreSQL database is established instead.
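A sketch of what such a config.py could look like, assuming Flask-SQLAlchemy and the kind of PostgreSQL environment variables passed in by docker-compose (the variable names here are assumptions, not the repository's exact ones):

# src/config.py — one config class per environment
import os


class Config:
    SQLALCHEMY_TRACK_MODIFICATIONS = False


class DevelopmentConfig(Config):
    # a local todo.db file is created on first run
    SQLALCHEMY_DATABASE_URI = 'sqlite:///todo.db'


class TestingConfig(Config):
    TESTING = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///test.db'


class ProductionConfig(Config):
    # PostgreSQL connection built from variables passed by docker-compose
    SQLALCHEMY_DATABASE_URI = (
        f"postgresql://{os.getenv('POSTGRES_USER')}:{os.getenv('POSTGRES_PASSWORD')}"
        f"@{os.getenv('POSTGRES_HOST', 'todo_db')}:{os.getenv('POSTGRES_PORT', '5432')}"
        f"/{os.getenv('POSTGRES_DB')}"
    )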

decorators.py

The decorators.py file is where the magic happens!

As with the auth service, each request is protected by the @is_authorized decorator, which is executed before the view handles the request. It checks whether the Authorization field is present in the request header.

If so, an Auth class that communicates with the auth service verifies that the user is authorized based on the Bearer token passed. Otherwise, a 401 status code is returned and no action is executed.
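In essence, the decorator boils down to something like the sketch below; the way it asks the Auth class for verification is simplified here, and the real class may expose a different method for that check.

# src/decorators.py — reject any request the auth service does not approve
from functools import wraps

from flask import request, abort

from src.service.auth_service import Auth


def is_authorized(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # reject immediately if the Authorization header is missing
        if 'Authorization' not in request.headers:
            abort(401)

        # ask the auth service whether the Bearer token identifies a valid user
        if Auth().get_user() is None:
            abort(401)

        return func(*args, **kwargs)

    return wrapper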

If we try to call an API without passing the Bearer token, or with an invalid one, the auth service will reject the request as unauthorized.

curl --location --request GET 'http://localhost:5000/api/v1/todo/?completed=false' \
  --header 'Authorization: Bearer mynotvalidtoken'

<!doctype html>
<html lang=en>
<title>401 Unauthorized</title>
<h1>Unauthorized</h1>
<p>The server could not verify that you are authorized to access the URL requested. You either supplied the wrong credentials (e.g. a bad password), or your browser doesn't understand how to supply the credentials required.</p>

So let's see what the Auth class inside service/auth_service.py looks like.

The Auth class wraps the necessary APIs available from the auth service to verify the token passed and returns the user_id of the user that sends the request.

The __init__ method maps the hostname and port of the auth service, allowing the todo service to communicate with it internally when both are running in Docker containers.

def __init__(self):
    self.AUTH_SERVICE_NAME = os.getenv('AUTH_SERVICE_NAME', 'localhost')
    self.AUTH_SERVICE_PORT = os.getenv('AUTH_SERVICE_PORT', '8000')

The get_user() method allows us to retrieve the user_id based on the Bearer token inside the request. The user_id is used as a field in our model to map each user to their to-dos.
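Put together, the class looks roughly like the sketch below. The verification endpoint path and the response shape are illustrative assumptions; the real ones are defined by the auth service built in part 1.

# src/service/auth_service.py — thin client around the auth service
import os

import requests
from flask import request


class Auth:
    def __init__(self):
        self.AUTH_SERVICE_NAME = os.getenv('AUTH_SERVICE_NAME', 'localhost')
        self.AUTH_SERVICE_PORT = os.getenv('AUTH_SERVICE_PORT', '8000')

    def get_user(self):
        # forward the caller's Bearer token to the auth service;
        # the endpoint path and JSON payload here are illustrative
        response = requests.get(
            f'http://{self.AUTH_SERVICE_NAME}:{self.AUTH_SERVICE_PORT}/api/v1/auth/user',
            headers={'Authorization': request.headers.get('Authorization', '')},
        )
        if response.status_code != 200:
            return None
        return response.json().get('user_id')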

blueprints/todo.py

Here, we can find all the possible interactions we can do with the to-do service:

  • Get all the user’s to-dos, optionally filtered by completion status
  • Create a new task
  • Set a task as completed
  • Update a task’s text
  • Delete a task

All routes are protected by the @is_authorized decorator, and for each request the current user is retrieved before executing any action.
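As an example of this pattern, a route for listing the current user's to-dos could look roughly like this; the Todo model and its to_dict() helper are simplified assumptions for illustration.

# src/blueprints/todo.py — routes stay thin, protected by @is_authorized
from flask import Blueprint, jsonify, request

from src.decorators import is_authorized
from src.service.auth_service import Auth
from src.models import Todo  # illustrative model

todo_bp = Blueprint('todo', __name__)


@todo_bp.route('/', methods=['GET'])
@is_authorized
def get_todos():
    # the decorator has already verified the token; now resolve the user
    user_id = Auth().get_user()

    query = Todo.query.filter_by(user_id=user_id)

    # optional ?completed=true/false filter
    completed = request.args.get('completed')
    if completed is not None:
        query = query.filter_by(completed=completed.lower() == 'true')

    return jsonify([todo.to_dict() for todo in query.all()])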

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

That’s how I defined the Dockerfile:

As for the auth service in part 1, the base image takes the script entrypoint.sh as its entry point, which is executed first.

When you start the docker container, it will check that the connection with PostgreSQL DB is established before moving on with other instructions.

Then the base image is used to generate the dev image. When the container runs, the command python runner.py is executed and the Flask app listens on 0.0.0.0:5000.
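A sketch of a Dockerfile along those lines follows; the Python version, stage names, and file paths are indicative rather than copied from the repository.

# base stage: dependencies + entrypoint that waits for PostgreSQL
FROM python:3.10-slim AS base

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# entrypoint.sh checks the PostgreSQL connection before anything else
ENTRYPOINT ["./entrypoint.sh"]

# dev stage: actually runs the Flask app on 0.0.0.0:5000
FROM base AS dev

EXPOSE 5000

CMD ["python", "runner.py"]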

Now it’s time to launch the service! I prepared a Makefile to automate the boring stuff. If you have downloaded the code you can simply type the command make <target> and enjoy!

docker-build: it will create a docker image with the name msalab01/todo:v1.

docker-run: it will start the docker container named lab01_todo. If you need to use a different port, you have to specify it in the make command with the PORT variable; otherwise, 5000 will be used.

local-build: it will launch bootstrap.sh, which automates all the initial setup, like creating a virtual environment and installing dependencies.

local-run: it will launch start_local_server.sh, a bash script that executes all basic instructions to run the server locally.
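Put together, those targets could look roughly like this (remember that Makefile recipes must be indented with tabs; the image and container names follow the ones mentioned above):

PORT ?= 5000

docker-build:
	docker build -t msalab01/todo:v1 .

docker-run:
	docker run -d --name lab01_todo -p $(PORT):5000 msalab01/todo:v1

local-build:
	./bootstrap.sh

local-run:
	./start_local_server.sh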

Note: If you decide to run the services as independent Docker containers, they will not be able to communicate with each other unless you put them on the same network. My suggestion, if you want to test them before moving to docker-compose, is to run both locally.

Finally, it’s time to put everything together!

To run all services together and let them communicate with each other, we need to write down a docker-compose file.

Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down.

Let’s see what our file looks like.

Databases: each of them has its own volume and its own network to keep it isolated; only services that share the same network can communicate with it.

auth service: it gets the image mslab01/auth:v1 previously built. Additionally, since we are using a PostgreSQL DB, we set POSTGRES_ENABLED=1, and a connection to the DB is established by passing the same attributes used for creating auth_db. This is possible because auth and auth_db share the same network, backend_auth_db. Furthermore, to allow communication with the frontend service, we need to put the service under the frontend network as well.

todo service: it gets the image mslab01/todo:v1 previously built. The same applies to this service and its database (todo_db). In addition, we pass the AUTH_SERVICE_NAME and AUTH_SERVICE_PORT of the auth service to allow internal communication between containers through the container ports. In this case, the auth and todo services share the same frontend network.

webapp: it takes the web app built in Vue3. I will not show how it was created because it is out of scope; if you are interested, you can see the code on GitHub. You can choose your favorite framework and call the APIs using Axios, Fetch, or Ajax.
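To make the wiring concrete, here is a trimmed-down sketch of such a docker-compose.yml; only the todo side is shown in full, and the credentials, network names, and image tags are placeholders following the description above rather than the repository's exact file.

version: "3.8"

services:
  todo_db:
    image: postgres:14
    environment:
      POSTGRES_USER: todo
      POSTGRES_PASSWORD: todo
      POSTGRES_DB: todo
    volumes:
      - todo_db_data:/var/lib/postgresql/data
    networks:
      - backend_todo_db

  todo:
    image: mslab01/todo:v1
    environment:
      FLASK_ENV: production
      POSTGRES_ENABLED: 1
      POSTGRES_HOST: todo_db
      POSTGRES_USER: todo
      POSTGRES_PASSWORD: todo
      POSTGRES_DB: todo
      # how the todo container reaches the auth container internally
      AUTH_SERVICE_NAME: auth
      AUTH_SERVICE_PORT: 8000
    depends_on:
      - todo_db
    networks:
      - backend_todo_db
      - frontend

  # auth, auth_db and webapp follow the same pattern (see the repository)

volumes:
  todo_db_data:

networks:
  backend_todo_db:
  frontend: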

This is how networks inside the docker-compose.yml are shared — Sketch made by me with Excalidraw

Ready… Go!

The time we’ve been waiting for has finally come.

To start the service, in your terminal under the root project (or where you create your final docker-compose.yml) type the command:

docker-compose up -d

Go to localhost (or localhost:80), create a new account if you have not already done so, and enjoy your success!

TO-DO app homepage after you logged in

Congrats! Now you know how to build a to-do application using a microservices architecture. We also saw how to protect different services through an authorization service, to be sure that only authorized users can interact with the to-do creation service.

I am glad I’ve shared this experiment with you and I hope you found it useful for your next project!

Hope to see you at the next workshop!
Follow me to make sure you don’t miss anything.

See ya!
