Should You Dockerize Your App?

This article was written to complete the PPL 2021 Individual Review competencies.

Docker is one of the most popular tools for deploying an app. Docker isolates your app's environment, including the libraries, packages, and any other dependencies it needs to run, and it provides the portability to run your app on any Linux machine regardless of that machine's configuration. Despite its portability and ease of use, I chose not to use Docker for my software project.

What is Docker?

A virtual machine (VM) simulates a complete computer system: every VM has its own CPU, memory, storage, operating system (OS), and every other component a computer system needs to work. Because of this, an app running in a VM can't share its resources with other apps, other VM instances, or the host engine, even when the VM has unused resources.

Docker creates a container for each app. A container is a loosely isolated environment that contains the dependencies an app needs to run. A container is only isolated at the process level; the resources used to run it are shared with the host OS. Because of this, Docker minimizes the amount of unused resources locked up in the system.

How Docker Works

Docker uses a client-server architecture. The Docker client acts as the client and the Docker daemon acts as the server. The Docker client sends Docker commands, which the Docker daemon executes.

There are three main Docker commands: build, pull, and run. Build builds a Docker image; a Docker image is a template for creating a Docker container. Pull downloads a Docker image from a Docker registry. Run creates a Docker container from an image and starts it; even if your machine doesn't have the requested image, the command simply pulls it from the registry first.
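In practice, the three commands look like this (the image name myapp is a placeholder, and running them requires a local Docker daemon):

```shell
# build an image named "myapp" from the Dockerfile in the current directory
docker build -t myapp:latest .

# pull an image from a registry (Docker Hub by default)
docker pull python:3.8.3-alpine

# create and start a container from the image;
# if the image is missing locally, Docker pulls it first
docker run --rm -p 8000:8000 myapp:latest
```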

Docker Implementation Example

Building Image

# pull official base image
FROM python:3.8.3-alpine

# create directory for the app user
RUN mkdir -p /home/app

# create the app user
RUN addgroup -S app && adduser -S app -G app

# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir -p $APP_HOME/staticfiles
RUN mkdir -p $APP_HOME/mediafiles
WORKDIR $APP_HOME

# install build and runtime dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev linux-headers g++
RUN pip install --upgrade pip

# copy project (including requirements.txt and the entrypoint script)
COPY . $APP_HOME
RUN pip install -r requirements.txt

# chown all the files to the app user
RUN chown -R app:app $APP_HOME

# change to the app user
USER app

# run the entrypoint script (the script name is assumed here)
ENTRYPOINT ["/home/app/web/entrypoint.sh"]

The Dockerfile above builds a Django app that can store static files and media files and also communicate with a PostgreSQL database. The entrypoint is an executable script that checks whether the database used by the Django app is ready.

An example of the script:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $DB_HOST $DB_PORT; do
        sleep 0.1
    done

    echo "PostgreSQL started"
fi

python manage.py migrate

exec "$@"

We're still not done yet. Our app uses Nginx as a proxy server and as a static and media file server. Why do we need Nginx for this app? Because Django isn't optimized to serve static and media files, while Nginx is. So we need an Nginx container for our app to run. Here is the Dockerfile for Nginx.

FROM nginx:1.19.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

The Nginx configuration I would use with Docker would look something like this.

upstream bisago_be {
    server web:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://bisago_be;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /home/app/web/staticfiles/;
    }

    location /mediafiles/ {
        alias /home/app/web/mediafiles/;
    }

}

Now that all of the Dockerfiles are ready, we can create the containers for the app to run. But there is a problem: how are we going to manage all of these containers and integrate them into one network?


Docker Compose answers this question. For our app we're going to create three services, each running one container: web for the Django app, nginx for the Nginx server, and db for PostgreSQL.
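A docker-compose.yml for these three services might look like the following sketch; the image versions, gunicorn command, env file name, and volume paths are assumptions:

```yaml
version: '3.7'

services:
  web:
    build: .                # Dockerfile path for the Django image
    command: gunicorn bisago.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000                # visible only to other containers on the network
    env_file:
      - .env
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - .env
  nginx:
    build: ./nginx          # Dockerfile path for the Nginx image
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 80:80               # forward host port 80 to container port 80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:
```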

As you can see from the script above, to build a custom image with docker-compose we can use the build tag to specify the path of the Dockerfile for the service. The ports tag configures port forwarding from a host port to a container port. The expose tag exposes a container's ports to the other containers on the network. The command tag configures what command the container runs after it starts.

Volumes are the way to persist data generated by a Docker container. Why do we need volumes? Because containers are stateless, they can't persist any data they generate; it is gone when the container shuts down. To create volumes with docker-compose, add a volumes tag to the service and specify the name of the volume and its folder path inside the container.

Build and Run

To build the images, use the command below.

docker-compose build

To run the containers, use the command below (assuming the current directory contains docker-compose.yml).

docker-compose up -d

With this docker-compose.yml configuration, the app should be available at http://localhost:80/.

Why I didn't use Docker for my project

Python has great dependency isolation.

Python has a great tool for isolating the dependencies of a particular project: the virtual environment. You can create an environment specific to your app project and install the dependencies you need into it, without worrying about dependency conflicts in your global environment.
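Creating and using a virtual environment takes only a few commands (the requirements.txt name is the usual convention):

```shell
# create an isolated environment in the ./venv directory
python3 -m venv venv

# activate it (on Windows: venv\Scripts\activate)
source venv/bin/activate

# install the project's dependencies into this environment only
pip install -r requirements.txt
```

Once the environment is deactivated or deleted, the global Python installation is untouched.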

The project stores valuable data.

Because containers are stateless, we need to store database records in Docker volumes. This can become quite a problem, because you need a daily backup of the database records, and that is harder than using a local database server, adding extra time to project maintenance. The database is a critical service, so we need to remove unnecessary risks.

There is an alternative that keeps Docker while using a local database server: you can allow your local database server to listen on any IP address and restrict the project's database to connections from Docker bridge addresses (click this link on how to do it), but this may raise a security concern, because we are allowing any IP address to connect to the local database server.
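For PostgreSQL, that restriction is usually expressed in the server's configuration files; a sketch, assuming the default Docker bridge subnet and hypothetical database and user names:

```
# postgresql.conf: listen on all interfaces, not only localhost
listen_addresses = '*'

# pg_hba.conf: only allow this project's database/user pair
# to connect from the default Docker bridge subnet
host    bisago_db    bisago_user    172.17.0.0/16    md5
```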

The static and media files have a similar problem to the database: it is easier to back up media and static files from a non-dockerized app than from a dockerized one.


I hope this article helps or gives you some insight for your next work or project.

