Docker is a containerization tool used for spinning up isolated, reproducible application environments. This piece details how to containerize a Django project, Postgres, and Redis for local development, and how to deliver the stack to the cloud via Docker Compose and Docker Machine.
In the end, the stack will include a separate container for each service:
- 1 web/Django container
- 1 nginx container
- 1 Postgres container
- 1 Redis container
- 1 data container
Updates:
- 11/10/2017: Added named data volumes to the Postgres and Redis containers.
- 11/13/2017: Added Docker Toolbox and updated to the latest versions of Docker - Docker client (17.09.0-ce, build afdb6d4), Docker Compose (v1.16.1, build 6d1ac21), and Docker Machine (v0.12.2, build 9371605).
Local Setup
Along with Docker (v17.09.0) we will be using -
- Docker Compose for orchestrating a multi-container application into a single app, and
- Docker Machine for creating Docker hosts both locally and in the cloud.
If you're running either Mac OS X or Windows, then download and install the Docker Toolbox to get all the necessary tools. Otherwise follow the directions here and here to install Docker Compose and Machine, respectively.
Once done, test out the installs:
$ docker-machine version
docker-machine version 0.12.2, build 9371605
$ docker-compose version
docker-compose version 1.16.1, build 6d1ac21
docker-py version: 2.5.1
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j 26 Sep 2016
Next, clone the project from the repository or create your own project based on the project structure found in the repo:
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── sites-enabled
│       └── django_project
├── production.yml
└── web
    ├── Dockerfile
    ├── django_docker
    │   ├── __init__.py
    │   ├── apps
    │   │   ├── __init__.py
    │   │   └── todo
    │   │       ├── __init__.py
    │   │       ├── admin.py
    │   │       ├── models.py
    │   │       ├── templates
    │   │       │   ├── _base.html
    │   │       │   └── home.html
    │   │       ├── tests.py
    │   │       ├── urls.py
    │   │       └── views.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── manage.py
    ├── requirements.txt
    └── static
        └── main.css
We're now ready to get the containers up and running...
Docker Machine
To start Docker Machine, simply navigate to the project root and then run:
$ docker-machine create -d virtualbox dev
Running pre-create checks...
Creating machine...
(dev) Creating VirtualBox VM...
(dev) Creating SSH key...
(dev) Starting the VM...
(dev) Check network to re-create if needed...
(dev) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine
running on this virtual machine, run: docker-machine env dev
The create command set up a new "Machine" (called dev) for Docker development. In essence, it started a VM with the Docker client running. Now just point Docker at the dev machine:
$ eval $(docker-machine env dev)
Run the following command to view the currently running Machines:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dev * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
Next, let's fire up the containers with Docker Compose and get Django, Postgres, and Redis up and running.
Docker Compose
Let's take a look at the docker-compose.yml file:
web:
restart: always
build: ./web
expose:
- "8000"
links:
- postgres:postgres
- redis:redis
volumes:
- /usr/src/app
- /usr/src/app/static
env_file: .env
command: /usr/local/bin/gunicorn django_docker.wsgi:application -w 2 -b :8000
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
links:
- web:web
postgres:
restart: always
image: postgres:latest
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
redis:
restart: always
image: redis:latest
ports:
- "6379:6379"
volumes:
- redisdata:/data
Here, we're defining four services - web, nginx, postgres, and redis.
- First, the web service is built via the instructions in the Dockerfile within the "web" directory - where the Python environment is set up, the requirements are installed, and the Django application is fired up on port 8000. That port is then forwarded to port 80 on the host environment - e.g., the Docker Machine. This service also adds environment variables to the container that are defined in the .env file.
- The nginx service acts as a reverse proxy, forwarding requests either to Django or to the static file directory.
- Next, the postgres service is built from the official PostgreSQL image on Docker Hub, which installs Postgres and runs the server on the default port, 5432. Did you notice the data volume? This helps ensure that the data persists even if the Postgres container is deleted.
- Likewise, the redis service uses the official Redis image to install Redis, and then the service runs on port 6379.
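For reference, the web service reads its environment variables from a .env file in the project root. A minimal sketch of what that file might contain - the variable names here are placeholders; match them to whatever settings.py actually reads:

```
DEBUG=True
SECRET_KEY=changeme
DB_NAME=postgres
DB_USER=postgres
DB_PASS=postgres
```

Keep this file out of version control, since it holds secrets.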
Now, to get the containers running, build the images and then start the services:
$ docker-compose build
$ docker-compose up -d
This will take a while the first time you run it. Subsequent builds run much quicker since Docker caches the results from the first build.
Once the services are running, we need to create the database migrations:
$ docker-compose run web /usr/local/bin/python manage.py migrate
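For the migration to reach Postgres, Django's settings need to point at the linked container rather than localhost. A hedged sketch of what the DATABASES block in web/django_docker/settings.py might look like (the env var names are assumptions tied to the hypothetical .env above):

```python
import os

# The "postgres:postgres" link in docker-compose.yml makes the Postgres
# container reachable at the hostname "postgres" from inside the web container.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USER", "postgres"),
        "PASSWORD": os.environ.get("DB_PASS", ""),
        "HOST": "postgres",  # the Compose link alias, not localhost
        "PORT": 5432,
    }
}
```

The key detail is HOST: inside the container network, "localhost" would point at the web container itself.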
Grab the IP associated with Docker Machine - docker-machine ip dev - and then navigate to that IP in your browser:
Try refreshing. You should see the counter update. Essentially, we're using the Redis INCR command to increment the count after each handled request. Check out the code in web/django_docker/apps/todo/views.py for more info.
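To make the INCR behavior concrete, here is a rough sketch of the counter logic - not the actual view from the repo, and with a fake client standing in for a redis-py connection:

```python
class FakeRedis:
    """Stand-in for a redis-py client, implementing just INCR."""

    def __init__(self):
        self.store = {}

    def incr(self, key):
        # Real Redis INCR is atomic and creates the key at 0 if missing.
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]


def hit_counter(client, key="hits"):
    """Increment and return the page-view counter, as the view might."""
    return client.incr(key)


r = FakeRedis()
for _ in range(3):
    count = hit_counter(r)
print(count)  # 3
```

Because INCR is atomic on the server, concurrent gunicorn workers can share one counter without any locking on the Python side.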
Again, this created four services, all running in different containers:
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------
dockerizingdjango_nginx_1 /usr/sbin/nginx Up 0.0.0.0:80->80/tcp
dockerizingdjango_postgres_1 /docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
dockerizingdjango_redis_1 /entrypoint.sh redis-server Up 0.0.0.0:6379->6379/tcp
dockerizingdjango_web_1 /usr/local/bin/gunicorn do ... Up 8000/tcp
To see which environment variables are available on the web service, run:
$ docker-compose run web env
To view the logs:
$ docker-compose logs
You can also enter the Postgres shell - since we forwarded the port to the host environment in the docker-compose.yml file - to add users/roles as well as databases via:
$ psql -h 192.168.99.100 -p 5432 -U postgres --password
Ready to deploy? Stop the processes via docker-compose stop and let's get the app up in the cloud!
Deployment
So, with our app running locally, we can now push this exact same environment to a cloud hosting provider with Docker Machine. Let's deploy to a Digital Ocean box.
After you sign up for Digital Ocean, generate a Personal Access Token, and then run the following command:
$ docker-machine create \
-d digitalocean \
--digitalocean-access-token=ADD_YOUR_TOKEN_HERE \
production
This will take a few minutes to provision the droplet and set up a new Docker Machine called production:
Running pre-create checks...
Creating machine...
(production) Creating SSH key...
(production) Creating Digital Ocean droplet...
(production) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env production
Now we have two Machines running, one locally and one on Digital Ocean:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dev * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
production - digitalocean Running tcp://45.55.35.188:2376 v1.10.3
Set production as the active machine and load the Docker environment into the shell:
$ eval "$(docker-machine env production)"
Finally, let's build the Django app again in the cloud. This time we need to use a slightly different Docker Compose file that does not mount a volume in the container. Why? Well, the volume is perfect for local development since we can update our local code in the "web" directory and the changes will immediately take effect in the container. In production, there's no need for this, obviously.
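Based on that description, the web service in production.yml might differ from docker-compose.yml roughly like this - a sketch, not the exact file from the repo:

```yaml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app/static   # static assets still shared with nginx; no app-code mount
  env_file: .env
  command: /usr/local/bin/gunicorn django_docker.wsgi:application -w 2 -b :8000
```

With no code mount, the container runs whatever code was baked into the image at build time, which is exactly what you want in production.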
$ docker-compose build
$ docker-compose -f production.yml up -d
$ docker-compose run web /usr/local/bin/python manage.py migrate
Did you notice how we specified a different config file for production? What if you wanted to also run collectstatic? See this issue.
Grab the IP address associated with that Digital Ocean droplet and view it in the browser. If all went well, you should see your app running.
Conclusion
- Grab the code from the repo (star it too).
- Need a challenge? Try using extends to clean up the repetitive code in the two Docker Compose configuration files. Keep it DRY!
- Have a great day!
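The extends approach from the challenge above might look roughly like this - common.yml is a hypothetical shared base file, and each Compose file then layers on only what differs:

```yaml
# common.yml (hypothetical shared base)
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  env_file: .env
---
# docker-compose.yml then inherits the base and adds the dev-only code mount:
web:
  extends:
    file: common.yml
    service: web
  volumes:
    - /usr/src/app
```

production.yml would extend the same base without the code volume, so the two files never duplicate the shared settings.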
Thank you RealPython for this tutorial. This is a rewrite.
Grab the original @ GitHub