
Debugging code within a Docker container


Docker everywhere

At Adimian, Docker has really changed the way we deploy applications. We’ve been playing with it since the beta, and have been using it in production since version 1.0.

By the end of 2015, we plan to have converted all our deployments to containers, migrating applications to Docker if they were not using it from the beginning.

However, my colleagues sometimes complain that Docker does not offer the level of flexibility that a code checkout plus supervisor gives you, mostly when it comes to debugging production code.

Some people take full snapshots of the containers running in production and spawn them locally to reproduce bugs, but this does not come for free: you need to copy a lot of data between the customer’s machines and your development laptop.
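
In practice, that workflow looks something like the sketch below (the container, image, and host names are made up for the example); the docker save archive is the bulk of what has to travel over the wire:

# Hypothetical snapshot workflow -- prod_app_1, app-debug and customer-host are example names.
# On the customer machine: freeze the running container into an image and export it.
docker commit prod_app_1 app-debug:snapshot
docker save app-debug:snapshot | gzip > /tmp/app-snapshot.tar.gz

# On the development laptop: fetch the archive, load it and poke around.
scp customer-host:/tmp/app-snapshot.tar.gz .
gunzip -c app-snapshot.tar.gz | docker load
docker run -it app-debug:snapshot /bin/bash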

Docker exec to the rescue!

Say I’m running a container called kabuto: a regular Flask-based application, started with gunicorn.

In my Dockerfile:

...
CMD gunicorn -w $WORKERS -b $HOST:$PORT kabuto.api:app

Now I run this container; docker ps shows it up and running:

7cffffa2f91e kabuto "/bin/sh -c 'gunicor 13 minutes ago Up 13 minutes demo_kabuto_1
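
The exact run command is not shown here (the demo_kabuto_1 name hints at a docker-compose project), but a hand-run equivalent, using the worker and port values that appear in the ps output further down, would look roughly like this:

# Hypothetical equivalent of how demo_kabuto_1 might have been started by hand;
# the WORKERS/HOST/PORT values mirror the gunicorn command line seen later in ps aux.
docker run -d --name demo_kabuto_1 \
    -e WORKERS=1 -e HOST=0.0.0.0 -e PORT=5000 \
    kabuto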

Alas, there seems to be a bug in my code, triggered when a user logs in to the application. I have the user on the phone, and we’re trying to reproduce the problem.

I then have several options:

  1. I have previously installed SSH in my container and gone through the effort of having it started with runit or supervisor, so I can ssh into the container
  2. I copy the whole environment locally, ask my user how to reproduce the problem, and then hang up the phone to work offline
  3. I use docker exec
12:37:37-eric@monarch:demo$ docker exec -u root -it b1f027b1baba /bin/bash

root@b1f027b1baba:/source# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
www-data 1 0.0 0.0 4448 776 ? Ss 10:24 0:00 /bin/sh -c gunicorn -w $WORKERS -b $HOST:$PORT kabuto:app
www-data 8 0.0 0.9 58792 19404 ? S 10:24 0:00 /usr/bin/python3 /usr/local/bin/gunicorn -w 1 -b 0.0.0.0:5000 kabuto:app
www-data 206 0.5 1.9 238328 39184 ? Sl 10:26 0:07 /usr/bin/python3 /usr/local/bin/gunicorn -w 1 -b 0.0.0.0:5000 kabuto:app
root 210 0.0 0.1 18204 3336 ? Ss 10:46 0:00 /bin/bash
root 243 0.0 0.1 15572 2148 ? R+ 10:49 0:00 ps aux

root@b1f027b1baba:/source# export TERM=xterm && apt-get update && apt-get install -y vim

Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg

root@b1f027b1baba:/source# vim kabuto/api.py
… add some prints, logging, …


OK, great, I’ve made changes to my code, but now how do I make the container use the new code?

As gunicorn is effectively the container’s main process (PID 1 here is just the /bin/sh -c wrapper around it), there is no way to restart it without Docker considering the main process dead and terminating the whole container. Again, I have a few options:

  1. I could wrap gunicorn in a service and run runit or supervisor as PID 1, allowing me to restart the service inside the container
  2. gunicorn also supports reloading via kill -HUP. Unfortunately, this does not seem to work when sent to PID 1, so I had to loop over each gunicorn worker to force a reload (see the sketch right after this list)
  3. Since version 19.0, gunicorn supports the --reload flag, which lets you change the code on the fly and have gunicorn reload it whenever a file’s mtime changes
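
As a rough illustration of option 2, here is the kind of loop that can be run from the docker exec shell above. The pgrep pattern is an assumption based on the ps aux output (pgrep ships with procps, like ps), and the trick relies on gunicorn not preloading the app, which is the default:

# Hypothetical sketch: signal every gunicorn worker so the master respawns it,
# and the fresh worker re-imports the modified code (assumes no --preload).
master=$(pgrep -o -f 'local/bin/gunicorn')   # oldest matching PID = the master (PID 8 above)
for pid in $(pgrep -f 'local/bin/gunicorn'); do
    [ "$pid" = "$master" ] && continue       # leave the master alone
    kill -HUP "$pid"                         # the worker exits and gets respawned by the master
done

Sending HUP straight to the gunicorn master (PID 8 above, not PID 1) is also the documented way to trigger a reload; the loop above simply mirrors what option 2 describes.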

The --reload option still requires me to change the CMD in my Dockerfile, so it’s not suitable for instant debugging, but if all my images use this flag by default, I’m future-proof:

CMD gunicorn --reload -w $WORKERS -b $HOST:$PORT kabuto.api:app
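
With --reload baked in, the edits made through docker exec and vim above are picked up as soon as the files are saved, without ever touching PID 1.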

So now, there are no more excuses not to use Docker everywhere!