Merge pull request #284 from jayfk/master

Enhanced Docker Support using docker-compose
This commit is contained in:
Daniel Greenfeld 2015-08-18 09:50:21 -07:00
commit 8853abc2f5
10 changed files with 330 additions and 1 deletion

View File

@@ -37,6 +37,7 @@ Features
* Pre configured Celery_ (optional)
* Integration with Maildump_ for local email testing (optional)
* Integration with Sentry_ for error logging (optional)
* Docker support using docker-compose_ for dev and prod
.. _Hitch: https://github.com/hitchtest/hitchtest
.. _Bootstrap: https://github.com/twbs/bootstrap
@@ -51,6 +52,7 @@ Features
.. _Celery: http://www.celeryproject.org/
.. _Maildump: https://github.com/ThiefMaster/maildump
.. _Sentry: https://getsentry.com
.. _docker-compose: https://www.github.com/docker/compose
Constraints
@@ -166,6 +168,50 @@ To get live reloading to work you'll probably need to install an `appropriate br
It's time to write the code!!!
Getting up and running using docker
----------------------------------
The steps below will get you up and running with a local development environment. We assume you have the following installed:
* docker
* docker-compose
Open a terminal at the project root and run the following for local development::
$ docker-compose -f dev.yml up
You can also set the environment variable ``COMPOSE_FILE`` pointing to ``dev.yml`` like this::
$ export COMPOSE_FILE=dev.yml
And then run::
$ docker-compose up
To migrate your app and to create a superuser, run::
$ docker-compose run django python manage.py migrate
$ docker-compose run django python manage.py createsuperuser
If you are using `boot2docker` to develop on OS X or Windows, you need to create a `/data` partition inside your boot2docker
VM to make all changes persistent. If you don't do that, your `/data` directory will get wiped out on every reboot.
To create a persistent folder, log into the `boot2docker` VM by running::
$ boot2docker ssh
And then::
$ sudo su
$ echo 'ln -sfn /mnt/sda1/data /data' >> /var/lib/boot2docker/bootlocal.sh
In case you are wondering why you can't use a host volume to keep the files on your Mac: as of `boot2docker` 1.7 you'll
run into permission problems with mounted host volumes if the container creates its own user and `chown`s the directories
on the volume. Postgres does exactly that, so we need this quick fix to ensure that all development data persists.
For Readers of Two Scoops of Django 1.8
--------------------------------------------

View File

@@ -0,0 +1,23 @@
FROM python:2.7
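# send python output straight through to the docker logs instead of buffering it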
ENV PYTHONUNBUFFERED 1
# Requirements have to be pulled and installed here, otherwise caching won't work
ADD ./requirements /requirements
ADD ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN pip install -r /requirements/local.txt
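# create an unprivileged 'django' user and group to run the application under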
RUN groupadd -r django && useradd -r -g django django
ADD . /app
RUN chown -R django /app
ADD ./compose/django/gunicorn.sh /gunicorn.sh
ADD ./compose/django/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && chown django /entrypoint.sh
RUN chmod +x /gunicorn.sh && chown django /gunicorn.sh
WORKDIR /app
ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -200,7 +200,7 @@ The testing framework runs Django, Celery (if enabled), Postgres, HitchSMTP (a m
Deployment
----------
It is possible to deploy to Heroku, to your own server using Dokku (an open source Heroku clone), or using docker-compose.
Heroku
^^^^^^
@@ -277,3 +277,116 @@ You can then deploy by running the following commands.
ssh -t dokku@yourservername.com dokku run {{cookiecutter.repo_name}} python manage.py createsuperuser
When deploying via Dokku, make sure you back up your database in some fashion, as it is NOT done automatically.
Docker
^^^^^^
**Warning**
Docker is evolving extremely fast, but it still has some rough edges here and there. Compose is currently (as of version 1.4)
not considered production ready. That means you won't be able to scale to multiple servers or run zero downtime
deployments out of the box. Consider all of this experimental until you understand the implications of running
docker (with compose) in production.
**Run your app with docker-compose**
Prerequisites:
* docker (tested with 1.8)
* docker-compose (tested with 1.4)
Before you start, check out the `docker-compose.yml` file in the root of this project. This is where each component
of this application gets its configuration from. It consists of a `postgres` service that runs the database, `redis`
for caching, `nginx` as a reverse proxy, and last but not least the `django` application, run by gunicorn.
{% if cookiecutter.use_celery == 'y' -%}
Since this application also runs Celery, there are two more services: `celeryworker`, which runs the
celery worker process, and `celerybeat`, which runs the celery beat process.
{% endif %}
All of these services except `redis` rely on environment variables set by you. There is an `env.example` file in the
root directory of this project as a starting point. Add your own variables to the file and rename it to `.env`. This
file won't be tracked by git by default, so if you are relying solely on git you'll have to use some other mechanism
to copy your secrets.
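For example, from the project root (with `editor` standing in for whatever editor you prefer), this could be as simple as::
$ mv env.example .env
$ editor .env  # fill in DJANGO_SECRET_KEY and the other empty values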
By default, the application is configured to listen on all interfaces on port 80. If you want to change that, open the
`docker-compose.yml` file and replace `0.0.0.0` with your own IP. If you are using `nginx-proxy`_ to run multiple
application stacks on one host, remove the port setting entirely and add `VIRTUAL_HOST={{cookiecutter.domain_name}}` to your env file.
This makes `nginx-proxy` pass all incoming requests for that host on to the nginx service your application is using.
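A minimal sketch of what the `nginx` service in `docker-compose.yml` could then look like (an illustration, not part of
the generated project; it assumes `VIRTUAL_HOST` is set in your `.env` file)::
nginx:
  build: ./compose/nginx
  links:
    - django
  env_file: .env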
.. _nginx-proxy: https://github.com/jwilder/nginx-proxy
Postgres saves its database files to `/data/{{cookiecutter.repo_name}}/postgres` by default. Change that if you want
something else, and make sure to take backups, since this is not done automatically.
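For example, to keep the data on a dedicated disk, you would only change the host side of the volume mapping in
`docker-compose.yml` (the path below is just an illustration)::
postgres:
  image: postgres:9.4
  volumes:
    - /mnt/data-disk/{{cookiecutter.repo_name}}/postgres:/var/lib/postgresql/data
  env_file: .env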
To get started, pull your code from source control (don't forget the `.env` file) and change to your project's root
directory.
You'll need to build the stack first. To do that, run::
docker-compose build
Once this is ready, you can run it with::
docker-compose up
To run a migration, open up a second terminal and run::
docker-compose run django python manage.py migrate
To create a superuser, run::
docker-compose run django python manage.py createsuperuser
If you need a shell, run::
docker-compose run django python manage.py shell_plus
Once you are done with your initial setup, you'll want to make sure that your application is run by a process manager, so
that it survives reboots and is restarted automatically in case of an error. You can use the process manager you are most
familiar with. All it needs to do is run `docker-compose up` in your project's root directory.
If you are using `supervisor`, you can use this file as a starting point::
[program:{{cookiecutter.repo_name}}]
command=docker-compose up
directory=/path/to/{{cookiecutter.repo_name}}
redirect_stderr=true
autostart=true
autorestart=true
priority=10
Place it in `/etc/supervisor/conf.d/{{cookiecutter.repo_name}}.conf` and run::
supervisorctl reread
supervisorctl update
supervisorctl start {{cookiecutter.repo_name}}
To get the status, run::
supervisorctl status
If you run into errors, you can always check your stack with `docker-compose`. Switch to your project's root directory and run::
docker-compose ps
to get an output of all running containers.
To check your logs, run::
docker-compose logs
If you want to scale your application, run::
docker-compose scale django=4
docker-compose scale celeryworker=2
**Don't run the scale command on `postgres` or `celerybeat`.** There must be exactly one of each: multiple postgres
containers would share (and corrupt) the same data volume, and multiple beat schedulers would queue every periodic
task more than once.

View File

@@ -0,0 +1,18 @@
#!/bin/bash
set -e
# This entrypoint is used to play nicely with the current cookiecutter configuration.
# Since docker-compose relies heavily on environment variables itself for configuration, we'd have to define multiple
# environment variables just to support cookiecutter out of the box. That makes no sense, so this little entrypoint
# does all this for us.
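# the 'redis' hostname resolves to the linked redis container (see docker-compose.yml)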
export DJANGO_CACHE_URL=redis://redis:6379/0
# the official postgres image uses 'postgres' as the default user if not set explicitly.
if [ -z "$POSTGRES_ENV_POSTGRES_USER" ]; then
export POSTGRES_ENV_POSTGRES_USER=postgres
fi
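# build DATABASE_URL from the env vars docker injects for the linked postgres container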
export DATABASE_URL=postgres://$POSTGRES_ENV_POSTGRES_USER:$POSTGRES_ENV_POSTGRES_PASSWORD@postgres:5432/$POSTGRES_ENV_POSTGRES_USER
{% if cookiecutter.use_celery == 'y' %}
export CELERY_BROKER_URL=$DJANGO_CACHE_URL
{% endif %}
exec "$@"

View File

@@ -0,0 +1,3 @@
#!/bin/sh
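# gather all static assets into STATIC_ROOT before starting the app server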
python /app/manage.py collectstatic --noinput
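# bind on port 5000 to match the 'app' upstream in compose/nginx/nginx.conf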
/usr/local/bin/gunicorn config.wsgi -w 4 -b 0.0.0.0:5000 --chdir=/app

View File

@@ -0,0 +1,2 @@
FROM nginx:latest
ADD nginx.conf /etc/nginx/nginx.conf

View File

@@ -0,0 +1,53 @@
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
upstream app {
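    # the django service's gunicorn binds to port 5000 (see compose/django/gunicorn.sh)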
server django:5000;
}
server {
listen 80;
charset utf-8;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
}
}

View File

@@ -0,0 +1,16 @@
postgres:
image: postgres
volumes:
# If you are using boot2docker, postgres data has to live in the VM for now until #581 is fixed
# for more info see here: https://github.com/boot2docker/boot2docker/issues/581
- /data/{{cookiecutter.repo_name}}/postgres:/var/lib/postgresql/data
django:
build: .
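  # runserver_plus is provided by django-extensions and wraps runserver with the Werkzeug debugger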
command: python /app/manage.py runserver_plus 0.0.0.0:8000
volumes:
- .:/app
ports:
- "8000:8000"
links:
- postgres

View File

@@ -0,0 +1,43 @@
postgres:
image: postgres:9.4
volumes:
- /data/{{cookiecutter.repo_name}}/postgres:/var/lib/postgresql/data
env_file: .env
django:
build: .
user: django
links:
- postgres
- redis
command: /gunicorn.sh
env_file: .env
nginx:
build: ./compose/nginx
links:
- django
ports:
- "0.0.0.0:80:80"
redis:
image: redis:3.0
{% if cookiecutter.use_celery == 'y' %}
celeryworker:
build: .
user: django
env_file: .env
links:
- postgres
- redis
command: celery -A {{cookiecutter.repo_name}}.taskapp worker -l INFO
celerybeat:
build: .
user: django
env_file: .env
links:
- postgres
- redis
command: celery -A {{cookiecutter.repo_name}}.taskapp beat -l INFO
{% endif %}

View File

@@ -0,0 +1,12 @@
POSTGRES_PASSWORD=mysecretpass
POSTGRES_USER=postgresuser
DJANGO_SETTINGS_MODULE=config.settings.production
DJANGO_SECRET_KEY=
DJANGO_AWS_ACCESS_KEY_ID=
DJANGO_AWS_SECRET_ACCESS_KEY=
DJANGO_AWS_STORAGE_BUCKET_NAME=
DJANGO_MAILGUN_API_KEY=
DJANGO_MAILGUN_SERVER_NAME=
DJANGO_SERVER_EMAIL=
DJANGO_SECURE_SSL_REDIRECT=False