refactoring + added documentation

- removed `env.production` and added an `env.example` that should be renamed to `.env` (not tracked by git by default)
- Refactored docker-compose.yml
    * added a `user` to django, celeryworker and celerybeat so that we could get rid of the `su` hack
    * removed rabbitmq
- Refactored Dockerfile
- Refactored `entrypoint.sh` and added inline documentation
- Removed `su` hack from gunicorn.sh
- Added documentation
Jay 2015-08-18 17:50:20 +02:00
parent 27a2ed46be
commit c7ea475f06
7 changed files with 144 additions and 36 deletions

View File

@@ -196,6 +196,22 @@ To migrate your app and to create a superuser, run::
$ docker-compose run django python manage.py createsuperuser
If you are using `boot2docker` to develop on OS X or Windows, you need to create a `/data` partition inside your boot2docker
vm to make all changes persistent. If you don't, your `/data` directory will get wiped out on every reboot.
To create a persistent folder, log into the `boot2docker` vm by running::
$ boot2docker ssh
And then::
$ sudo su
$ echo 'ln -sfn /mnt/sda1/data /data' >> /var/lib/boot2docker/bootlocal.sh
Then restart the vm (for example with `boot2docker restart`) so the symlink gets created on the next boot.
In case you are wondering why you can't use a host volume to keep the files on your Mac: as of `boot2docker` 1.7 you'll
run into permission problems with mounted host volumes if the container creates its own user and `chown`s the directories
on the volume. Postgres does that, so we need this quick fix to ensure that all development data persists.
For Readers of Two Scoops of Django 1.8
--------------------------------------------

View File

@@ -1,11 +1,6 @@
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN groupadd -r django && useradd -r -g django django
RUN apt-get update
RUN apt-get -y install libmemcached-dev
# Requirements have to be pulled and installed here, otherwise caching won't work
ADD ./requirements /requirements
ADD ./requirements.txt /requirements.txt
@@ -13,15 +8,15 @@ ADD ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN pip install -r /requirements/local.txt
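# create an unprivileged user up front; docker-compose runs the django, celeryworker and celerybeat containers as this user, replacing the old `su` hack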
RUN groupadd -r django && useradd -r -g django django
ADD . /app
RUN chown -R django /app
ADD ./compose/django/gunicorn.sh /gunicorn.sh
ADD ./compose/django/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
RUN chmod +x /gunicorn.sh
RUN chown -R django /app
RUN chmod +x /entrypoint.sh && chown django /entrypoint.sh
RUN chmod +x /gunicorn.sh && chown django /gunicorn.sh
WORKDIR /app

View File

@@ -281,13 +281,112 @@ When deploying via Dokku make sure you backup your database in some fashion as i
Docker
^^^^^^
You need a working docker and docker-compose installation on your production server.
**Warning**
To get started, clone the git repo containing your projects code and set all needed environment variables in
``env.production``.
Docker is evolving extremely fast, but it still has some rough edges here and there. Compose is currently (as of version 1.4)
not considered production ready. That means you won't be able to scale to multiple servers or run
zero-downtime deployments out of the box. Consider all of this experimental until you understand the implications
of running docker (with compose) in production.
To start docker-compose in the foreground, run:
**Run your app with docker-compose**
.. code-block:: bash
Prerequisites:
* docker (tested with 1.8)
* docker-compose (tested with 1.4)
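You can verify both versions on your server with::
docker --version
docker-compose --version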
Before you start, check out the `docker-compose.yml` file in the root of this project. This is where each component
of this application gets its configuration from. It consists of a `postgres` service that runs the database, `redis`
for caching, `nginx` as a reverse proxy and, last but not least, the `django` application run by gunicorn.
{% if cookiecutter.use_celery == 'y' -%}
Since this application also runs Celery, there are two more services: `celeryworker`, which runs the
celery worker process, and `celerybeat`, which runs the celery beat process.
{% endif %}
All of these services except `redis` rely on environment variables set by you. There is an `env.example` file in the
root directory of this project as a starting point. Add your own variables to the file and rename it to `.env`. This
file won't be tracked by git by default, so you'll have to use some other mechanism to copy your secrets if
you are relying solely on git.
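For illustration, a minimal `.env` could look like the following. The `POSTGRES_*` names are read by the official
postgres image; the other names are just examples, use whatever your settings actually expect::
POSTGRES_USER=myproject
POSTGRES_PASSWORD=supersecretpassword
DJANGO_SECRET_KEY=some-long-random-string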
By default, the application is configured to listen on all interfaces on port 80. If you want to change that, open the
`docker-compose.yml` file and replace `0.0.0.0` with your own IP. If you are using `nginx-proxy`_ to run multiple
application stacks on one host, remove the port setting entirely and add `VIRTUAL_HOST={{cookiecutter.domain_name}}` to your env file.
This makes `nginx-proxy` pass all incoming requests for that host on to the nginx service your application is using.
.. _nginx-proxy: https://github.com/jwilder/nginx-proxy
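If you go that route, the proxy container itself is typically started along these lines (a sketch; see the
`nginx-proxy` README for the authoritative invocation)::
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy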
Postgres saves its database files to `/data/{{cookiecutter.repo_name}}/postgres` by default. Change that if you want
something else, and make sure to take backups since this is not done automatically.
To get started, pull your code from source control (don't forget the `.env` file) and change to your project's root
directory.
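For example (the repository URL is a placeholder)::
git clone https://example.com/you/{{cookiecutter.repo_name}}.git
cd {{cookiecutter.repo_name}}
# .env is not tracked by git, so copy it over separately, e.g. with scp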
You'll need to build the stack first. To do that, run::
docker-compose build
Once this is ready, you can run it with::
docker-compose up
To run a migration, open up a second terminal and run::
docker-compose run django python manage.py migrate
To create a superuser, run::
docker-compose run django python manage.py createsuperuser
If you need a shell, run::
docker-compose run django python manage.py shell_plus
Once you are done with your initial setup, you want to make sure that your application is run by a process manager so it
survives reboots and is restarted automatically in case of an error. You can use the process manager you are most familiar with; all
it needs to do is run `docker-compose up` in your project's root directory.
If you are using `supervisor`, you can use this file as a starting point::
[program:{{cookiecutter.repo_name}}]
command=docker-compose up
directory=/path/to/{{cookiecutter.repo_name}}
redirect_stderr=true
autostart=true
autorestart=true
priority=10
Place it in `/etc/supervisor/conf.d/{{cookiecutter.repo_name}}.conf` and run::
supervisorctl reread
supervisorctl update
supervisorctl start {{cookiecutter.repo_name}}
To get the status, run::
supervisorctl status
If you have errors, you can always check your stack with `docker-compose`. Switch to your project's root directory and run::
docker-compose ps
to get an overview of all running containers.
To check your logs, run::
docker-compose logs
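You can also narrow the output down to a single service by appending its name::
docker-compose logs django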
If you want to scale your application, run::
docker-compose scale django=4
docker-compose scale celeryworker=2
**Don't run the scale command on postgres or celerybeat.** Both must stay single instances: scaled postgres containers
would write to the same data directory, and multiple celerybeat processes would schedule every periodic task more than once.

View File

@@ -1,15 +1,18 @@
#!/bin/bash
set -e
# This entrypoint is used to play nicely with the current cookiecutter configuration.
# Since docker-compose relies heavily on environment variables itself for configuration, we'd have to define multiple
# environment variables just to support cookiecutter out of the box. That makes no sense, so this little entrypoint
# does all this for us.
export DJANGO_CACHE_URL=redis://redis:6379/0
# setting up environment variables to work with DATABASE_URL and DJANGO_CACHE_URL
export DJANGO_CACHE_URL=redis://redis:6379
# the official postgres image uses 'postgres' as the default user if not set explicitly.
if [ -z "$POSTGRES_ENV_POSTGRES_USER" ]; then
    export POSTGRES_ENV_POSTGRES_USER=postgres
fi
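# docker links expose the linked container's environment as <ALIAS>_ENV_<VAR>, so the postgres container's own
# credentials can be reused to assemble the database URL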
export DATABASE_URL=postgres://$POSTGRES_ENV_POSTGRES_USER:$POSTGRES_ENV_POSTGRES_PASSWORD@postgres:5432/$POSTGRES_ENV_POSTGRES_USER
{% if cookiecutter.use_celery %}
export CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
{% if cookiecutter.use_celery == 'y' %}
export CELERY_BROKER_URL=$DJANGO_CACHE_URL
{% endif %}
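# hand over to the requested command (gunicorn, celery, ...) so it runs as PID 1 and receives signals directly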
exec "$@"

View File

@@ -1,3 +1,3 @@
#!/bin/sh
su -m django -c "python /app/manage.py collectstatic --noinput"
su -m django -c "/usr/local/bin/gunicorn config.wsgi -w 4 -b 0.0.0.0:5000 --chdir=/app"
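# no more `su`: docker-compose starts this container as the unprivileged `django` user via its `user` setting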
python /app/manage.py collectstatic --noinput
/usr/local/bin/gunicorn config.wsgi -w 4 -b 0.0.0.0:5000 --chdir=/app

View File

@@ -2,18 +2,16 @@ postgres:
  image: postgres:9.4
  volumes:
    - /data/{{cookiecutter.repo_name}}/postgres:/var/lib/postgresql/data
  env_file: env.production
  env_file: .env
django:
  build: .
  user: django
  links:
    - postgres
    - redis
{% if cookiecutter.use_celery %}
    - rabbitmq
{% endif %}
  command: /gunicorn.sh
  env_file: env.production
  env_file: .env
nginx:
  build: ./compose/nginx
@@ -24,25 +22,22 @@ nginx:
redis:
  image: redis:3.0
{% if cookiecutter.use_celery %}
rabbitmq:
  image: rabbitmq
{% if cookiecutter.use_celery == 'y' %}
celeryworker:
  build: .
  env_file: env.production
  user: django
  env_file: .env
  links:
    - rabbitmq
    - postgres
    - redis
  command: su -m django -c "celery -A {{cookiecutter.repo_name}}.taskapp worker -l INFO"
  command: celery -A {{cookiecutter.repo_name}}.taskapp worker -l INFO
celerybeat:
  build: .
  env_file: env.production
  user: django
  env_file: .env
  links:
    - rabbitmq
    - postgres
    - redis
  command: su -m django -c "celery -A {{cookiecutter.repo_name}}.taskapp beat -l INFO"
  command: celery -A {{cookiecutter.repo_name}}.taskapp beat -l INFO
{% endif %}