Rename docker-compose.yml to production.yml

Nikita P. Shupeyko 2017-07-10 15:12:20 +03:00
parent ab47d8173d
commit 8835e5105b
6 changed files with 30 additions and 30 deletions

View File

@ -12,7 +12,7 @@ Prerequisites
Understand the Compose Setup
--------------------------------
Before you start, check out the `docker-compose.yml` file in the root of this project. This is where each component
Before you start, check out the `production.yml` file in the root of this project. This is where each component
of this application gets its configuration from. Notice how it provides configuration for these services:
* `postgres` service that runs the database
@ -63,7 +63,7 @@ Optional: nginx-proxy Setup
---------------------------
By default, the application is configured to listen on all interfaces on port 80. If you want to change that, open the
`docker-compose.yml` file and replace `0.0.0.0` with your own ip.
`production.yml` file and replace `0.0.0.0` with your own IP address.
If you are using `nginx-proxy`_ to run multiple application stacks on one host, remove the port setting entirely and add `VIRTUAL_HOST=example.com` to your env file. Here, replace example.com with the value you entered for `domain_name`.
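For instance, if your stack reads its settings from an env file (the file name below is only an assumption; use whatever file your `env_file` entries point at), the addition might look like::

    # hypothetical .env entry for nginx-proxy; substitute your own domain
    VIRTUAL_HOST=example.com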
@ -87,7 +87,7 @@ Replace dhparam.pem.example with a generated dhparams.pem file before running an
$ openssl dhparam -out /path/to/project/compose/nginx/dhparams.pem 2048
If you would like to add additional subdomains to your certificate, you must add additional parameters to the certbot command in the `docker-compose.yml` file:
If you would like to add additional subdomains to your certificate, you must add additional parameters to the certbot command in the `production.yml` file:
Replace:
@ -110,7 +110,7 @@ If you would like to set up autorenewal of your certificates, the following comm
#!/bin/bash
cd <project directory>
docker-compose run --rm --name certbot certbot bash -c "sleep 6 && certbot certonly --standalone -d {{ cookiecutter.domain_name }} --test --agree-tos --email {{ cookiecutter.email }} --server https://acme-v01.api.letsencrypt.org/directory --rsa-key-size 4096 --verbose --keep-until-expiring --preferred-challenges http-01"
docker-compose -f production.yml run --rm --name certbot certbot bash -c "sleep 6 && certbot certonly --standalone -d {{ cookiecutter.domain_name }} --test --agree-tos --email {{ cookiecutter.email }} --server https://acme-v01.api.letsencrypt.org/directory --rsa-key-size 4096 --verbose --keep-until-expiring --preferred-challenges http-01"
docker exec {{ cookiecutter.project_name }}_nginx_1 nginx -s reload
And then set up a cron job by running `crontab -e` and placing an entry like the one below in it (the schedule can be adjusted as desired)::
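    # hypothetical entry: run the renewal script above every Monday at 04:00;
    # replace /path/to/renew_certs.sh with wherever you saved that script
    0 4 * * 1 /path/to/renew_certs.sh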
@ -125,40 +125,40 @@ directory.
You'll need to build the stack first. To do that, run::
docker-compose build
docker-compose -f production.yml build
Once this is ready, you can run it with::
docker-compose up
docker-compose -f production.yml up
To run a migration, open up a second terminal and run::
docker-compose run django python manage.py migrate
docker-compose -f production.yml run django python manage.py migrate
To create a superuser, run::
docker-compose run django python manage.py createsuperuser
docker-compose -f production.yml run django python manage.py createsuperuser
If you need a shell, run::
docker-compose run django python manage.py shell
docker-compose -f production.yml run django python manage.py shell
To get an output of all running containers, use the `ps` command shown at the end of this section.
To check your logs, run::
docker-compose logs
docker-compose -f production.yml logs
If you want to scale your application, run::
docker-compose scale django=4
docker-compose scale celeryworker=2
docker-compose -f production.yml scale django=4
docker-compose -f production.yml scale celeryworker=2
.. warning:: Don't run the scale command on postgres, celerybeat, certbot, or nginx.
If you have errors, you can always check your stack with `docker-compose`. Switch to your project's root directory and run::
docker-compose ps
docker-compose -f production.yml ps
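If you would rather not repeat `-f production.yml` with every command, you can point the ``COMPOSE_FILE`` environment variable at it, as the local-development docs in this change also mention; this is a standard Compose feature rather than anything project-specific::

    export COMPOSE_FILE=production.yml
    docker-compose ps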
Supervisor Example
@ -166,12 +166,12 @@ Supervisor Example
Once you are ready with your initial setup, you want to make sure that your application is run by a process manager so that it
survives reboots and restarts automatically in case of an error. You can use the process manager you are most familiar with. All
it needs to do is to run `docker-compose up` in your projects root directory.
it needs to do is to run `docker-compose -f production.yml up` in your project's root directory.
If you are using `supervisor`, you can use this file as a starting point::
[program:{{cookiecutter.project_slug}}]
command=docker-compose up
command=docker-compose -f production.yml up
directory=/path/to/{{cookiecutter.project_slug}}
redirect_stderr=true
autostart=true
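Once that file is in place (for example under `/etc/supervisor/conf.d/`; the exact path depends on your distribution), a typical way to load and start the program is::

    supervisorctl reread
    supervisorctl update
    supervisorctl start {{cookiecutter.project_slug}}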

View File

@ -39,7 +39,7 @@ on your development system::
$ docker-compose -f local.yml build
If you want to build the production environment you don't have to pass an argument -f, it will automatically use docker-compose.yml.
If you want to build the production environment, pass `-f production.yml` in the same way, e.g. `docker-compose -f production.yml build`, since Compose only picks up `docker-compose.yml` automatically.
Boot the System
---------------
@ -59,13 +59,13 @@ You can also set the environment variable ``COMPOSE_FILE`` pointing to ``local.y
And then run::
$ docker-compose up
$ docker-compose -f production.yml up
Running management commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~
As with any shell command that we wish to run in our container, this is done
using the ``docker-compose run`` command.
using the ``docker-compose -f production.yml run`` command.
To migrate your app and to create a superuser, run::
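    # for example, mirroring the commands shown in the deployment docs;
    # substitute local.yml here if you are working against the local setup
    $ docker-compose -f production.yml run django python manage.py migrate
    $ docker-compose -f production.yml run django python manage.py createsuperuser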
@ -82,7 +82,7 @@ When ``DEBUG`` is set to `True`, the host is validated against ``['localhost', '
Production Mode
~~~~~~~~~~~~~~~
Instead of using `local.yml`, you would use `docker-compose.yml`.
Instead of using `local.yml`, you would use `production.yml`.
Other Useful Tips
-----------------

View File

@ -130,7 +130,7 @@ def remove_docker_files():
"""
Removes files needed for Docker if it isn't going to be used
"""
for filename in ["local.yml", "docker-compose.yml", ".dockerignore"]:
for filename in ["local.yml", "production.yml", ".dockerignore"]:
os.remove(os.path.join(
PROJECT_DIRECTORY, filename
))

View File

@ -17,10 +17,10 @@ export PGPASSWORD=$POSTGRES_PASSWORD
# check that we have an argument for a filename candidate
if [[ $# -eq 0 ]] ; then
echo 'usage:'
echo ' docker-compose run postgres restore <backup-file>'
echo ' docker-compose -f production.yml run postgres restore <backup-file>'
echo ''
echo 'to get a list of available backups, run:'
echo ' docker-compose run postgres list-backups'
echo ' docker-compose -f production.yml run postgres list-backups'
exit 1
fi
@ -31,7 +31,7 @@ BACKUPFILE=/backups/$1
if ! [ -f $BACKUPFILE ]; then
echo "backup file not found"
echo 'to get a list of available backups, run:'
echo ' docker-compose run postgres list-backups'
echo ' docker-compose -f production.yml run postgres list-backups'
exit 1
fi

View File

@ -29,7 +29,7 @@ The Docker compose tool (previously known as `fig`_) makes linking these contain
webserver/
Dockerfile
...
docker-compose.yml
production.yml
Each component of your application would get its own `Dockerfile`_. The rest of this example assumes you are using the `base postgres image`_ for your database. Your database settings in `config/base.py` might then look something like:
@ -48,7 +48,7 @@ Each component of your application would get its own `Dockerfile`_. The rest of
}
}
The `Docker compose documentation`_ explains in detail what you can accomplish in the `docker-compose.yml` file, but an example configuration might look like this:
The `Docker compose documentation`_ explains in detail what you can accomplish in the `production.yml` file, but an example configuration might look like this:
.. _Docker compose documentation: https://docs.docker.com/compose/#compose-documentation
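While you are iterating on that file, Compose itself can validate it and print the fully resolved configuration; this is a general `docker-compose` feature rather than anything specific to this project::

    sudo docker-compose -f production.yml config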
@ -107,9 +107,9 @@ We'll ignore the webserver for now (you'll want to comment that part out while w
# uncomment the line below to use container as a non-root user
USER python:python
Running `sudo docker-compose build` will follow the instructions in your `docker-compose.yml` file and build the database container, then your webapp, before mounting your cookiecutter project files as a volume in the webapp container and linking to the database. Our example yaml file runs in development mode but changing it to production mode is as simple as commenting out the line using `runserver` and uncommenting the line using `gunicorn`.
Running `sudo docker-compose -f production.yml build` will follow the instructions in your `production.yml` file and build the database container, then your webapp, before mounting your cookiecutter project files as a volume in the webapp container and linking to the database. Our example YAML file runs in development mode, but changing it to production mode is as simple as commenting out the line using `runserver` and uncommenting the line using `gunicorn`.
Both are set to run on port `0.0.0.0:8000`, which is where the Docker daemon will discover it. You can now run `sudo docker-compose up` and browse to `localhost:8000` to see your application running.
Both are set to listen on `0.0.0.0:8000`, which is where the Docker daemon will discover the application. You can now run `sudo docker-compose -f production.yml up` and browse to `localhost:8000` to see your application running.
Deployment
^^^^^^^^^^
@ -155,7 +155,7 @@ That Dockerfile assumes you have an Nginx conf file named `site.conf` in the sam
}
}
Running `sudo docker-compose build webserver` will build your server container. Running `sudo docker-compose up` will now expose your application directly on `localhost` (no need to specify the port number).
Running `sudo docker-compose -f production.yml build webserver` will build your server container. Running `sudo docker-compose -f production.yml up` will now expose your application directly on `localhost` (no need to specify the port number).
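To confirm the webserver container is answering, you can make a request from the host itself (assuming `curl` is installed)::

    curl -I http://localhost/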
Building and running your app on EC2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -166,9 +166,9 @@ All you now need to do to run your app in production is:
* Install your preferred source control solution, Docker, and Docker Compose on the new instance.
* Pull in your code from source control. The root directory should be the one with your `docker-compose.yml` file in it.
* Pull in your code from source control. The root directory should be the one with your `production.yml` file in it.
* Run `sudo docker-compose build` and `sudo docker-compose up`.
* Run `sudo docker-compose -f production.yml build` and `sudo docker-compose -f production.yml up`.
* Assign an `Elastic IP address`_ to your new machine.
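Putting those steps together, a first session on the new instance might look something like this (the repository URL is a placeholder, and installing git, Docker, and Docker Compose is left to your distribution's packages)::

    # hypothetical bootstrap; adapt to your own repository and distribution
    git clone <your-repo-url> {{cookiecutter.project_slug}}
    cd {{cookiecutter.project_slug}}
    sudo docker-compose -f production.yml build
    sudo docker-compose -f production.yml up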