
Dockerize a Django App with Postgres and Redis

Devin writes a multi-stage Dockerfile, a docker-compose.yml with Django, PostgreSQL, and Redis, then builds and runs the stack to verify it works.
Author: Cognition
Category: Feature Development
1

(Optional) Scope the project with Ask Devin

If you’re not sure which services your Django app depends on or how the project is structured, use Ask Devin to investigate first. You can also start a Devin session directly from Ask Devin, and it will carry over everything it learned as context.
2

Give Devin your Django project and requirements

Point Devin at the Django project to containerize and mention anything specific — base image preferences, services the app depends on, or image size constraints. Devin reads your requirements.txt or pyproject.toml to figure out the rest.
3

Devin investigates and builds

Devin reads your Django project and dependency files to understand the build process, then creates the Docker configuration:
  1. Reads dependency files — Parses requirements.txt or pyproject.toml, identifies Django, psycopg2, redis, celery, and gunicorn
  2. Writes the Dockerfile — Creates a multi-stage build that installs dependencies in a builder stage and copies the virtual environment into a minimal python:3.12-slim runtime image
# ---- Builder ----
FROM python:3.12-slim AS builder
# Build tools needed to compile psycopg2 from source
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libpq-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /opt/venv \
    && /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

# ---- Runtime ----
FROM python:3.12-slim
# Runtime only needs the Postgres client library, not the build tools
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/* \
    && groupadd -r django && useradd -r -g django django
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /app
COPY src/ .
RUN python manage.py collectstatic --noinput
EXPOSE 8000
USER django
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "3"]
  3. Writes docker-compose.yml — Adds Django, PostgreSQL, Redis, and a Celery worker with health checks, volumes, and a shared network
services:
  web:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/health/')"]
      interval: 10s
      retries: 3

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app_db
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD: app_pass
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app_user -d app_db"]
      interval: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      retries: 5

  celery:
    build: .
    command: celery -A config worker -l info
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy

volumes:
  pgdata:
  4. Adds .dockerignore — Excludes files that don’t belong in the build context (__pycache__, .git, tests/, docs/, *.pyc)
  5. Runs docker compose up --build — Builds the image and starts all services in Devin’s terminal
  6. Verifies the app — Curls /api/health/ to confirm Django starts cleanly, connects to PostgreSQL, and reaches Redis
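Inside the compose network, the service names (`db`, `redis`) resolve as hostnames, which is why the app can reach PostgreSQL and Redis without IP addresses. A minimal sketch of how Django settings might assemble connection settings from the `.env` variables — the helper name `connection_settings` and the `DB_HOST`/`REDIS_URL` variable names are illustrative assumptions, not necessarily what Devin generates:

```python
import os

# Hypothetical helper: build Django connection settings from the
# environment variables supplied via the compose `env_file`. Defaults
# match the credentials and service names in the compose file above.
def connection_settings(env=os.environ):
    return {
        "DATABASES": {
            "default": {
                "ENGINE": "django.db.backends.postgresql",
                "NAME": env.get("POSTGRES_DB", "app_db"),
                "USER": env.get("POSTGRES_USER", "app_user"),
                "PASSWORD": env.get("POSTGRES_PASSWORD", "app_pass"),
                "HOST": env.get("DB_HOST", "db"),    # compose service name
                "PORT": env.get("DB_PORT", "5432"),
            }
        },
        # Celery broker points at the `redis` service by name
        "CELERY_BROKER_URL": env.get("REDIS_URL", "redis://redis:6379/0"),
    }
```

In `settings.py` these values would be unpacked into `DATABASES` and `CELERY_BROKER_URL`; overriding `DB_HOST` (e.g. to `localhost`) keeps the same settings usable outside Docker.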
4

Guide the session with slash commands

Use slash commands during the session to steer Devin’s workflow:
  • /plan — Ask Devin to outline its approach before writing any Docker configuration. Review the plan and suggest changes.
  • /test — Tell Devin to rebuild and re-verify the container stack. Use this after each change to catch issues early.
  • /review — Ask Devin to review its own Dockerfile and compose config for security issues, image size, and best practices before opening the PR.
5

Verify and iterate

Once Devin opens the PR, review the generated files and ask Devin to make any follow-up changes in the same session.
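One common follow-up is making sure the app actually exposes the `/api/health/` endpoint that the compose healthcheck and the verification step probe. The check logic itself can be kept framework-free; the helper below is a sketch under that assumption (`run_checks` is a hypothetical name, not part of Django):

```python
def run_checks(checks):
    """Run named check callables (e.g. a DB ping, a Redis ping).

    Returns (all_ok, results) where results maps each check name to
    "ok" or an error message, so the health endpoint can report
    per-dependency status.
    """
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "ok"
        except Exception as exc:
            results[name] = f"error: {exc}"
    return all(v == "ok" for v in results.values()), results
```

In a Django view this might be wired up as `run_checks({"database": lambda: connection.ensure_connection(), "redis": redis_client.ping})`, returning a `JsonResponse` with status 200 when all checks pass and 503 otherwise.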
6

Review the PR with Devin Review

Once Devin opens the PR, use Devin Review to review the Docker configuration. Devin Review can catch security issues (running as root, exposed secrets), missing best practices (no .dockerignore, no health checks), and inconsistencies with your existing infrastructure.