Complete CI/CD Pipeline with Docker, GitHub Actions, and Hetzner Deployment

Setting up a robust CI/CD pipeline is essential for modern application development. In this tutorial, we’ll walk through building a complete continuous integration and deployment system using GitHub Actions, Docker containers, Alembic database migrations, and Hetzner Cloud infrastructure.

Architecture Overview

Our CI/CD pipeline includes:

  • GitHub Actions for automation and orchestration
  • Docker & Docker Compose for containerization
  • Alembic for database schema migrations
  • NGINX as a reverse proxy
  • Hetzner Cloud as our deployment target
  • FastAPI backend with React frontend (example application)

Prerequisites

Before starting, ensure you have:

  • A GitHub repository for your application
  • A Hetzner Cloud account and server
  • Docker and Docker Compose installed locally for testing
  • SSH access to your deployment server

Setting Up Your Hetzner Server

When creating your Hetzner Cloud server:

  1. Create a new server in the Hetzner Cloud Console
  2. Choose your location (closest to your users)
  3. Select an image: Ubuntu 22.04 or 24.04 LTS
  4. Important: Under “Apps”, select Docker CE - this pre-installs Docker and Docker Compose
  5. Add your SSH key for secure access
  6. Create the server

Using the Docker CE app saves setup time and ensures Docker is properly configured from the start.
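
If you prefer the CLI to the console, the same server can be created with hcloud. This is a sketch only; the server name, type, location, and SSH key name are placeholders, and it assumes the Docker CE app image is exposed under the name docker-ce:

# Create a Docker-ready server from the command line (placeholder names)
hcloud server create \
  --name myapp-prod \
  --type cx22 \
  --location nbg1 \
  --image docker-ce \
  --ssh-key my-deploy-key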

Part 1: Containerizing Your Application

Backend Dockerfile

First, let’s create a production-ready Dockerfile for our Python/FastAPI backend:

# Use Python 3.10 slim image
FROM python:3.10-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    wget \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create directory for SQLite database
RUN mkdir -p /app/data

# Expose port
EXPOSE 8000

# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV DATABASE_PATH=/app/data/compliance.db

# Run the application
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "8000"]

Key Points:

  • Single-stage build that installs system dependencies before Python packages
  • Layer caching optimization by copying requirements.txt before the application code
  • Exposes the unprivileged port 8000 (EXPOSE documents intent; actual publishing happens in Compose)
  • Environment variables for configuration
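
Before wiring this image into Compose, a quick local smoke test helps catch build errors early (assumes the Dockerfile lives in ./server and the /api/health route used by the health checks below exists):

# Build and run the backend image locally
docker build -t myapp-server ./server
docker run --rm -p 8000:8000 myapp-server

# In another terminal, hit the health endpoint
curl http://localhost:8000/api/health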

Docker Compose Production Configuration

Create docker-compose.prod.yml to orchestrate your services:

services:
  # Backend Server
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    container_name: myapp-server
    expose:
      - "8000"
    environment:
      - ENVIRONMENT=production
      - JWT_SECRET_KEY=${JWT_SECRET_KEY}
      - SOME_API_KEY=${SOME_API_KEY}
      - FRONTEND_URL=${FRONTEND_URL:-http://localhost}
      - DATABASE_PATH=/app/data/compliance.db
    volumes:
      # Persist database
      - ./server/data:/app/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - app-network

  # NGINX Reverse Proxy (serves static files + proxies API)
  nginx:
    image: nginx:alpine
    container_name: app-nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.prod.conf:/etc/nginx/conf.d/default.conf:ro
      - ./client/dist:/usr/share/nginx/html:ro
    depends_on:
      - server
    restart: unless-stopped
    healthcheck:
      # nginx:alpine ships BusyBox wget but not curl
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

Architecture Benefits:

  • Isolated network for service communication
  • Health checks for monitoring
  • Volume persistence for database
  • Automatic restarts on failure
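
You can catch syntax and interpolation mistakes before the first deployment; config renders the resolved configuration with environment substitution applied, without starting anything:

# Validate the Compose file and inspect the resolved configuration
docker-compose -f docker-compose.prod.yml config

# Bring the stack up locally and check service state
docker-compose -f docker-compose.prod.yml up -d
docker-compose -f docker-compose.prod.yml ps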

NGINX Reverse Proxy Configuration

Create nginx/nginx.prod.conf:

# Name the upstream "backend" to avoid shadowing the Compose service name
upstream backend {
    server server:8000;
}

server {
    listen 80;
    server_name localhost;

    # Increase buffer sizes for larger requests
    client_max_body_size 50M;
    client_body_buffer_size 128k;

    # Serve static frontend files
    root /usr/share/nginx/html;
    index index.html;

    # API routes - proxy to backend server
    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Timeouts for long-running operations
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
    }

    # Health check endpoint
    location /health {
        access_log off;
        default_type text/plain;
        return 200 "healthy\n";
    }

    # Frontend routes - serve static files with SPA fallback
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}

NGINX Features:

  • Reverse proxy for API requests
  • Static file serving for frontend
  • SPA routing support with fallback to index.html
  • Asset caching for performance
  • Health check endpoint
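
After editing this file, you can lint and apply it inside the running container; nginx -t resolves the upstream hostname, so it needs to run on the Compose network rather than on a bare host:

# Validate the configuration, then reload without downtime
docker-compose -f docker-compose.prod.yml exec nginx nginx -t
docker-compose -f docker-compose.prod.yml exec nginx nginx -s reload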

Part 2: Database Migrations with Alembic

Setting Up Alembic

Initialize Alembic in your project:

cd server
alembic init alembic

Alembic Configuration

Configure alembic.ini for your database:

[alembic]
script_location = alembic
prepend_sys_path = .
version_path_separator = os

# SQLite database URL (will be overridden by environment in Docker)
sqlalchemy.url = sqlite:////app/data/compliance.db

[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
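
The "overridden by environment" comment assumes a small hook in alembic/env.py; a minimal sketch that prefers the container's DATABASE_PATH when it is set:

# alembic/env.py (excerpt) - a minimal sketch
import os

from alembic import context

config = context.config

# Let the Docker environment win over the static alembic.ini URL
db_path = os.environ.get("DATABASE_PATH")
if db_path:
    config.set_main_option("sqlalchemy.url", f"sqlite:///{db_path}")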

Creating Migrations

Create a new migration:

# Auto-generate migration from models
alembic revision --autogenerate -m "add_users_table"

# Or create empty migration for manual changes
alembic revision -m "seed_initial_data"

Example migration file structure:

"""add_users_table

Revision ID: abc123def456
Revises:
Create Date: 2025-01-13 10:00:00.000000

"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = 'abc123def456'
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None

def upgrade() -> None:
    op.create_table(
        'users',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('email', sa.String(), nullable=False),
        sa.Column('hashed_password', sa.String(), nullable=False),
        sa.Column('created_at', sa.DateTime(), nullable=False),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_index(op.f('ix_users_email'), 'users', ['email'], unique=True)

def downgrade() -> None:
    op.drop_index(op.f('ix_users_email'), table_name='users')
    op.drop_table('users')
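
The empty seed_initial_data revision created above can then be filled in by hand; a sketch using op.bulk_insert, with hypothetical revision IDs and illustrative row values:

"""seed_initial_data"""
from datetime import datetime

import sqlalchemy as sa
from alembic import op

revision = 'def789abc012'        # hypothetical ID
down_revision = 'abc123def456'   # chains after the users table migration

def upgrade() -> None:
    # Lightweight table stub; no model imports needed inside a migration
    users = sa.table(
        'users',
        sa.column('email', sa.String),
        sa.column('hashed_password', sa.String),
        sa.column('created_at', sa.DateTime),
    )
    op.bulk_insert(users, [
        {'email': 'admin@example.com',
         'hashed_password': '<pre-hashed value>',
         'created_at': datetime.utcnow()},
    ])

def downgrade() -> None:
    op.execute("DELETE FROM users WHERE email = 'admin@example.com'")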

Running Migrations in Docker

Migrations run automatically during deployment:

# Run from the host; this executes inside the running server container
docker-compose -f docker-compose.prod.yml exec -T server alembic upgrade head

Part 3: GitHub Actions CI/CD Pipeline

Create .github/workflows/deploy.yml:

name: Deploy to Hetzner
on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Test SSH Connectivity
      shell: bash
      run: |
        set -euo pipefail

        HOST="${{ secrets.SERVER_IP }}"

        echo "Testing TCP/22 to $HOST..."
        if timeout 5 bash -c "echo > /dev/tcp/${HOST}/22" 2>/dev/null; then
          echo "✅ Port 22 is open"
        else
          echo "❌ Port 22 is closed or filtered"
          exit 1
        fi

        echo "Testing SSH auth..."
        keyfile="$(mktemp)"
        trap 'rm -f "$keyfile"' EXIT
        printf '%s\n' "${{ secrets.SSH_PRIVATE_KEY }}" > "$keyfile"
        chmod 600 "$keyfile"

        if timeout 10 ssh -i "$keyfile" -o BatchMode=yes \
          -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
          root@"$HOST" "exit"; then
          echo "✅ SSH authentication OK"
        else
          echo "❌ SSH authentication failed"
          exit 1
        fi

    - name: Deploy via SSH
      uses: appleboy/ssh-action@v1.0.3
      env:
        TOKEN: ${{ secrets.TOKEN }}
        JWT_SECRET: ${{ secrets.JWT_SECRET_KEY }}
        SOME_API_KEY: ${{ secrets.SOME_API_KEY }}
        SERVER_IP: ${{ secrets.SERVER_IP }}
        SERVER_HOSTNAME: ${{ secrets.SERVER_HOSTNAME }}
      with:
        host: ${{ secrets.SERVER_IP }}
        username: root
        key: ${{ secrets.SSH_PRIVATE_KEY }}
        envs: TOKEN,JWT_SECRET,SOME_API_KEY,SERVER_IP,SERVER_HOSTNAME
        script: |
          set -e

          echo "🚀 Production Deployment Starting"

          # Colors for output
          RED='\033[0;31m'
          GREEN='\033[0;32m'
          BLUE='\033[0;34m'
          NC='\033[0m'

          info() { echo -e "${BLUE}[INFO]${NC} $1"; }
          success() { echo -e "${GREEN}[✓]${NC} $1"; }
          error() { echo -e "${RED}[✗]${NC} $1"; exit 1; }

          # Install system prerequisites
          info "Checking system prerequisites..."
          if ! command -v docker >/dev/null 2>&1; then
            info "Installing Docker..."
            curl -fsSL https://get.docker.com | sh
            systemctl start docker
            systemctl enable docker
          fi

          if ! command -v docker-compose >/dev/null 2>&1; then
            info "Installing docker-compose..."
            curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" \
              -o /usr/local/bin/docker-compose
            chmod +x /usr/local/bin/docker-compose
            export PATH=/usr/local/bin:$PATH
          fi

          apt update -qq && apt install -y git curl jq >/dev/null 2>&1 || true
          success "System prerequisites ready"

          # Setup GitHub authentication
          info "Configuring GitHub authentication..."
          export GIT_TERMINAL_PROMPT=0
          git config --global credential.helper ""

          # Clone/update repository
          APP_DIR="/root/myapp"
          if [ ! -d "$APP_DIR" ]; then
            info "🚀 First deployment - cloning repository..."
            if ! git clone "https://${TOKEN}@github.com/yourorg/yourapp.git" "$APP_DIR"; then
              error "Failed to clone repository"
            fi
            cd "$APP_DIR"
            success "Repository cloned"
          else
            info "🔄 Updating existing deployment..."
            cd "$APP_DIR"
            git reset --hard HEAD
            git clean -fd
            git remote set-url origin "https://${TOKEN}@github.com/yourorg/yourapp.git"
            git pull
            success "Repository updated"
          fi

          cd "$APP_DIR"

          # Create environment file
          info "📝 Creating production environment configuration..."

          if [ -n "$SERVER_HOSTNAME" ]; then
            FRONTEND_URL="http://${SERVER_HOSTNAME}"
          else
            FRONTEND_URL="http://${SERVER_IP}"
          fi

          cat > .env <<EOF
          JWT_SECRET_KEY=${JWT_SECRET}
          SOME_API_KEY=${SOME_API_KEY}
          FRONTEND_URL=${FRONTEND_URL}
          VITE_API_URL=
          EOF
          success "Environment configured (FRONTEND_URL=${FRONTEND_URL})"

          # Build client application
          info "📦 Building client application..."
          if ! command -v node >/dev/null 2>&1; then
            info "Installing Node.js..."
            curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
            apt-get install -y nodejs
          fi

          cd "$APP_DIR/client"
          npm ci --silent
          npm run build
          success "Client built successfully"

          cd "$APP_DIR"

          # Deploy containers
          info "🐳 Deploying application containers..."
          docker-compose -f docker-compose.prod.yml down 2>/dev/null || true
          docker system prune -f >/dev/null 2>&1 || true

          docker-compose -f docker-compose.prod.yml build --no-cache
          docker-compose -f docker-compose.prod.yml up -d
          success "Containers started"

          # Wait for services to initialize
          info "⏳ Waiting for services to initialize (10 seconds)..."
          sleep 10

          # Run database migrations
          info "🗄️ Running database migrations..."

          # Create backup of existing database
          if [ -f "$APP_DIR/server/data/compliance.db" ]; then
            info "Backing up existing database..."
            cp "$APP_DIR/server/data/compliance.db" \
              "$APP_DIR/server/data/compliance.db.backup-$(date +%Y%m%d-%H%M%S)" || true
          fi

          docker-compose -f docker-compose.prod.yml exec -T server alembic upgrade head
          success "Database migrations completed"

          # Restart server to reload with migrated database
          info "♻️ Restarting server..."
          docker-compose -f docker-compose.prod.yml restart server
          success "Server restarted"

          # Wait for services to stabilize
          info "⏳ Waiting for services to stabilize (15 seconds)..."
          sleep 15

          # Health checks
          info "🩺 Running health checks..."

          # Backend health check
          for i in {1..6}; do
            if curl -s --max-time 10 http://localhost/api/health >/dev/null 2>&1; then
              success "✅ Backend healthy"
              break
            elif [ $i -eq 6 ]; then
              error "❌ Backend health check failed"
            else
              info "Backend not ready, retrying... ($i/6)"
              sleep 10
            fi
          done

          # Frontend health check
          for i in {1..10}; do
            if curl -s --max-time 10 http://localhost/ >/dev/null 2>&1; then
              success "✅ Frontend healthy"
              break
            elif [ $i -eq 10 ]; then
              error "❌ Frontend health check failed"
            else
              info "Frontend not ready, retrying... ($i/10)"
              sleep 10
            fi
          done

          success "🎉 Deployment completed successfully!"
          echo ""
          echo "🌐 Application URLs:"
          echo "   Frontend:  http://$SERVER_IP"
          echo "   Backend:   http://$SERVER_IP/api/"
          echo "   API Docs:  http://$SERVER_IP/api/docs"
          echo "   Health:    http://$SERVER_IP/api/health"

Part 4: GitHub Secrets Configuration

Configure these secrets in your GitHub repository (Settings → Secrets and variables → Actions):

Secret Name       Description                                 Example
SERVER_IP         Hetzner server IP address                   203.0.113.10
SERVER_HOSTNAME   Optional domain name                        app.example.com
SSH_PRIVATE_KEY   SSH private key for deployment              -----BEGIN OPENSSH PRIVATE KEY-----...
TOKEN             GitHub personal access token (see below)    ghp_xxxxxxxxxxxx
JWT_SECRET_KEY    JWT signing secret                          Random 32+ character string
SOME_API_KEY      API keys for external services              your-api-key-here
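
Secrets can be entered through the UI, or scripted with the GitHub CLI; the values below are placeholders:

# Set repository secrets from the command line
gh secret set SERVER_IP --body "203.0.113.10"
gh secret set JWT_SECRET_KEY --body "$(openssl rand -hex 32)"
gh secret set SSH_PRIVATE_KEY < ~/.ssh/deploy_key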

Creating a GitHub Personal Access Token

Create a fine-grained token with minimal required permissions:

  1. Go to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens
  2. Click Generate new token
  3. Token name: Deploy to Hetzner - MyApp
  4. Expiration: Set to 90 days (set calendar reminder to rotate)
  5. Repository access: Select Only select repositories
    • Choose only the repository you’re deploying
  6. Under Permissions → Repository permissions:
    • Contents: Read-only (required for cloning)
    • Metadata: Read-only (automatically selected)
  7. Click Generate token
  8. Copy the token immediately - you won’t see it again
  9. Add to GitHub Secrets as TOKEN

Important Security Notes:

  • Fine-grained tokens are more secure than classic tokens - they’re scoped to specific repositories
  • This token only grants read access to your selected repository
  • Never commit tokens to your repository
  • Rotate tokens regularly (every 90 days recommended)
  • If the token expires, your deployments will fail until you create and configure a new one

Generating SSH Keys

On your local machine:

# Generate SSH key pair
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/deploy_key

# Copy public key to server
ssh-copy-id -i ~/.ssh/deploy_key.pub root@YOUR_SERVER_IP

# Copy private key content to GitHub secret
cat ~/.ssh/deploy_key
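
Once key-based login works, consider disabling password authentication entirely; a sketch for Ubuntu (note that cloud images often carry drop-in overrides under /etc/ssh/sshd_config.d/ that should be checked too):

# On the server: turn off password logins, then restart sshd
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh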

Part 5: Deployment Flow

Workflow Execution Steps

  1. SSH Connectivity Test

    • Verifies port 22 is accessible
    • Validates SSH authentication
    • Fails fast if server unreachable
  2. System Prerequisites

    • Installs Docker if not present
    • Installs Docker Compose
    • Installs git, curl, jq
  3. Repository Management

    • Clones on first deployment
    • Pulls latest changes on subsequent deployments
    • Uses GitHub token for authentication
  4. Environment Configuration

    • Creates .env file with secrets
    • Configures frontend URL dynamically
    • Sets up API keys and credentials
  5. Frontend Build

    • Installs Node.js if needed
    • Runs npm ci for clean install
    • Builds production bundle with Vite
  6. Container Deployment

    • Stops existing containers
    • Builds fresh images (no cache)
    • Starts containers in detached mode
  7. Database Migrations

    • Backs up existing database
    • Runs Alembic migrations
    • Restarts server with new schema
  8. Health Checks

    • Verifies backend responds at /api/health
    • Verifies frontend serves content
    • Retries with a fixed 10-second delay before failing

Zero-Downtime Considerations

For true zero-downtime deployments, enhance with:

# Use blue-green deployment pattern
- name: Blue-Green Deployment
  run: |
    # Start new version on different port
    docker-compose -f docker-compose.new.yml up -d

    # Wait for health checks
    sleep 30

    # Switch NGINX upstream
    # Update NGINX config to point to new containers

    # Stop old version
    docker-compose -f docker-compose.old.yml down
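
The upstream switch itself is usually just a rewrite of the upstream block in the NGINX config followed by a zero-downtime reload:

# Apply the new upstream without dropping connections
docker-compose -f docker-compose.prod.yml exec nginx nginx -s reload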

Part 6: Monitoring and Troubleshooting

View Deployment Logs

In GitHub Actions:

  • Navigate to Actions tab
  • Click on latest workflow run
  • Expand deployment steps to view logs

Server-Side Debugging

SSH into your server:

# View container logs
docker-compose -f docker-compose.prod.yml logs -f

# Check container status
docker-compose -f docker-compose.prod.yml ps

# Inspect specific service
docker-compose -f docker-compose.prod.yml logs server

# Check database migrations
docker-compose -f docker-compose.prod.yml exec server alembic current

# Verify environment variables
docker-compose -f docker-compose.prod.yml exec server env

# Test health endpoints
curl http://localhost/api/health
curl http://localhost/health

Common Issues

Issue: SSH Authentication Failed

# Verify SSH key permissions
chmod 600 ~/.ssh/deploy_key

# Test SSH connection manually
ssh -i ~/.ssh/deploy_key root@YOUR_SERVER_IP

Issue: Docker Compose Not Found

# Ensure PATH includes /usr/local/bin
export PATH=/usr/local/bin:$PATH
echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bashrc

Issue: Database Migration Failed

# Check migration history
docker-compose exec server alembic history

# Rollback one version
docker-compose exec server alembic downgrade -1

# Restore from backup
cp server/data/compliance.db.backup-20250113-100000 server/data/compliance.db

Part 7: Best Practices

Security

  1. Never commit secrets - Use GitHub Secrets or environment variables
  2. Rotate credentials regularly - Update SSH keys and API tokens periodically
  3. Use SSH keys, not passwords - Disable password authentication
  4. Implement HTTPS - Add SSL/TLS certificates with Let’s Encrypt (see the sketch after this list)
  5. Restrict firewall rules - Only allow necessary ports (22, 80, 443)
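
A minimal Let’s Encrypt sketch using certbot’s standalone mode (assumes DNS already points at the server; NGINX must release port 80 for the challenge, or you can switch to webroot mode instead):

# Issue a certificate for a hypothetical hostname
apt install -y certbot
docker-compose -f docker-compose.prod.yml stop nginx
certbot certonly --standalone -d app.example.com
docker-compose -f docker-compose.prod.yml start nginx
# Certificates land under /etc/letsencrypt/live/app.example.com/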

Performance

  1. Build caching - Leverage Docker layer caching
  2. Asset optimization - Minify and compress frontend assets
  3. Database indexing - Add indexes for frequently queried columns
  4. Connection pooling - Use SQLAlchemy connection pools (see the sketch after this list)
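
For SQLite the default pool is usually fine, but pool sizing matters once you move to a client/server database; a hedged SQLAlchemy sketch with a hypothetical connection URL:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:pass@db/app",  # hypothetical URL
    pool_size=5,         # connections kept open in the pool
    max_overflow=10,     # extra connections allowed under burst load
    pool_pre_ping=True,  # transparently replace stale connections
)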

Reliability

  1. Health checks - Implement comprehensive health endpoints
  2. Automatic restarts - Use restart: unless-stopped in Docker Compose
  3. Database backups - Create backups before migrations
  4. Rollback strategy - Keep previous Docker images for quick rollback (see the sketch after this list)
  5. Monitoring - Set up alerts for deployment failures
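
One lightweight rollback approach is to tag the running image before each rebuild; the image names below follow Compose’s default project-service naming and may differ in your setup:

# Before deploying: keep the current image under a rollback tag
docker tag myapp-server:latest myapp-server:prev || true

# To roll back: restore the tag and restart the service
docker tag myapp-server:prev myapp-server:latest
docker-compose -f docker-compose.prod.yml up -d server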

Code Quality

# Run tests before deployment
- name: Run Tests
  run: |
    cd server
    pytest tests/

- name: Lint Code
  run: |
    cd server
    ruff check .
    black --check .

Conclusion

You now have a complete CI/CD pipeline that:

  • ✅ Automatically deploys on push to main
  • ✅ Containerizes your application with Docker
  • ✅ Manages database schema with Alembic migrations
  • ✅ Serves your application through NGINX reverse proxy
  • ✅ Runs health checks to verify deployment success
  • ✅ Creates database backups before migrations
  • ✅ Handles environment configuration securely

This pipeline provides a solid foundation for production deployments. As your application grows, you can extend it with additional features like:

  • Multi-environment deployments (staging, production)
  • Automated testing in the pipeline
  • Container registry for image storage
  • Kubernetes orchestration for scaling
  • Monitoring and alerting integrations
  • Blue-green or canary deployments

The combination of GitHub Actions, Docker, and Hetzner Cloud provides a cost-effective and powerful deployment solution for modern web applications.


About InFocus Data: We help organizations optimize their data infrastructure, implement DevOps best practices, and build scalable cloud solutions. Contact us for consulting on CI/CD pipelines, cloud migrations, and database optimization.