dsizomin/homelab-iac


Homelab Infrastructure as Code

This repository contains the complete Infrastructure as Code (IaC) configuration for my homelab, built with Terraform and Terragrunt. It manages everything from VM provisioning on Proxmox to containerized application deployments via Portainer.

🏗️ Architecture Overview

The homelab is organized into three main layers:

  1. Infrastructure Layer (live/infra/) - Proxmox VMs and base infrastructure
  2. Container Layer (live/docker/) - Docker networks, images, and services
  3. Application Layer (live/portainer/) - Self-hosted applications running in Docker Swarm

🛠️ Technology Stack

  • Infrastructure Orchestration: Terragrunt
  • Infrastructure Provisioning: Terraform
  • Virtualization Platform: Proxmox VE
  • Container Management: Portainer (Docker Swarm mode)
  • Secrets Management: SOPS with age encryption
  • State Storage: S3-compatible backend

⚙️ Configuration Management

The homelab uses centralized configuration modules to ensure consistency across all services:

DNS/FQDN Configuration (live/config/dns/)

All service domain names are centrally managed in the DNS configuration module. This provides:

  • Single Source of Truth: All FQDNs are defined in one place (live/config/dns/terragrunt.hcl)
  • Consistency: Services reference domain names via variables instead of hardcoded literals
  • Easy Updates: Change domain names once, propagate everywhere automatically
  • Type Safety: Terraform validates FQDN usage across all modules

Services access DNS configuration through the dns_config variable:

```hcl
dns_config = {
  zone     = "denyssizomin.com"
  services = {
    auth      = "auth.denyssizomin.com"
    pulse     = "pulse.denyssizomin.com"
    paperless = "paperless.denyssizomin.com"
    gist      = "gist.denyssizomin.com"
    # ... other services
  }
  email = "[email protected]"
}
```

Example: Caddyfile Templating

The Caddyfile for Caddy reverse proxy is dynamically generated using the DNS config:

```hcl
# live/docker/images/caddy/terragrunt.hcl
dependency "dns_config" {
  config_path = "../../../config/dns"
}

locals {
  # Bind the dependency's outputs so they can be referenced below
  dns_config = dependency.dns_config.outputs

  caddyfile_content = templatefile("${get_terragrunt_dir()}/Caddyfile.tpl", {
    email          = local.dns_config.email
    auth_fqdn      = local.dns_config.services.auth
    paperless_fqdn = local.dns_config.services.paperless
    # ... other services
  })
}
```

This ensures all service domains in the reverse proxy configuration stay synchronized with the central DNS config.
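For illustration, the referenced Caddyfile.tpl might look roughly like the fragment below. The upstream container names and ports are assumptions for the sketch, not values taken from the repository:

```caddyfile
{
    email ${email}
}

${auth_fqdn} {
    # Upstream name and port are hypothetical
    reverse_proxy authentik-server:9000
}

${paperless_fqdn} {
    reverse_proxy paperless-webserver:8000
}
```

Each `${...}` placeholder is substituted by Terraform's `templatefile` function, so renaming a service in the central DNS config automatically updates the corresponding virtual host.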

OIDC Configuration (live/config/oidc/)

Centralized OIDC provider client IDs for Authentik SSO integration.

AdGuard DNS Configuration (live/adguard/)

The homelab includes redundant AdGuard Home DNS servers for network-wide ad blocking and DNS management:

Architecture

  • Primary DNS Server (live/adguard/primary/) - Main AdGuard Home instance
  • Secondary DNS Server (live/adguard/secondary/) - Redundant backup instance
  • Both instances are managed identically via Terraform using the same module (modules/adguard/)

DNS Filtering

Three curated blocklists are automatically configured on both servers:

  1. AdGuard DNS filter - AdGuard's official filter list
  2. AdAway Default Blocklist - Community-maintained mobile ad blocking
  3. Hagezi Pro++ - Comprehensive protection against ads, trackers, and malware

Upstream DNS Configuration

AdGuard uses Quad9 as the upstream DNS provider with multiple secure protocols:

  • DNSCrypt (sdns://...) - Encrypted DNS with authentication
  • DNS-over-HTTPS (https://dns11.quad9.net/dns-query)
  • DNS-over-TLS (tls://dns11.quad9.net)

Bootstrap DNS servers (for resolving secure DNS endpoints):

  • 9.9.9.9 and 149.112.112.11 (IPv4)
  • 2620:fe::11 and 2620:fe::fe:11 (IPv6)
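As a sketch, the equivalent settings in AdGuard Home's own configuration file would look roughly like this (shown only to illustrate the layout; in this repo the values are applied via Terraform rather than by editing AdGuardHome.yaml):

```yaml
dns:
  upstream_dns:
    - https://dns11.quad9.net/dns-query
    - tls://dns11.quad9.net
  bootstrap_dns:
    - 9.9.9.9
    - 149.112.112.11
    - 2620:fe::11
    - 2620:fe::fe:11
```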

Local Network Integration

  • Reverse DNS (PTR): Configured to use local router (192.168.1.1:53) for reverse lookups
  • Wildcard DNS Rewrite: All *.denyssizomin.com domains automatically resolve to the reverse proxy IP
  • Centralized Configuration: Domain zones and service FQDNs pulled from the centralized DNS config module

This setup ensures:

  • Network-wide ad and tracker blocking
  • Encrypted DNS queries to upstream providers
  • High availability with redundant DNS servers
  • Automatic service discovery via wildcard DNS rewriting

Vaultwarden Password Manager (live/portainer/vaultwarden/)

The homelab includes Vaultwarden, a lightweight, unofficial implementation of the Bitwarden server API, configured for SSO-only authentication:

SSO-Only Mode

Vaultwarden is configured to operate exclusively in SSO mode with Authentik OIDC integration:

  • SSO-Only Authentication: Users can only log in through Authentik SSO, eliminating traditional username/password authentication
  • OIDC Integration: Seamlessly integrated with Authentik identity provider using OpenID Connect protocol
  • Enhanced Security: Centralized authentication through Authentik provides:
    • Single sign-on across all homelab services
    • Multi-factor authentication (MFA) enforcement
    • Centralized user management and access control
    • Session management and security policies

Configuration

The Vaultwarden deployment includes:

  • Docker Image: vaultwarden/server:testing-alpine - lightweight Alpine Linux-based container
  • OIDC Scopes: openid email profile offline_access for complete user profile access
  • Secret Management: OIDC client secret stored as Docker secret and rotated via Terraform lifecycle management
  • Network Integration: Connected to the reverse proxy network for automatic TLS termination via Caddy
  • Data Persistence: Vault data stored in /srv/data/vaultwarden for backup and recovery
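A sketch of how these settings might be wired through environment variables, using the variable names from Vaultwarden's experimental SSO support (the authority URL and client ID are placeholders, not values from the repository):

```yaml
environment:
  SSO_ENABLED: "true"
  SSO_ONLY: "true"  # disables local username/password login
  SSO_AUTHORITY: "https://auth.denyssizomin.com/application/o/vaultwarden/"  # hypothetical Authentik issuer URL
  SSO_CLIENT_ID: "vaultwarden"  # placeholder
  SSO_SCOPES: "openid email profile offline_access"
```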

Security Features

  • No Password Database: SSO-only mode means no local password database for authentication
  • Centralized Access Control: All access decisions managed through Authentik
  • Encrypted Secrets: OIDC client secrets encrypted and managed via Terraform
  • TLS Termination: All traffic encrypted via Caddy reverse proxy with automatic certificate renewal
  • Domain: Accessible at vault.denyssizomin.com with automatic DNS resolution

This configuration provides a secure password management setup with minimal operational overhead: all authentication flows through the centralized identity provider.

Automated Backup System (live/portainer/autorestic/)

The homelab includes an automated backup system using Autorestic, a wrapper around restic that provides declarative backup configuration and scheduling:

Architecture

  • Backup Tool: Autorestic - Declarative backup configuration wrapper for restic
  • Storage Backend: Cloudflare R2 (S3-compatible object storage)
  • Scheduling: Integrated with swarm-cronjob for automated hourly backups
  • Applications Backed Up:
    • Paperless-ngx: Document exports triggered before backup (hourly via cron)
    • Vaultwarden: Vault data backups triggered before backup (hourly via cron)

Backup Configuration

Retention Policy (applied globally to all backups):

  • Keep Last: 5 snapshots (always maintain at least 5 most recent backups)
  • Hourly: 3 snapshots (last 3 hourly backups)
  • Daily: 4 snapshots (last 4 daily backups)
  • Weekly: 1 snapshot (last weekly backup)
  • Monthly: 12 snapshots (last 12 monthly backups)
  • Yearly: 7 snapshots (last 7 yearly backups)
  • Keep Within: 14 days (all snapshots from last 14 days)

Backup Locations:

  • /srv/data/paperless - Paperless document exports
  • /srv/data/vaultwarden - Vaultwarden vault data

Backup Flow:

  1. Pre-backup Jobs (every hour at minute 0):
    • Paperless: Triggers document export via document_exporter command
    • Vaultwarden: Triggers vault backup via /vaultwarden backup command
  2. Upload Job (every hour at minute 30):
    • Autorestic reads configuration and encrypted credentials
    • Backs up all locations to Cloudflare R2
    • Applies retention policy and prunes old snapshots
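Expressed as an `.autorestic.yml` fragment, the retention policy and locations above would look roughly like this (the backend name and R2 path are placeholders, not copied from the repository):

```yaml
global:
  forget:
    keep-last: 5
    keep-hourly: 3
    keep-daily: 4
    keep-weekly: 1
    keep-monthly: 12
    keep-yearly: 7
    keep-within: 14d

locations:
  paperless:
    from: /srv/data/paperless
    to: r2
  vaultwarden:
    from: /srv/data/vaultwarden
    to: r2

backends:
  r2:
    type: s3
    path: s3.<account-id>.r2.cloudflarestorage.com/homelab-backups  # placeholder
```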

Security

  • Encrypted Credentials: R2 access credentials stored as Docker secrets, encrypted with SOPS
  • Read-Only Mounts: Backup directories mounted read-only in the upload container
  • Network Isolation: Backup jobs run in isolated cronjob network
  • Immutable Storage: R2 provides versioned, immutable object storage

This automated backup system ensures critical homelab data is regularly backed up to cloud storage with a comprehensive retention policy, all managed declaratively through Terraform.

Cron Job Management (live/portainer/cronjob/)

The homelab uses a centralized cron job management system for scheduled tasks in Docker Swarm:

Swarm Cronjob Scheduler

  • Image: crazymax/swarm-cronjob
  • Purpose: Enables cron-like scheduled job execution in Docker Swarm mode
  • Deployment: Runs on manager node with access to Docker socket
  • Network: Isolated cronjob_network for all scheduled jobs

How It Works

The swarm-cronjob service monitors Docker services with special labels and creates one-time tasks based on their cron schedules:

```yaml
deploy:
  replicas: 0  # Service stays dormant
  labels:
    - "swarm.cronjob.enable=true"
    - "swarm.cronjob.schedule=0 */1 * * *"  # Cron format
    - "swarm.cronjob.skip-running=true"     # Skip if previous run still active
```

Scheduled Jobs

Current cron jobs managed by swarm-cronjob:

  1. Paperless Document Export (hourly at minute 0)

    • Executes document exporter inside Paperless container
    • Prepares data for backup
  2. Vaultwarden Backup (hourly at minute 0)

    • Triggers vault backup inside Vaultwarden container
    • Prepares vault data for backup
  3. Autorestic Upload (hourly at minute 30)

    • Uploads all backup data to Cloudflare R2
    • Applies retention policies
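Putting the pieces together, the hourly upload job could be declared in a Compose file along these lines. The image tag, service name, and command are assumptions for the sketch:

```yaml
services:
  backup-upload:
    image: cupcakearmy/autorestic:latest  # image tag is an assumption
    command: autorestic backup -a --ci    # back up all configured locations
    networks:
      - cronjob_network
    deploy:
      replicas: 0  # dormant until triggered by swarm-cronjob
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=30 * * * *"  # hourly at minute 30
        - "swarm.cronjob.skip-running=true"
```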

Benefits

  • Declarative Scheduling: Cron schedules defined as Docker labels in Compose files
  • Swarm-Native: Works with Docker Swarm's orchestration and placement constraints
  • Skip Logic: Prevents overlapping job executions with skip-running flag
  • Zero Replicas: Jobs don't consume resources until executed
  • Centralized Management: All scheduled tasks visible and manageable through Portainer

This approach provides reliable, container-native scheduled task execution without requiring external cron daemons or additional infrastructure.

System Monitoring & Notifications (live/portainer/pulse/)

The homelab includes Pulse for system monitoring with integrated notification capabilities via Apprise:

Pulse Monitoring

  • Image: rcourtman/pulse:latest - Modern system monitoring dashboard
  • Authentication: Integrated with Authentik SSO via OIDC
  • Features:
    • Real-time system metrics (CPU, memory, disk, network)
    • Service health monitoring
    • Alert configuration and management
    • Historical data visualization
  • Domain: Accessible at pulse.denyssizomin.com

Apprise Integration

Pulse is integrated with Apprise API for flexible notification delivery:

  • Image: lscr.io/linuxserver/apprise-api:latest
  • Purpose: Universal notification gateway supporting 90+ services
  • Network Architecture:
    • Internal network (apprise_network) connecting Pulse to Apprise
    • Separate proxy network for external access if needed
  • Supported Notification Channels:
    • Email (SMTP, SendGrid, Mailgun, etc.)
    • Messaging (Slack, Discord, Telegram, Matrix, etc.)
    • Push Notifications (Pushover, Pushbullet, Gotify, etc.)
    • SMS (Twilio, AWS SNS, etc.)
    • And 80+ other services

Configuration

  • Apprise Config: Stored in persistent volume (/config)
  • Plugins: Custom notification plugins can be added via /plugins volume
  • Attachments: Support for sending file attachments via /attachments volume
  • Timezone: Europe/Amsterdam (consistent with other services)
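Apprise configuration is essentially a list of notification URLs; the persisted config might contain entries like the following (all URLs are placeholders, one target per line, using Apprise's standard URL schemes):

```text
# placeholder notification targets
mailto://user:[email protected]
discord://webhook_id/webhook_token
tgram://bot_token/chat_id
```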

Use Cases

  • System Alerts: CPU/memory/disk threshold warnings
  • Service Down Alerts: Notification when monitored services become unavailable
  • Backup Notifications: Alert on backup success/failure (future integration)
  • Security Events: Authentication failures, suspicious activity alerts

This monitoring setup provides comprehensive visibility into homelab health with flexible, multi-channel alerting capabilities, all accessible through a modern SSO-protected web interface.

Grafana Alloy Observability (live/portainer/alloy/)

The homelab uses Grafana Alloy as the unified observability collector, providing comprehensive metrics and log collection with Grafana Cloud integration:

Architecture

  • Collector: Grafana Alloy - OpenTelemetry-compatible observability collector
  • Configuration Management: Grafana Fleet Management for remote pipeline configuration
  • Metrics Backend: Grafana Cloud Prometheus
  • Logs Backend: Grafana Cloud Loki
  • Deployment: Docker Swarm service with Docker socket access for container discovery

Fleet Management Pipelines

Alloy uses Grafana Fleet Management for centralized, remote configuration of collection pipelines:

  1. Self Monitoring (self.alloy)

    • Exports Alloy's own health metrics (CPU, memory, component status)
    • Collects Alloy container logs
    • Job label: integrations/alloy (compatible with Grafana Cloud Alloy Health dashboard)
  2. Docker Swarm Logging (logging_docker.alloy)

    • Discovers all Docker Swarm services automatically
    • Collects container logs from all services (except Alloy itself to avoid duplication)
    • Drops logs older than 1 minute to prevent backlog issues
    • Labels logs with service name and hostname
  3. Traefik Metrics (traefik_prom.alloy)

    • Discovers Traefik reverse proxy service
    • Scrapes Prometheus metrics from Traefik's metrics endpoint
    • Relabels HTTP status codes for grouping (e.g., 200 → 2**)
    • Provides request/response metrics, latency, and error rates
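As an illustration of what such a pipeline looks like in Alloy's configuration syntax, here is a simplified Docker log pipeline. The component labels and the Loki endpoint URL are placeholders, and real pipelines would add relabeling and the age-based drop rule described above:

```alloy
// Discover containers via the Docker socket
discovery.docker "swarm" {
  host = "unix:///var/run/docker.sock"
}

// Tail logs from discovered containers and forward them to Loki
loki.source.docker "services" {
  host       = "unix:///var/run/docker.sock"
  targets    = discovery.docker.swarm.targets
  forward_to = [loki.write.cloud.receiver]
}

loki.write "cloud" {
  endpoint {
    url = "https://logs-prod-example.grafana.net/loki/api/v1/push"  // placeholder
  }
}
```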

Configuration

  • Base Config (config.alloy):

    • Live debugging enabled for real-time troubleshooting
    • Remote configuration from Grafana Fleet Management
    • Platform attribute: docker.swarm for pipeline matching
  • Pipeline Matchers:

    • self.alloy: Matches all collectors (collector.os=~".*")
    • logging_docker.alloy: Matches Docker platforms (platform=~"^docker.*")
    • traefik_prom.alloy: Matches Docker platforms (platform=~"^docker.*")

Network Integration

  • Service Network: Access to internal services for metrics scraping
  • Proxy Network: Traefik integration for web UI access at alloy.denyssizomin.com
  • Docker Socket: Read-only access for container discovery

Security

  • Grafana API Key: Stored as Docker secret, encrypted with SOPS
  • Read-Only Docker Socket: Container discovery without modification permissions
  • Terraform Lifecycle Management: Secrets rotated on configuration changes

Collected Data

| Type    | Source                    | Destination              |
|---------|---------------------------|--------------------------|
| Metrics | Alloy health              | Grafana Cloud Prometheus |
| Metrics | Traefik proxy             | Grafana Cloud Prometheus |
| Logs    | All Docker Swarm services | Grafana Cloud Loki       |
| Logs    | Alloy container           | Grafana Cloud Loki       |

This observability stack provides centralized monitoring and logging for the entire homelab infrastructure, with Grafana Cloud handling storage, visualization, and alerting.

📁 Repository Structure

.
├── live/                          # Live environment configurations
│   ├── root.hcl                   # Root Terragrunt config with S3 backend
│   ├── config/                    # Configuration management
│   │   ├── dns/                   # Centralized DNS/FQDN configuration
│   │   └── oidc/                  # OIDC provider configurations
│   ├── adguard/                   # AdGuard DNS servers
│   │   ├── primary/               # Primary AdGuard Home instance
│   │   └── secondary/             # Secondary AdGuard Home instance
│   ├── infra/                     # Infrastructure layer
│   │   ├── providers.hcl          # Proxmox provider configuration
│   │   └── vms/                   # Virtual machine definitions
│   │       ├── docker-apps/       # VM for Docker applications
│   │       ├── homeassistant/     # Home Assistant VM
│   │       └── workbench/         # Development workbench VM
│   ├── docker/                    # Docker infrastructure
│   │   ├── networks/              # Docker networks
│   │   │   └── proxy/             # Reverse proxy network
│   │   ├── images/                # Custom Docker images
│   │   │   └── caddy/             # Custom Caddy image
│   │   └── services/              # Docker services
│   │       └── portainer/         # Portainer service
│   └── portainer/                 # Application deployments
│       ├── providers.hcl          # Portainer provider configuration
│       ├── admin/                 # Portainer admin settings
│       ├── settings/              # Portainer settings
│       ├── authentik/             # SSO & Identity Provider
│       ├── autorestic/            # Automated backup system
│       ├── caddy/                 # Reverse proxy & TLS termination
│       ├── cronjob/               # Cron job management (swarm-cronjob)
│       ├── ddns/                  # Dynamic DNS updater
│       ├── miniserve/             # Simple file server
│       ├── opengist/              # Code snippet sharing
│       ├── paperless/             # Document management system
│       ├── pulse/                 # System monitoring with Apprise
│       └── vaultwarden/           # Password manager
├── modules/                       # Reusable Terraform modules
│   ├── authentik/                 # Authentik configuration modules
│   │   └── oidc_provider/         # OIDC provider module
│   ├── config/                    # Configuration modules
│   │   ├── dns/                   # DNS configuration module
│   │   └── oidc/                  # OIDC configuration
│   ├── docker/                    # Docker resource modules
│   ├── infra/                     # Infrastructure modules
│   │   ├── cloud-init/            # Cloud-init configuration
│   │   └── vms/                   # VM templates
│   │       ├── debian-vm/         # Debian VM module
│   │       └── hass-vm/           # Home Assistant VM module
│   └── portainer/                 # Application stack modules
│       ├── admin/                 # Admin configuration
│       ├── authentik/             # Authentik stack
│       ├── autorestic/            # Automated backup module
│       ├── caddy/                 # Caddy reverse proxy
│       ├── cronjob/               # Cron job management module
│       ├── ddns/                  # DDNS client
│       ├── miniserve/             # File server
│       ├── opengist/              # Gist platform
│       ├── paperless/             # Document management
│       ├── pulse/                 # Monitoring with Apprise
│       ├── settings/              # Portainer settings
│       └── vaultwarden/           # Password manager module
├── .sops.yaml                     # SOPS encryption configuration
└── sops.env                       # Encrypted environment variables

🚀 Getting Started

Prerequisites

Ensure you have the following tools installed:

  • Terraform (>= 1.6)
  • Terragrunt (>= 0.50)
  • SOPS (for secrets management)
  • age (for SOPS encryption)
  • SSH key pair for Proxmox access (~/.ssh/homelab)

Environment Setup

  1. Clone the repository:

    git clone <repository-url>
    cd homelab-iac
  2. Configure secrets:

    Create/update sops.env with required credentials:

    # Decrypt (if exists)
    sops sops.env
    
    # Add required variables:
    # - PROXMOX_ENDPOINT
    # - PROXMOX_API_TOKEN
    # - PORTAINER_API_KEY
    # - S3 backend credentials (if using)
  3. Source environment variables:

    source <(sops -d sops.env)
  4. Initialize the infrastructure:

    # From the project root
    cd live/infra/vms/docker-apps
    terragrunt init

🔧 Usage

Managing Infrastructure

Deploy a specific module:

cd live/portainer/authentik
terragrunt apply

Deploy all modules in a directory:

cd live/portainer
terragrunt run-all apply

Plan changes before applying:

terragrunt plan

Destroy resources:

terragrunt destroy

Working with SOPS

Encrypt a new file:

sops -e file.yaml > encrypted.yaml

Edit encrypted file:

sops file.yaml

Decrypt and view:

sops -d file.yaml

Dependency Graph

The infrastructure follows this deployment order:

  1. Proxmox VMs (live/infra/vms/*)
  2. Docker Infrastructure (live/docker/*)
  3. Portainer Service (live/docker/services/portainer)
  4. Application Stacks (live/portainer/*)

Terragrunt automatically handles dependencies between modules using dependency blocks.
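A typical dependency wiring looks like this (the paths and output names here are illustrative, not copied from the repository):

```hcl
dependency "proxy_network" {
  config_path = "../../docker/networks/proxy"

  # Lets `terragrunt plan` run before the dependency has been applied
  mock_outputs = {
    network_name = "proxy"
  }
}

inputs = {
  network = dependency.proxy_network.outputs.network_name
}
```

With blocks like these in place, `terragrunt run-all apply` topologically sorts the modules and applies them in dependency order.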

📦 Deployed Applications

| Application              | Description                    | Module Path             |
|--------------------------|--------------------------------|-------------------------|
| AdGuard Home (Primary)   | Network-wide DNS & ad blocking | adguard/primary         |
| AdGuard Home (Secondary) | Redundant DNS server           | adguard/secondary       |
| Authentik                | Identity provider & SSO        | portainer/authentik     |
| Caddy                    | Reverse proxy & TLS            | portainer/caddy         |
| Vaultwarden              | Password manager (SSO-only)    | portainer/vaultwarden   |
| Paperless-ngx            | Document management            | portainer/paperless     |
| OpenGist                 | Code snippet sharing           | portainer/opengist      |
| Miniserve                | Simple file server             | portainer/miniserve     |
| Pulse                    | System monitoring              | portainer/pulse         |
| Grafana Alloy            | Observability collector        | portainer/alloy         |
| DDNS                     | Dynamic DNS client             | portainer/ddns          |
| Home Assistant           | Home automation                | infra/vms/homeassistant |

🔐 Security

  • Secrets Management: All sensitive data is encrypted using SOPS with age encryption
  • API Keys: Stored in encrypted sops.env and passed via environment variables
  • SSH Keys: Used for Proxmox authentication (~/.ssh/homelab)
  • State Backend: Terraform state is stored remotely in S3-compatible storage
  • TLS: Caddy handles automatic certificate provisioning and renewal

Age Key Management

Your age public key is configured in .sops.yaml. Keep your private age key secure:

# Default location
~/.config/sops/age/keys.txt
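For reference, a minimal .sops.yaml mapping the env file to an age recipient could look like this (the recipient string is a placeholder for your own public key):

```yaml
creation_rules:
  - path_regex: sops\.env$
    age: age1examplepublickeyreplaceme  # placeholder recipient
```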

🏃 Continuous Deployment

This setup is designed for GitOps-style deployments:

  1. Make changes to configuration files
  2. Commit and push to version control
  3. Run terragrunt apply in the relevant directory
  4. Changes are automatically propagated to the infrastructure

📝 License

This project is provided as-is for educational and personal use.
