This repository contains the complete Infrastructure as Code (IaC) configuration for my homelab, built with Terraform and Terragrunt. It manages everything from VM provisioning on Proxmox to containerized application deployments via Portainer.
The homelab is organized into three main layers:
- Infrastructure Layer (`live/infra/`) - Proxmox VMs and base infrastructure
- Container Layer (`live/docker/`) - Docker networks, images, and services
- Application Layer (`live/portainer/`) - Self-hosted applications running in Docker Swarm
Core technology stack:
- Infrastructure Orchestration: Terragrunt
- Infrastructure Provisioning: Terraform
- Virtualization Platform: Proxmox VE
- Container Management: Portainer (Docker Swarm mode)
- Secrets Management: SOPS with age encryption
- State Storage: S3-compatible backend (see the sketch below)
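The backend wiring lives in `live/root.hcl` (see the repository layout below). A minimal sketch of what such a Terragrunt `remote_state` block looks like — the bucket name and settings here are placeholders, not this repo's actual values:

```hcl
# live/root.hcl — sketch of an S3-compatible remote state backend
remote_state {
  backend = "s3"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket = "homelab-tfstate" # placeholder bucket name
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "auto"
    # The S3-compatible endpoint is configured here as well (syntax varies by Terraform version)

    # Flags commonly needed for non-AWS, S3-compatible endpoints
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
  }
}
```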
Key Terraform providers:
- `bpg/proxmox` - Proxmox infrastructure management
- `portainer/portainer` - Container stack deployment
- `kreuzwerker/docker` - Docker resources
- `goauthentik/authentik` - SSO/Identity management
- `gmichels/adguard` - AdGuard Home DNS management
The homelab uses centralized configuration modules to ensure consistency across all services:
All service domain names are centrally managed in the DNS configuration module. This provides:
- Single Source of Truth: All FQDNs are defined in one place (`live/config/dns/terragrunt.hcl`)
- Consistency: Services reference domain names via variables instead of hardcoded literals
- Easy Updates: Change domain names once, propagate everywhere automatically
- Type Safety: Terraform's type checking validates the structure of the DNS config each module consumes
Services access DNS configuration through the `dns_config` variable:
```hcl
dns_config = {
  zone = "denyssizomin.com"
  services = {
    auth      = "auth.denyssizomin.com"
    pulse     = "pulse.denyssizomin.com"
    paperless = "paperless.denyssizomin.com"
    gist      = "gist.denyssizomin.com"
    # ... other services
  }
  email = "[email protected]"
}
```
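On the consuming side, a module can declare this variable with an explicit object type so Terraform checks the structure at plan time. A minimal sketch — the exact type in this repository may differ:

```hcl
# Sketch: declaring the dns_config variable in a consuming Terraform module
variable "dns_config" {
  description = "Centralized DNS configuration: zone, per-service FQDNs, and contact email"
  type = object({
    zone     = string
    services = map(string)
    email    = string
  })
}
```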
Example: Caddyfile Templating

The Caddyfile for the Caddy reverse proxy is generated dynamically from the DNS config:
```hcl
# live/docker/images/caddy/terragrunt.hcl
dependency "dns_config" {
  config_path = "../../../config/dns"
}

inputs = {
  caddyfile_content = templatefile("${get_terragrunt_dir()}/Caddyfile.tpl", {
    email          = dependency.dns_config.outputs.email
    auth_fqdn      = dependency.dns_config.outputs.services.auth
    paperless_fqdn = dependency.dns_config.outputs.services.paperless
    # ... other services
  })
}
```

This ensures all service domains in the reverse proxy configuration stay synchronized with the central DNS config.
A companion OIDC configuration module (`live/config/oidc/`) centralizes the OIDC provider client IDs used for Authentik SSO integration.
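Purely as an illustration of the pattern — the client names and IDs below are invented placeholders, not this repository's values — such a module's inputs could look like:

```hcl
# live/config/oidc/terragrunt.hcl — illustrative sketch only
inputs = {
  oidc_clients = {
    vaultwarden = "vaultwarden" # placeholder client ID
    paperless   = "paperless"
    pulse       = "pulse"
  }
}
```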
The homelab includes redundant AdGuard Home DNS servers for network-wide ad blocking and DNS management:
- Primary DNS Server (`live/adguard/primary/`) - Main AdGuard Home instance
- Secondary DNS Server (`live/adguard/secondary/`) - Redundant backup instance
- Both instances are managed identically via Terraform using the same module (`modules/adguard/`); see the sketch below
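Reusing one module for both instances is the standard Terragrunt pattern. Roughly — the input name is illustrative, while the module path matches the repo layout:

```hcl
# live/adguard/primary/terragrunt.hcl — sketch of reusing modules/adguard
terraform {
  source = "../../../modules/adguard"
}

inputs = {
  instance = "primary" # illustrative; the secondary instance differs only in its inputs
}
```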
Three curated blocklists are automatically configured on both servers:
- AdGuard DNS filter - AdGuard's official filter list
- AdAway Default Blocklist - Community-maintained mobile ad blocking
- Hagezi Pro++ - Comprehensive protection against ads, trackers, and malware
AdGuard uses Quad9 as the upstream DNS provider with multiple secure protocols:
- DNSCrypt (`sdns://...`) - Encrypted DNS with authentication
- DNS-over-HTTPS (`https://dns11.quad9.net/dns-query`)
- DNS-over-TLS (`tls://dns11.quad9.net`)
Bootstrap DNS servers (for resolving secure DNS endpoints):
- `9.9.9.9` and `149.112.112.11` (IPv4)
- `2620:fe::11` and `2620:fe::fe:11` (IPv6)
- Reverse DNS (PTR): Configured to use the local router (`192.168.1.1:53`) for reverse lookups
- Wildcard DNS Rewrite: All `*.denyssizomin.com` domains automatically resolve to the reverse proxy IP (see the sketch below)
- Centralized Configuration: Domain zones and service FQDNs pulled from the centralized DNS config module
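A wildcard rewrite of this kind can be expressed with the gmichels/adguard provider along these lines — treat the resource and attribute names as assumptions to verify against the provider docs, and the IP as a placeholder:

```hcl
# Sketch: wildcard DNS rewrite pointing *.denyssizomin.com at the reverse proxy
resource "adguard_rewrite" "wildcard_proxy" {
  domain = "*.denyssizomin.com"
  answer = "192.168.1.10" # placeholder reverse proxy IP
}
```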
This setup ensures:
- Network-wide ad and tracker blocking
- Encrypted DNS queries to upstream providers
- High availability with redundant DNS servers
- Automatic service discovery via wildcard DNS rewriting
The homelab includes Vaultwarden, a lightweight alternative implementation of the Bitwarden server, configured for SSO-only authentication:
Vaultwarden is configured to operate exclusively in SSO mode with Authentik OIDC integration (see the sketch after this list):
- SSO-Only Authentication: Users can only log in through Authentik SSO, eliminating traditional username/password authentication
- OIDC Integration: Seamlessly integrated with Authentik identity provider using OpenID Connect protocol
- Enhanced Security: Centralized authentication through Authentik provides:
  - Single sign-on across all homelab services
  - Multi-factor authentication (MFA) enforcement
  - Centralized user management and access control
  - Session management and security policies
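In Vaultwarden's experimental SSO builds, this behavior is driven by environment variables. A sketch of the relevant settings, expressed here as Terraform locals — the variable names follow Vaultwarden's SSO feature but should be checked against the image's documentation, and the URL and client ID are placeholders:

```hcl
# Sketch: SSO-only Vaultwarden environment, to be passed into the stack definition
locals {
  vaultwarden_env = {
    SSO_ENABLED   = "true"
    SSO_ONLY      = "true" # disable local username/password login
    SSO_AUTHORITY = "https://auth.denyssizomin.com/application/o/vaultwarden/" # placeholder issuer URL
    SSO_CLIENT_ID = "vaultwarden" # placeholder client ID
    SSO_SCOPES    = "openid email profile offline_access"
    # The OIDC client secret is injected as a Docker secret, not hardcoded here
  }
}
```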
The Vaultwarden deployment includes:
- Docker Image: `vaultwarden/server:testing-alpine` - lightweight Alpine Linux-based container
- OIDC Scopes: `openid email profile offline_access` for complete user profile access
- Secret Management: OIDC client secret stored as a Docker secret and rotated via Terraform lifecycle management
- Network Integration: Connected to the reverse proxy network for automatic TLS termination via Caddy
- Data Persistence: Vault data stored in `/srv/data/vaultwarden` for backup and recovery
Security benefits:
- No Password Database: SSO-only mode means no local password database for authentication
- Centralized Access Control: All access decisions managed through Authentik
- Encrypted Secrets: OIDC client secrets encrypted and managed via Terraform
- TLS Termination: All traffic encrypted via Caddy reverse proxy with automatic certificate renewal
- Domain: Accessible at `vault.denyssizomin.com` with automatic DNS resolution
This configuration provides a secure, enterprise-grade password management solution with minimal operational overhead and maximum security through centralized identity management.
The homelab includes an automated backup system using Autorestic, a wrapper around restic that provides declarative backup configuration and scheduling:
- Backup Tool: Autorestic - Declarative backup configuration wrapper for restic
- Storage Backend: Cloudflare R2 (S3-compatible object storage)
- Scheduling: Integrated with swarm-cronjob for automated hourly backups
- Applications Backed Up:
  - Paperless-ngx: Document exports triggered before backup (hourly via cron)
  - Vaultwarden: Vault data backups triggered before backup (hourly via cron)
Retention Policy (applied globally to all backups; see the sketch after this list):
- Keep Last: 5 snapshots (always maintain at least 5 most recent backups)
- Hourly: 3 snapshots (last 3 hourly backups)
- Daily: 4 snapshots (last 4 daily backups)
- Weekly: 1 snapshot (last weekly backup)
- Monthly: 12 snapshots (last 12 monthly backups)
- Yearly: 7 snapshots (last 7 yearly backups)
- Keep Within: 14 days (all snapshots from last 14 days)
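These values map directly onto restic's `forget` flags (`--keep-last`, `--keep-hourly`, and so on), which Autorestic applies per location. A sketch of passing the policy to the backup module as Terraform inputs — the `retention` variable name is illustrative, not necessarily what the module uses:

```hcl
# Sketch: retention policy as module inputs, mirroring restic forget flags
inputs = {
  retention = {
    keep_last    = 5     # --keep-last 5
    keep_hourly  = 3     # --keep-hourly 3
    keep_daily   = 4     # --keep-daily 4
    keep_weekly  = 1     # --keep-weekly 1
    keep_monthly = 12    # --keep-monthly 12
    keep_yearly  = 7     # --keep-yearly 7
    keep_within  = "14d" # --keep-within 14d
  }
}
```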
Backup Locations:
- `/srv/data/paperless` - Paperless document exports
- `/srv/data/vaultwarden` - Vaultwarden vault data
Backup Flow:
- Pre-backup Jobs (every hour at minute 0):
  - Paperless: Triggers document export via the `document_exporter` command
  - Vaultwarden: Triggers vault backup via the `/vaultwarden backup` command
- Upload Job (every hour at minute 30):
  - Autorestic reads configuration and encrypted credentials
  - Backs up all locations to Cloudflare R2
  - Applies the retention policy and prunes old snapshots
Backup security measures:
- Encrypted Credentials: R2 access credentials stored as Docker secrets, encrypted with SOPS
- Read-Only Mounts: Backup directories mounted read-only in the upload container
- Network Isolation: Backup jobs run in isolated cronjob network
- Immutable Storage: R2 provides versioned, immutable object storage
This automated backup system ensures critical homelab data is regularly backed up to cloud storage with a comprehensive retention policy, all managed declaratively through Terraform.
The homelab uses a centralized cron job management system for scheduled tasks in Docker Swarm:
- Image: `crazymax/swarm-cronjob`
- Purpose: Enables cron-like scheduled job execution in Docker Swarm mode
- Deployment: Runs on a manager node with access to the Docker socket
- Network: Isolated `cronjob_network` for all scheduled jobs
The swarm-cronjob scheduler monitors Docker services carrying special labels and creates one-time tasks from them on a cron schedule:
```yaml
deploy:
  replicas: 0 # Service stays dormant
  labels:
    - "swarm.cronjob.enable=true"
    - "swarm.cronjob.schedule=0 */1 * * *" # Cron format
    - "swarm.cronjob.skip-running=true" # Skip if previous run still active
```

Current cron jobs managed by swarm-cronjob:
- Paperless Document Export (hourly at minute 0)
  - Executes document exporter inside the Paperless container
  - Prepares data for backup
- Vaultwarden Backup (hourly at minute 0)
  - Triggers vault backup inside the Vaultwarden container
  - Prepares vault data for backup
- Autorestic Upload (hourly at minute 30)
  - Uploads all backup data to Cloudflare R2
  - Applies retention policies
Key benefits:
- Declarative Scheduling: Cron schedules defined as Docker labels in Compose files
- Swarm-Native: Works with Docker Swarm's orchestration and placement constraints
- Skip Logic: Prevents overlapping job executions with the `skip-running` flag
- Zero Replicas: Jobs don't consume resources until executed
- Centralized Management: All scheduled tasks visible and manageable through Portainer
This approach provides reliable, container-native scheduled task execution without requiring external cron daemons or additional infrastructure.
The homelab includes Pulse for system monitoring with integrated notification capabilities via Apprise:
- Image: `rcourtman/pulse:latest` - Modern system monitoring dashboard
- Authentication: Integrated with Authentik SSO via OIDC
- Features:
  - Real-time system metrics (CPU, memory, disk, network)
  - Service health monitoring
  - Alert configuration and management
  - Historical data visualization
- Domain: Accessible at `pulse.denyssizomin.com`
Pulse is integrated with the Apprise API for flexible notification delivery (see the configuration sketch below):
- Image: `lscr.io/linuxserver/apprise-api:latest`
- Purpose: Universal notification gateway supporting 90+ services
- Network Architecture:
  - Internal network (`apprise_network`) connecting Pulse to Apprise
  - Separate proxy network for external access if needed
- Supported Notification Channels:
  - Email (SMTP, SendGrid, Mailgun, etc.)
  - Messaging (Slack, Discord, Telegram, Matrix, etc.)
  - Push Notifications (Pushover, Pushbullet, Gotify, etc.)
  - SMS (Twilio, AWS SNS, etc.)
  - And 80+ other services
Configuration details:
- Apprise Config: Stored in a persistent volume (`/config`)
- Plugins: Custom notification plugins can be added via the `/plugins` volume
- Attachments: Support for sending file attachments via the `/attachments` volume
- Timezone: Europe/Amsterdam (consistent with other services)
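Apprise targets are plain URLs, one per channel. As a sketch of how they might be handed to the stack as Terraform inputs — the variable name is illustrative, and the URL schemes follow Apprise's documented formats with placeholder credentials:

```hcl
# Sketch: Apprise notification targets supplied as module inputs
inputs = {
  apprise_urls = [
    "mailto://user:[email protected]",     # placeholder SMTP target
    "discord://webhook_id/webhook_token", # placeholder Discord webhook
  ]
}
```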
Typical alert scenarios:
- System Alerts: CPU/memory/disk threshold warnings
- Service Down Alerts: Notification when monitored services become unavailable
- Backup Notifications: Alert on backup success/failure (future integration)
- Security Events: Authentication failures, suspicious activity alerts
This monitoring setup provides comprehensive visibility into homelab health with flexible, multi-channel alerting capabilities, all accessible through a modern SSO-protected web interface.
The homelab uses Grafana Alloy as the unified observability collector, providing comprehensive metrics and log collection with Grafana Cloud integration:
- Collector: Grafana Alloy - OpenTelemetry-compatible observability collector
- Configuration Management: Grafana Fleet Management for remote pipeline configuration
- Metrics Backend: Grafana Cloud Prometheus
- Logs Backend: Grafana Cloud Loki
- Deployment: Docker Swarm service with Docker socket access for container discovery
Alloy uses Grafana Fleet Management for centralized, remote configuration of collection pipelines:
- Self Monitoring (`self.alloy`)
  - Exports Alloy's own health metrics (CPU, memory, component status)
  - Collects Alloy container logs
  - Job label: `integrations/alloy` (compatible with the Grafana Cloud Alloy Health dashboard)
- Docker Swarm Logging (`logging_docker.alloy`)
  - Discovers all Docker Swarm services automatically
  - Collects container logs from all services (except Alloy itself, to avoid duplication)
  - Drops logs older than 1 minute to prevent backlog issues
  - Labels logs with service name and hostname
- Traefik Metrics (`traefik_prom.alloy`)
  - Discovers the Traefik reverse proxy service
  - Scrapes Prometheus metrics from Traefik's metrics endpoint
  - Relabels HTTP status codes for grouping (e.g., `200` → `2**`)
  - Provides request/response metrics, latency, and error rates
- Base Config (`config.alloy`)
  - Live debugging enabled for real-time troubleshooting
  - Remote configuration from Grafana Fleet Management
  - Platform attribute: `docker.swarm` for pipeline matching
- Pipeline Matchers:
  - `self.alloy`: Matches all collectors (`collector.os=~".*"`)
  - `logging_docker.alloy`: Matches Docker platforms (`platform=~"^docker.*"`)
  - `traefik_prom.alloy`: Matches Docker platforms (`platform=~"^docker.*"`)
Networking and access:
- Service Network: Access to internal services for metrics scraping
- Proxy Network: Traefik integration for web UI access at `alloy.denyssizomin.com`
- Docker Socket: Read-only access for container discovery
Security measures:
- Grafana API Key: Stored as a Docker secret, encrypted with SOPS
- Read-Only Docker Socket: Container discovery without modification permissions
- Terraform Lifecycle Management: Secrets rotated on configuration changes
| Type | Source | Destination |
|---|---|---|
| Metrics | Alloy health | Grafana Cloud Prometheus |
| Metrics | Traefik proxy | Grafana Cloud Prometheus |
| Logs | All Docker Swarm services | Grafana Cloud Loki |
| Logs | Alloy container | Grafana Cloud Loki |
This observability stack provides centralized monitoring and logging for the entire homelab infrastructure, with Grafana Cloud handling storage, visualization, and alerting.
```
.
├── live/                          # Live environment configurations
│   ├── root.hcl                   # Root Terragrunt config with S3 backend
│   ├── config/                    # Configuration management
│   │   ├── dns/                   # Centralized DNS/FQDN configuration
│   │   └── oidc/                  # OIDC provider configurations
│   ├── adguard/                   # AdGuard DNS servers
│   │   ├── primary/               # Primary AdGuard Home instance
│   │   └── secondary/             # Secondary AdGuard Home instance
│   ├── infra/                     # Infrastructure layer
│   │   ├── providers.hcl          # Proxmox provider configuration
│   │   └── vms/                   # Virtual machine definitions
│   │       ├── docker-apps/       # VM for Docker applications
│   │       ├── homeassistant/     # Home Assistant VM
│   │       └── workbench/         # Development workbench VM
│   ├── docker/                    # Docker infrastructure
│   │   ├── networks/              # Docker networks
│   │   │   └── proxy/             # Reverse proxy network
│   │   ├── images/                # Custom Docker images
│   │   │   └── caddy/             # Custom Caddy image
│   │   └── services/              # Docker services
│   │       └── portainer/         # Portainer service
│   └── portainer/                 # Application deployments
│       ├── providers.hcl          # Portainer provider configuration
│       ├── admin/                 # Portainer admin settings
│       ├── settings/              # Portainer settings
│       ├── authentik/             # SSO & Identity Provider
│       ├── autorestic/            # Automated backup system
│       ├── caddy/                 # Reverse proxy & TLS termination
│       ├── cronjob/               # Cron job management (swarm-cronjob)
│       ├── ddns/                  # Dynamic DNS updater
│       ├── miniserve/             # Simple file server
│       ├── opengist/              # Code snippet sharing
│       ├── paperless/             # Document management system
│       ├── pulse/                 # System monitoring with Apprise
│       └── vaultwarden/           # Password manager
├── modules/                       # Reusable Terraform modules
│   ├── authentik/                 # Authentik configuration modules
│   │   └── oidc_provider/         # OIDC provider module
│   ├── config/                    # Configuration modules
│   │   ├── dns/                   # DNS configuration module
│   │   └── oidc/                  # OIDC configuration
│   ├── docker/                    # Docker resource modules
│   ├── infra/                     # Infrastructure modules
│   │   ├── cloud-init/            # Cloud-init configuration
│   │   └── vms/                   # VM templates
│   │       ├── debian-vm/         # Debian VM module
│   │       └── hass-vm/           # Home Assistant VM module
│   └── portainer/                 # Application stack modules
│       ├── admin/                 # Admin configuration
│       ├── authentik/             # Authentik stack
│       ├── autorestic/            # Automated backup module
│       ├── caddy/                 # Caddy reverse proxy
│       ├── cronjob/               # Cron job management module
│       ├── ddns/                  # DDNS client
│       ├── miniserve/             # File server
│       ├── opengist/              # Gist platform
│       ├── paperless/             # Document management
│       ├── pulse/                 # Monitoring with Apprise
│       ├── settings/              # Portainer settings
│       └── vaultwarden/           # Password manager module
├── .sops.yaml                     # SOPS encryption configuration
└── sops.env                       # Encrypted environment variables
```
Ensure you have the following tools installed:
- Terraform (>= 1.6)
- Terragrunt (>= 0.50)
- SOPS (for secrets management)
- age (for SOPS encryption)
- SSH key pair for Proxmox access (`~/.ssh/homelab`)
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd homelab-iac
   ```

2. Configure secrets - create/update `sops.env` with the required credentials:

   ```bash
   # Decrypt (if exists)
   sops sops.env

   # Add required variables:
   # - PROXMOX_ENDPOINT
   # - PROXMOX_API_TOKEN
   # - PORTAINER_API_KEY
   # - S3 backend credentials (if using)
   ```

3. Source environment variables:

   ```bash
   source <(sops -d sops.env)
   ```

4. Initialize the infrastructure:

   ```bash
   # From the project root
   cd live/infra/vms/docker-apps
   terragrunt init
   ```
Deploy a specific module:

```bash
cd live/portainer/authentik
terragrunt apply
```

Deploy all modules in a directory:

```bash
cd live/portainer
terragrunt run-all apply
```

Plan changes before applying:

```bash
terragrunt plan
```

Destroy resources:

```bash
terragrunt destroy
```

Encrypt a new file:

```bash
sops -e file.yaml > encrypted.yaml
```

Edit an encrypted file:

```bash
sops file.yaml
```

Decrypt and view:

```bash
sops -d file.yaml
```

The infrastructure follows this deployment order:
1. Proxmox VMs (`live/infra/vms/*`)
2. Docker Infrastructure (`live/docker/*`)
3. Portainer Service (`live/docker/services/portainer`)
4. Application Stacks (`live/portainer/*`)
Terragrunt automatically handles dependencies between modules using dependency blocks.
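For example, a stack that needs the shared proxy network can declare a dependency on it. A generic sketch — the exact paths and output names in this repo may differ:

```hcl
# Sketch: a Terragrunt dependency on the proxy network module
dependency "proxy_network" {
  config_path = "../../docker/networks/proxy"
}

inputs = {
  network_name = dependency.proxy_network.outputs.network_name # assumed output name
}
```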
| Application | Description | Module Path |
|---|---|---|
| AdGuard Home (Primary) | Network-wide DNS & Ad Blocking | adguard/primary |
| AdGuard Home (Secondary) | Redundant DNS Server | adguard/secondary |
| Authentik | Identity Provider & SSO | portainer/authentik |
| Caddy | Reverse Proxy & TLS | portainer/caddy |
| Vaultwarden | Password Manager (SSO-only) | portainer/vaultwarden |
| Paperless-ngx | Document Management | portainer/paperless |
| OpenGist | Code Snippet Sharing | portainer/opengist |
| Miniserve | Simple File Server | portainer/miniserve |
| Pulse | System Monitoring | portainer/pulse |
| Grafana Alloy | Observability Collector | portainer/alloy |
| DDNS | Dynamic DNS Client | portainer/ddns |
| Home Assistant | Home Automation | infra/vms/homeassistant |
Security practices:
- Secrets Management: All sensitive data is encrypted using SOPS with age encryption
- API Keys: Stored in encrypted `sops.env` and passed via environment variables
- SSH Keys: Used for Proxmox authentication (`~/.ssh/homelab`)
- State Backend: Terraform state is stored remotely in S3-compatible storage
- TLS: Caddy handles automatic certificate provisioning and renewal
Your age public key is configured in `.sops.yaml`. Keep your private age key secure:

```bash
# Default location
~/.config/sops/age/keys.txt
```

This setup is designed for GitOps-style deployments:
- Make changes to configuration files
- Commit and push to version control
- Run `terragrunt apply` in the relevant directory
- Changes are automatically propagated to the infrastructure
This project is provided as-is for educational and personal use.