# The 512GB Architect
I have a 7-year-old laptop. 512GB of storage. Not enough cores to split into VMs without everything grinding to a halt. And yet, I wanted to run multiple web applications, a reverse proxy, an authentication server, a monitoring stack, and a database — all on this one machine. The solution? Containers. Lots of containers.
This is the first post in a series where I walk through the infrastructure I built from scratch. We’re starting at the foundation: Docker.
## VMs Are Not the Answer (For This)
Before the "just use VMs" crowd chimes in, let me explain. I've worked with both. As a systems engineer, I spent a lot of time with virtual machines — Active Directory, Exchange servers, the works. VMs are great when you need full OS-level isolation, when you're running things that expect to be the only tenant on their hardware.
But for my use case? I’m running small applications with short lifecycles. Things that get rebuilt and redeployed frequently. I don’t need to boot an entire operating system for a Node.js app that serves a web page. What I need is process-level isolation, shared resources, and the ability to spin things up and tear them down fast.
| VMs | Containers |
|---|---|
| Full OS per instance | Shared OS kernel |
| Higher resource overhead | Lightweight — just the app and its deps |
| Boot times in minutes | Boot times in seconds |
| OS-level isolation | Process-level isolation |
| Scale vertically | Scale horizontally |
Containers win here. Not universally — but for what I’m building, it’s not even close.
## The Master Plan
Here’s what I wanted to set up:
```
Internet → Traefik (reverse proxy, ports 80/443)
            ├── App 1 (frontend)
            ├── App 2 (another frontend)
            └── Grafana (dashboards)
                  ↕
Backend API → PostgreSQL
Prometheus → scrapes everything
Loki → collects logs
```
Two Docker networks keep things separated:
- Frontend network: anything Traefik exposes to the internet (web apps, dashboards)
- Backend network: databases, monitoring, log aggregation — nothing publicly accessible
This separation means my database container can’t be reached from the internet even if someone compromises a frontend service. Is it bulletproof? No. Is it better than putting everything on the same network? Absolutely.
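A minimal sketch of how this two-network split might look in a compose file. The service and network names are my assumptions, not the actual config; the key detail is `internal: true`, which tells Docker not to route the backend network to the outside world at all:

```yaml
# Hypothetical docker-compose.yml fragment illustrating the split
networks:
  frontend:
    name: frontend
  backend:
    name: backend
    internal: true   # no routing beyond the Docker host

services:
  api:
    image: my-backend-api:latest   # placeholder image name
    networks:
      - frontend   # reachable via Traefik
      - backend    # can talk to the database
  db:
    image: postgres:16
    networks:
      - backend    # never attached to the internet-facing network
```

Even if the `api` container is compromised, `db` is only reachable from containers that share the backend network.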
## The Infrastructure Stack
Before deploying any actual applications, I needed the plumbing. Here’s what runs the show:
| Service | What It Does | Why I Need It |
|---|---|---|
| Traefik | Reverse proxy, load balancer, auto-HTTPS | Routes traffic by domain, handles TLS certificates |
| Authentik | SSO with 2FA | Secure login for all infrastructure services |
| Portainer | Docker management UI | Visual overview of containers, logs, resources |
| CrowdSec | Rate limiting, IP bans | Security monitoring (still a work in progress) |
| Cloudflared | Cloudflare tunnel | Exposes services without opening ports on my router |
The beautiful thing about Traefik is that you never touch its config after the initial setup. New service? Add Docker labels to your compose file. Traefik picks it up automatically:
```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp-sec.entrypoints=https"
  - "traefik.http.routers.myapp-sec.rule=Host(`myapp.example.com`)"
  - "traefik.http.routers.myapp-sec.tls=true"
  - "traefik.http.services.myapp.loadbalancer.server.port=3000"
  - "traefik.docker.network=frontend"
```
That’s it. No nginx config files to edit, no restart required. Traefik watches the Docker socket and updates its routing automatically. After coming from the world of editing nginx configs and running `nginx -s reload` for every change, this felt like magic.
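For label-based discovery to work, the Traefik container itself needs read access to the Docker socket. A sketch of what that service definition might look like — the flags shown are standard Traefik v2 options, but treat the exact version and entrypoint names as assumptions:

```yaml
services:
  traefik:
    image: traefik:v2.11   # version is an assumption
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"  # only route containers that opt in via labels
      - "--entrypoints.https.address=:443"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # read-only socket access for discovery
    networks:
      - frontend
```

Setting `exposedbydefault=false` is worth the extra label per service: nothing gets routed to the internet unless it explicitly asks to be.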
## The Authentication Layer
Every infrastructure service (Portainer, Traefik dashboard, Grafana) sits behind Authentik. Instead of managing separate logins for each service, I have one SSO provider with 2FA enabled. Log in once, access everything. It’s the same concept as “Sign in with Google” but self-hosted and under my control.
Authentik itself runs as four containers — PostgreSQL for persistence, Redis for caching, the server for the API/UI, and a worker for async tasks. It sits on both the frontend and backend networks since it needs to be accessible via Traefik but also needs access to its own database.
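A rough sketch of that four-container layout in compose form. This is illustrative only — environment variables, secrets, and volumes are omitted, and the service names are my own; the point is which network each piece sits on:

```yaml
# Hypothetical Authentik stack fragment (env vars and volumes omitted)
services:
  authentik-db:
    image: postgres:16
    networks: [backend]
  authentik-redis:
    image: redis:7-alpine
    networks: [backend]
  authentik-server:
    image: ghcr.io/goauthentik/server:latest
    command: server
    networks: [frontend, backend]  # Traefik-facing UI/API, plus access to its own DB
  authentik-worker:
    image: ghcr.io/goauthentik/server:latest
    command: worker
    networks: [backend]            # async tasks only; never exposed
```

Only the server container straddles both networks; everything else stays on the backend side.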
## Extremely Overkill? Yes.
Do I need a reverse proxy, SSO, monitoring, and log aggregation for a few personal projects on an old laptop? No. Absolutely not. This is the definition of over-engineering for a homelab.
But here’s the thing — at work, we didn’t have any of this set up either. No reverse proxy, no monitoring stack, no centralized authentication. We needed to learn how to configure and build it ourselves. This homelab became my sandbox for figuring things out before bringing them to the team. Every mistake I made here (and there were many) was one I didn’t have to make in production. The laptop is cheap to break. Production is not.
And honestly? It’s fun. The laptop hasn’t caught fire yet (though it does get warm). The next post will cover how I actually build and deploy applications to this setup — Dockerfiles, multi-stage builds, and a CI/CD pipeline that runs on the same laptop. Because why not.
