
Containerizing Everything

·787 words·4 mins
Blog Docker Infrastructure
Infrastructure From Scratch - This article is part of a series.
Part 1: This Article

The 512GB Architect

I have a 7-year-old laptop. 512GB of storage. Not enough cores to split into VMs without everything grinding to a halt. And yet, I wanted to run multiple web applications, a reverse proxy, an authentication server, a monitoring stack, and a database — all on this one machine. The solution? Containers. Lots of containers.

This is the first post in a series where I walk through the infrastructure I built from scratch. We’re starting at the foundation: Docker.

VMs Are Not the Answer (For This)

Before I hear from the “just use VMs” crowd, let me explain. I’ve worked with both. As a systems engineer, I spent a lot of time with virtual machines — Active Directory, Exchange servers, the works. VMs are great when you need full OS-level isolation, when you’re running things that expect to be the only tenant on their hardware.

But for my use case? I’m running small applications with short lifecycles. Things that get rebuilt and redeployed frequently. I don’t need to boot an entire operating system for a Node.js app that serves a web page. What I need is process-level isolation, shared resources, and the ability to spin things up and tear them down fast.

| VMs | Containers |
| --- | --- |
| Full OS per instance | Shared OS kernel |
| Higher resource overhead | Lightweight — just the app and its deps |
| Boot times in minutes | Boot times in seconds |
| OS-level isolation | Process-level isolation |
| Scale vertically | Scale horizontally |

Containers win here. Not universally — but for what I’m building, it’s not even close.

The Master Plan

Here’s what I wanted to set up:

Internet → Traefik (reverse proxy, ports 80/443)
              ├── App 1 (frontend)
              ├── App 2 (another frontend)
              └── Grafana (dashboards)
                        ↕
              Backend API → PostgreSQL
              Prometheus → scrapes everything
              Loki → collects logs

Two Docker networks keep things separated:

  • Frontend network: anything Traefik exposes to the internet (web apps, dashboards)
  • Backend network: databases, monitoring, log aggregation — nothing publicly accessible

This separation means my database container can’t be reached from the internet even if someone compromises a frontend service. Is it bulletproof? No. Is it better than putting everything on the same network? Absolutely.
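In compose terms, the layout looks roughly like this. This is a sketch, not my actual file — the service names and images are placeholders — but it shows the key move: marking the backend network `internal` and never attaching the database to the frontend network:

```yaml
networks:
  frontend:           # Traefik-facing; services here can be exposed to the internet
  backend:
    internal: true    # no route out; containers here are unreachable externally

services:
  webapp:
    image: myapp:latest
    networks: [frontend, backend]   # talks to Traefik and to the database
  db:
    image: postgres:16
    networks: [backend]             # never attached to the frontend network
```

Even if `webapp` is compromised, the attacker is inside a container that can reach the database, but the database itself has no path to or from the internet.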

The Infrastructure Stack

Before deploying any actual applications, I needed the plumbing. Here’s what runs the show:

| Service | What It Does | Why I Need It |
| --- | --- | --- |
| Traefik | Reverse proxy, load balancer, auto-HTTPS | Routes traffic by domain, handles TLS certificates |
| Authentik | SSO with 2FA | Secure login for all infrastructure services |
| Portainer | Docker management UI | Visual overview of containers, logs, resources |
| CrowdSec | Rate limiting, IP bans | Security monitoring (still a work in progress) |
| Cloudflared | Cloudflare tunnel | Exposes services without opening ports on my router |
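The Traefik service itself is the only one that publishes ports. A minimal definition looks something like this (image tag and paths are illustrative, not necessarily what I run):

```yaml
services:
  traefik:
    image: traefik:v3.0
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # lets Traefik discover containers
      - ./traefik.yml:/etc/traefik/traefik.yml:ro     # static config, written once
    networks:
      - frontend
```

Note the read-only mount of the Docker socket — that's how Traefik sees containers come and go without any config edits.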

The beautiful thing about Traefik is that you never touch its config after the initial setup. New service? Add Docker labels to your compose file. Traefik picks it up automatically:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp-sec.entrypoints=https"
  - "traefik.http.routers.myapp-sec.rule=Host(`myapp.example.com`)"
  - "traefik.http.routers.myapp-sec.tls=true"
  - "traefik.http.services.myapp.loadbalancer.server.port=3000"
  - "traefik.docker.network=frontend"
```

That’s it. No nginx config files to edit, no restart required. Traefik watches the Docker socket and updates its routing automatically. After coming from the world of editing nginx configs and running nginx -s reload for every change, this felt like magic.
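The one-time static config is correspondingly small. A sketch of a `traefik.yml` that matches the labels above (entrypoint names and network are assumptions based on my label examples):

```yaml
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
providers:
  docker:
    exposedByDefault: false   # only route containers that opt in with traefik.enable=true
    network: frontend         # default network Traefik uses to reach services
```

Setting `exposedByDefault: false` is the important line — without it, every container on the host becomes routable, opt-in label or not.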

The Authentication Layer

Every infrastructure service (Portainer, Traefik dashboard, Grafana) sits behind Authentik. Instead of managing separate logins for each service, I have one SSO provider with 2FA enabled. Log in once, access everything. It’s the same concept as “Sign in with Google” but self-hosted and under my control.
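The usual way to put a Traefik-routed service behind Authentik is the `forwardAuth` middleware pointed at Authentik's embedded outpost. The address below assumes a server container named `authentik-server` and the default outpost path — treat it as a sketch, not my exact labels:

```yaml
labels:
  - "traefik.http.middlewares.authentik.forwardauth.address=http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
  - "traefik.http.middlewares.authentik.forwardauth.trustForwardHeader=true"
  # then attach the middleware to any router that should require SSO:
  - "traefik.http.routers.portainer-sec.middlewares=authentik"
```

Traefik asks Authentik before forwarding each request; unauthenticated users get bounced to the login flow, authenticated ones pass straight through.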

Authentik itself runs as four containers — PostgreSQL for persistence, Redis for caching, the server for the API/UI, and a worker for async tasks. It sits on both the frontend and backend networks since it needs to be accessible via Traefik but also needs access to its own database.
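Those four containers, sketched in compose form (image tags, environment, and secrets omitted for brevity):

```yaml
services:
  authentik-db:
    image: postgres:16
    networks: [backend]
  authentik-redis:
    image: redis:alpine
    networks: [backend]
  authentik-server:
    image: ghcr.io/goauthentik/server
    command: server
    networks: [frontend, backend]   # reachable via Traefik, talks to its own DB
  authentik-worker:
    image: ghcr.io/goauthentik/server
    command: worker
    networks: [backend]
```

Only the server container straddles both networks; the database, cache, and worker stay on the backend side.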

Extremely Overkill? Yes.

Do I need a reverse proxy, SSO, monitoring, and log aggregation for a few personal projects on an old laptop? No. Absolutely not. This is the definition of over-engineering for a homelab.

But here’s the thing — at work, we didn’t have any of this set up either. No reverse proxy, no monitoring stack, no centralized authentication. We needed to learn how to configure and build it ourselves. This homelab became my sandbox for figuring things out before bringing them to the team. Every mistake I made here (and there were many) was one I didn’t have to make in production. The laptop is cheap to break. Production is not.

And honestly? It’s fun. The laptop hasn’t caught fire yet (though it does get warm). The next post will cover how I actually build and deploy applications to this setup — Dockerfiles, multi-stage builds, and a CI/CD pipeline that runs on the same laptop. Because why not.

Aaron Yong
Author