The model
When you load RootResume in your browser, the server creates and starts a fresh Alpine Linux container (docker create followed by docker start). That container persists for the duration of your session (up to 60 minutes) and is then auto-removed. No state is shared between sessions; every visitor gets a clean slate.
Resource cost
An idle Alpine container with GCC and Python installed consumes about 4-6MB of RAM and essentially zero CPU, because Alpine Linux is designed to be minimal: the base image is about 6MB on disk. At 100 concurrent visitors, that's roughly 400-600MB of RAM just for container overhead. GCC compilations and Python executions spike the CPU briefly (under a second each), but since they're short-lived and containerized, contention is minimal at portfolio traffic levels.
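The capacity math above is simple enough to check in a few lines. The per-container figures are the estimates quoted in the text, not measurements:

```python
# Back-of-envelope RAM overhead using the idle-footprint estimates above.
IDLE_RAM_MB_LOW, IDLE_RAM_MB_HIGH = 4, 6  # per-container idle RSS (estimate)
CONCURRENT = 100                          # visitors, each with one container

low = IDLE_RAM_MB_LOW * CONCURRENT
high = IDLE_RAM_MB_HIGH * CONCURRENT
print(f"container overhead at {CONCURRENT} visitors: {low}-{high} MB")
```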
Security posture
The container runs with:
- No network interface — can't make outbound connections, can't scan your LAN
- Read-only rootfs except /tmp (capped at 50MB)
- Non-root user inside the container
- Memory cap: 128MB — can't exhaust the host
- CPU shares throttled so one session can't starve others
- No privileged mode, no extra capabilities
This is defense-in-depth: any single escape attempt has to get through multiple layers.
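Taken together, those restrictions map onto standard docker run flags. A sketch of how the argument list might be assembled; the image name, uid, and CPU weight are illustrative placeholders, and the exact limits should be tuned to the host:

```python
# Build a `docker run` argument list enforcing the restrictions above.
# "rootresume-sandbox", uid 1000, and cpu-shares 512 are placeholders.
def sandbox_run_args(session_id: str) -> list[str]:
    return [
        "docker", "run", "--detach",
        "--name", f"session-{session_id}",
        "--network", "none",            # no interface: no egress, no LAN scans
        "--read-only",                  # immutable rootfs...
        "--tmpfs", "/tmp:rw,size=50m",  # ...except a 50MB writable /tmp
        "--user", "1000:1000",          # non-root user inside the container
        "--memory", "128m",             # hard RAM cap, can't exhaust the host
        "--cpu-shares", "512",          # relative CPU weight under contention
        "--cap-drop", "ALL",            # no extra capabilities
        "--security-opt", "no-new-privileges",
        "rootresume-sandbox",           # hypothetical image name
        "sleep", "3600",                # keep the container alive for a session
    ]
```

Passing this list to subprocess.run keeps shell quoting out of the picture entirely.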
Container lifecycle management
Every container gets a UUID session ID. The backend stores a map of sessionId → containerId. A background job runs every 5 minutes, calls docker ps to find containers older than 60 minutes, and removes them. If a visitor closes the tab without ending their session, the container is orphaned and cleaned up on the next sweep.
Scaling limits
At portfolio traffic (~50-200 concurrent visitors), this is fine. At production SaaS scale you'd want a container pool (pre-warm N containers and hand them to users on demand), a container orchestrator (Kubernetes with resource quotas), and a distributed session store. But for a portfolio? The current architecture is intentionally over-engineered for learning and intentionally simple for maintenance.
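The pre-warm pool mentioned above is conceptually just a queue of ready containers that gets refilled in the background. A minimal single-threaded sketch, where the create callable stands in for a real docker create/start:

```python
from collections import deque

class ContainerPool:
    """Keep `target` warm containers ready; hand one out per session."""
    def __init__(self, target: int, create):
        self.target = target
        self.create = create           # callable provisioning one container
        self.ready: deque[str] = deque()
        self.refill()

    def refill(self) -> None:
        while len(self.ready) < self.target:
            self.ready.append(self.create())

    def acquire(self) -> str:
        if not self.ready:             # pool drained: fall back to on-demand
            return self.create()
        cid = self.ready.popleft()
        self.refill()                  # keep the pool warm (real code: async)
        return cid
```

In production the refill would run off the request path so visitors never wait on image startup, but the queue-plus-target shape is the whole idea.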