Architecture deep-dive
RootResume OS — Technical breakdown

How this portfolio works

Behind the terminal interface there's a real Linux container running in the cloud. Every command you type executes inside an isolated Alpine environment — this page explains how all the pieces fit together.

Architecture overview

The system follows a stateful session model: when you open the page a session is created on the backend, which spins up a Docker container assigned exclusively to your browser tab. All subsequent commands go to that container and nowhere else.

Browser (Next.js + React)
  → API Gateway (Nginx reverse proxy)
  → Node.js API (Express + SSE)
  → Docker daemon (container per session)
  → Alpine Linux (GCC · Python3 · bash)
Isolation
Each session runs in its own container — no shared filesystem or process namespace between users.
Streaming
Visualizations stream frame-by-frame via Server-Sent Events so you see output in real time.
Ephemeral
Containers are destroyed after inactivity. Nothing persists between sessions — by design.

Frontend

Next.js 15 (App Router) · React 18 · TypeScript · Tailwind CSS v4 · Framer Motion · Geist Font

The UI is a custom terminal emulator built entirely in React — no xterm.js or third-party shell libraries. It manages command history, tab completion, streaming output buffers, and a code editor (Monaco-style) for the challenge system, all in a single TSX component.
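Tab completion in such an emulator can be as simple as prefix matching against the known command list. A minimal sketch — `COMMANDS` and `completeCommand` are illustrative names, not the component's real API:

```javascript
// Prefix-based tab completion, assuming a fixed command list.
const COMMANDS = ["help", "ls", "cat", "clear", "viz", "challenge", "verify"];

function completeCommand(input) {
  const matches = COMMANDS.filter((cmd) => cmd.startsWith(input));
  if (matches.length === 1) return matches[0]; // unique match → complete it
  return input;                                // ambiguous or no match → leave as typed
}
```

A real implementation would also complete file paths by asking the container for a directory listing, but the prefix-matching core stays the same.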

Real-time streaming with EventSource

Visualizations (sorting algorithms, maze generation, fractals) use the browser's native EventSource API to open a persistent SSE connection. Each frame arrives as a base64-encoded chunk and overwrites the last history slot in-place, creating smooth in-terminal animations without any WebSocket overhead.

// Simplified streaming handler
const evtSource = new EventSource(`/api/stream?sessionId=${id}&vizId=maze`);

evtSource.onmessage = (event) => {
  const frame = atob(event.data);   // base64 → text frame
  setHistory(prev => {
    const next = [...prev];
    next[next.length - 1] = { text: frame, type: "output" };
    return next;                     // replace last slot → animated update
  });
};

Layout

Desktop renders a CSS Grid split — left panel (presentation) + vertical separator + right panel (terminal + quick-action buttons). On mobile the terminal collapses into a slide-up drawer triggered by a floating action button. Animations across the whole UI use Framer Motion with optimistic render patterns.

Backend

Node.js · Express · SSE · Dockerode / docker exec · Alpine Linux · GCC · Python 3

The Express API exposes a small set of endpoints. The most important ones are /exec for standard commands and /stream for visualization output. It is structured around three managers:

SessionManager
Creates and tracks Docker containers (one per browser session) using docker exec. Handles session TTL, cleanup on inactivity, and command execution with stdout/stderr capture. Each session is a UUID mapped to a live container ID.
VisualizationManager
Holds the source code for all 9 visualizations (5 in C, 4 in Python) as in-memory strings. On request, it base64-encodes the source, writes it into the container filesystem via echo "$b64" | base64 -d > file.c, compiles it with GCC if needed, and returns the runnable command to the streaming endpoint.
ChallengeManager
Scaffolds coding challenges by writing broken source files into the container. The user edits them via the built-in code editor, and verify compiles and runs the fixed code against a test harness — all inside the container.
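The SessionManager's TTL cleanup could look roughly like this. A hedged sketch — `reapIdleSessions` and its parameters are assumed names, not the actual implementation:

```javascript
// Reap sessions whose last activity is older than the TTL.
// destroyContainer is expected to do the docker-side teardown (rm -f equivalent).
function reapIdleSessions(sessions, ttlMs, now, destroyContainer) {
  for (const [sessionId, session] of sessions) {
    if (now - session.lastActive > ttlMs) {
      destroyContainer(session.containerId);
      sessions.delete(sessionId); // deleting during Map iteration is safe in JS
    }
  }
}
```

Run on an interval timer, this is all the "ephemeral by design" guarantee needs: no background state survives an idle session.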

SSE streaming pipeline

For visualizations the server spawns the binary/script inside the container and attaches to its stdout stream. Each chunk is base64-encoded and pushed as an SSE event. The response headers set Content-Type: text/event-stream and disable buffering (X-Accel-Buffering: no) so Nginx forwards data immediately without accumulation.

// Simplified /stream endpoint
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("X-Accel-Buffering", "no");   // tell nginx: don't buffer

const cmd = await vizManager.prepareVisualization(sessionId, vizId);
const proc = await sessionManager.spawnStream(sessionId, cmd);

proc.stdout.on("data", (chunk) => {
  const encoded = Buffer.from(chunk).toString("base64");
  res.write(`data: ${encoded}\n\n`);   // SSE format
});

proc.on("close", () => {
  res.write("event: close\ndata: done\n\n");
  res.end();
});

Container isolation & security

Letting strangers run arbitrary code is risky by definition. The mitigation strategy is isolation by design: instead of sandboxing code inside a shared process, each user gets an actual separate container.

Network
Containers run with no outbound internet access (--network none). They can't reach external services or exfiltrate data.
Resources
CPU and memory are capped per container. A fork-bomb or memory hog affects only its own container and gets killed on TTL expiry.
Filesystem
Each container starts from a clean Alpine image. There is no shared volume — one user's files are invisible to another's.
Ephemeral TTL
Idle containers are reaped automatically. There's no persistent state between page loads — intentional, to keep the host clean.
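The constraints above map directly onto Docker's container-creation options. A sketch of what the per-session config could look like via dockerode's `createContainer` — the specific limits here are illustrative, not the real values:

```javascript
// Assumed per-session container config (dockerode createContainer options).
function buildContainerConfig(image) {
  return {
    Image: image,
    Tty: true,
    HostConfig: {
      NetworkMode: "none",        // --network none: no outbound internet access
      Memory: 128 * 1024 * 1024,  // hard memory cap, in bytes
      NanoCpus: 5e8,              // 0.5 CPU
      PidsLimit: 64,              // blunts a fork bomb before it matters
    },
  };
}

// Usage (requires a live Docker daemon):
// const container = await docker.createContainer(buildContainerConfig("alpine:latest"));
```

Because the limits live in `HostConfig`, the kernel enforces them via cgroups; nothing in the API process has to police user code at runtime.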

Infrastructure & deployment

AWS EC2 · Docker · Nginx · PM2 · Let's Encrypt / HTTPS

Both the Next.js frontend and the Node.js API run on a single AWS EC2 instance behind Nginx. Nginx acts as a reverse proxy, routing /api/* to the Node backend and everything else to the Next.js server. PM2 keeps both processes alive and restarts them on crash.
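The routing split might look like this in the Nginx config. A sketch only — the ports match the text (:3000 Next.js, :4000 API), but the real config is not reproduced here:

```nginx
server {
    listen 443 ssl;

    location /api/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # keep the SSE connection open
        proxy_buffering off;              # stream events immediately, no accumulation
    }

    location / {
        proxy_pass http://127.0.0.1:3000; # everything else → Next.js
    }
}
```

`proxy_buffering off` is the server-side counterpart of the `X-Accel-Buffering: no` header the API sends; either one prevents Nginx from holding SSE frames back.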

EC2 :443 (HTTPS / TLS)
  → Nginx (reverse proxy)
      → :3000 Next.js (PM2)
      → :4000 Express API (PM2) → Docker (container pool)

The Docker daemon runs on the host and the Node.js process has access to the Docker socket. This means the API can create, exec into, and destroy containers without needing a sidecar process — but it also means the API process itself must be trusted (it runs as a dedicated non-root user with socket access, not as root).

Visualization engine

All 9 algorithm visualizations are source code stored as JS strings in visualizationManager.js. Nothing is pre-compiled or cached on disk — every run writes the source fresh, compiles it (for C), and executes it inside the session's container.
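On the server side, base64 is what makes "write the source fresh" safe: the file crosses the shell boundary as a single token, with no quoting or escaping pitfalls. A sketch — `buildWriteCommand` is an assumed helper name:

```javascript
// Build the shell command that writes a source string into the container.
// Base64 round-trips the file byte-for-byte, so quotes/newlines in the
// source can never break the shell command.
function buildWriteCommand(source, filename) {
  const b64 = Buffer.from(source, "utf8").toString("base64");
  return `echo "${b64}" | base64 -d > ${filename}`;
}
```

The returned string is then run via docker exec, followed by the compile step (for C) and the actual execution whose stdout feeds the SSE stream.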

C visualizations (GCC)
  • Bubble Sort — O(n²) compare & swap
  • Selection Sort — min-scan passes
  • Quick Sort — pivot partitioning
  • BFS Pathfinding — queue-based flood fill
  • DFS Pathfinding — recursive backtracking
# Write → Compile → Execute pipeline
echo "$b64" | base64 -d > bubble.c
gcc bubble.c -o bubble_app
./bubble_app       # stdout → SSE frames
Python visualizations
  • Conway's Game of Life — B3/S23 rules
  • Mandelbrot Set — fractal depth chars
  • Monte Carlo π — probabilistic estimation
  • Maze Gen + Solver — DFS build / BFS solve
# Write → Execute (no compilation step)
echo "$b64" | base64 -d > maze_app.py
python3 ./maze_app.py  # stdout → SSE frames

Each program prints full frames to stdout. The terminal client receives them via SSE and replaces the last history slot in-place, creating a smooth animation with no DOM thrashing. Pressing Ctrl+C closes the EventSource on the client and sends a kill signal to the container process.

Open source

The full source code is public on GitHub. Feel free to fork it, run it locally with Docker Compose, or adapt it for your own portfolio.

View on GitHub
Built by Luna Lancuba · 2026