How this portfolio works
Behind the terminal interface there's a real Linux container running in the cloud. Every command you type executes inside an isolated Alpine environment — this page explains how all the pieces fit together.
Architecture overview
The system follows a stateful session model: when you open the page, the backend creates a session and spins up a Docker container assigned exclusively to your browser tab. All subsequent commands go to that container and nowhere else.
Frontend
The UI is a custom terminal emulator built entirely in React — no xterm.js or third-party shell libraries. It manages command history, tab completion, streaming output buffers, and a code editor (Monaco-style) for the challenge system, all in a single TSX component.
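Tab completion in an emulator like this can be as simple as prefix-matching against the known command list. A sketch (the command names are illustrative, not the portfolio's real set):

```javascript
// Minimal prefix-based tab completion, readline-style.
// COMMANDS is illustrative — not the portfolio's actual command set.
const COMMANDS = ["help", "ls", "cat", "viz", "verify", "clear"];

function complete(input) {
  const matches = COMMANDS.filter((c) => c.startsWith(input));
  if (matches.length === 1) return matches[0]; // unambiguous: complete fully
  if (matches.length > 1) {
    // extend to the longest common prefix of all matches
    let prefix = matches[0];
    for (const m of matches) {
      while (!m.startsWith(prefix)) prefix = prefix.slice(0, -1);
    }
    return prefix;
  }
  return input; // no match: leave the input untouched
}
```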
Real-time streaming with EventSource
Visualizations (sorting algorithms, maze generation, fractals) use the browser's native EventSource API to open a persistent SSE connection. Each frame arrives as a base64-encoded chunk and overwrites the last history slot in-place, creating smooth in-terminal animations without any WebSocket overhead.
// Simplified streaming handler
const evtSource = new EventSource(`/api/stream?sessionId=${id}&vizId=maze`);
evtSource.onmessage = (event) => {
  const frame = atob(event.data); // base64 → text frame
  setHistory(prev => {
    const next = [...prev];
    next[next.length - 1] = { text: frame, type: "output" };
    return next; // replace last slot → animated update
  });
};

Layout
Desktop renders a CSS Grid split — left panel (presentation) + vertical separator + right panel (terminal + quick-action buttons). On mobile the terminal collapses into a slide-up drawer triggered by a floating action button. Animations across the whole UI use Framer Motion with optimistic render patterns.
Backend
The Express API exposes a small set of endpoints. The most important ones are /exec for standard commands and /stream for visualization output. It is structured around three managers:
The session manager runs commands through docker exec and handles session TTL, cleanup on inactivity, and command execution with stdout/stderr capture; each session is a UUID mapped to a live container ID. The visualization manager writes program source into the container (echo "$b64" | base64 -d > file.c), compiles it with GCC if needed, and returns the runnable command to the streaming endpoint. The challenge manager's verify command compiles and runs the fixed code against a test harness — all inside the container.

SSE streaming pipeline
For visualizations the server spawns the binary/script inside the container and attaches to its stdout stream. Each chunk is base64-encoded and pushed as an SSE event. The response headers set Content-Type: text/event-stream and disable buffering (X-Accel-Buffering: no) so Nginx forwards data immediately without accumulation.
// Simplified /stream endpoint
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("X-Accel-Buffering", "no"); // tell nginx: don't buffer
const cmd = await vizManager.prepareVisualization(sessionId, vizId);
const proc = await sessionManager.spawnStream(sessionId, cmd);
proc.stdout.on("data", (chunk) => {
  const encoded = Buffer.from(chunk).toString("base64");
  res.write(`data: ${encoded}\n\n`); // SSE format
});
proc.on("close", () => {
  res.write("event: close\ndata: done\n\n");
  res.end();
});

Container isolation & security
Letting strangers run arbitrary code is risky by definition. The mitigation strategy is isolation by design rather than a sandbox within a shared process — actual separate containers.
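Isolation of this kind boils down to the flags passed to docker run. A sketch of such an invocation — only the disabled network is confirmed by the surrounding text; the resource limits are typical hardening assumptions, not the portfolio's exact setup:

```javascript
// Sketch: building the `docker run` argv for an isolated session container.
// --network none is confirmed by the article; the memory/pids/cpu caps
// are assumed hardening, not the real configuration.
function containerRunArgs(image = "alpine") {
  return [
    "run", "-d",
    "--network", "none",   // no network: nothing to reach or exfiltrate
    "--memory", "128m",    // assumed memory cap
    "--pids-limit", "64",  // assumed fork-bomb guard
    "--cpus", "0.5",       // assumed CPU cap
    image, "sleep", "infinity", // keep the container alive for the session
  ];
}
```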
Each session's container runs with networking disabled (--network none). It can't reach external services or exfiltrate data.

Infrastructure & deployment
Both the Next.js frontend and the Node.js API run on a single AWS EC2 instance behind Nginx. Nginx acts as a reverse proxy, routing /api/* to the Node backend and everything else to the Next.js server. PM2 keeps both processes alive and restarts them on crash.
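That routing maps to an Nginx config along these lines — a sketch only, with assumed upstream ports:

```nginx
# Sketch: ports and paths are assumptions, not the live config.
location /api/ {
    proxy_pass http://127.0.0.1:3001;  # Node/Express API
    proxy_http_version 1.1;
    proxy_buffering off;               # let SSE frames through immediately
    proxy_set_header Connection "";
}

location / {
    proxy_pass http://127.0.0.1:3000;  # Next.js server
}
```

Disabling proxy buffering for the API location complements the per-response X-Accel-Buffering header mentioned earlier.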
The Docker daemon runs on the host and the Node.js process has access to the Docker socket. This means the API can create, exec into, and destroy containers without needing a sidecar process — but it also means the API process itself must be trusted (it runs as a dedicated non-root user with socket access, not as root).
Visualization engine
All 9 algorithm visualizations are source code stored as JS strings in visualizationManager.js. Nothing is pre-compiled or cached on disk — every run writes the source fresh, compiles it (for C), and executes it inside the session's container.
- Bubble Sort — O(n²) compare & swap
- Selection Sort — min-scan passes
- Quick Sort — pivot partitioning
- BFS Pathfinding — queue-based flood fill
- DFS Pathfinding — recursive backtracking
# Write → Compile → Execute pipeline
echo "$b64" | base64 -d > bubble.c
gcc bubble.c -o bubble_app
./bubble_app   # stdout → SSE frames
- Conway's Game of Life — B3/S23 rules
- Mandelbrot Set — fractal depth chars
- Monte Carlo π — probabilistic estimation
- Maze Gen + Solver — DFS build / BFS solve
# Write → Execute (no compilation step)
echo "$b64" | base64 -d > maze_app.py
python3 ./maze_app.py   # stdout → SSE frames
Each program prints full frames to stdout. The terminal client receives them via SSE and replaces the last history slot in-place, creating a smooth animation with no DOM thrashing. Pressing Ctrl+C closes the EventSource on the client and sends a kill signal to the container process.
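The Ctrl+C path can be sketched as a small keyboard handler: close the EventSource locally, then notify the backend. The /api/kill endpoint name is an assumption:

```javascript
// Sketch of client-side Ctrl+C handling during a visualization.
// The /api/kill endpoint is hypothetical; `post` defaults to fetch.
function makeInterruptHandler(evtSource, sessionId, post = fetch) {
  return (event) => {
    if (!(event.ctrlKey && event.key === "c")) return false;
    evtSource.close(); // stop receiving SSE frames immediately
    post(`/api/kill?sessionId=${sessionId}`, { method: "POST" });
    return true; // handled
  };
}
```

Closing the stream before the kill request lands keeps the UI responsive even if the backend takes a moment to terminate the container process.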
Open source
The full source code is public on GitHub. Feel free to fork it, run it locally with Docker Compose, or adapt it for your own portfolio.
View on GitHub