Execution Models

How Servers Run Your Code

"Web servers were built to serve static files. Dynamic content — shopping carts, logins, database queries — requires the server to run code. The question is: how does that code execute?"

CSE 135 — Full Overview | Review Questions

Section 1: The Challenge

Web servers serve files — but dynamic content requires running code.

Static vs Dynamic: The "???" Question

Static File Serving:

┌─────────┐   GET /page.html   ┌─────────┐    read file     ┌──────────┐
│ Browser │ ─────────────────▶ │ Apache  │ ───────────────▶ │ page.html│
│         │ ◀───────────────── │         │ ◀─────────────── │          │
└─────────┘   HTML response    └─────────┘   file contents  └──────────┘

Dynamic Content:

┌─────────┐   GET /cart.php    ┌─────────┐       ???        ┌──────────┐
│ Browser │ ─────────────────▶ │ Apache  │ ───────────────▶ │ PHP Code │
│         │ ◀───────────────── │         │ ◀─────────────── │          │
└─────────┘   HTML response    └─────────┘    HTML output   └──────────┘

How does this happen?
Web servers were built for static files. Dynamic content requires an execution model — a mechanism for the server to invoke code, pass it request data, and capture the output as an HTTP response.

Section 2: The Execution Models

Four approaches, each with different trade-offs.

Overview of All Four Models

Model              | How It Works                                 | Examples                            | Era
-------------------|----------------------------------------------|-------------------------------------|------
Fork-Exec CGI      | Server spawns a new process for each request | Perl, C programs                    | 1993
Server Module      | Interpreter embedded in the server process   | mod_php, mod_perl                   | ~1997
Application Server | Persistent process handles many requests     | Node.js, Python WSGI, Java Servlets | ~2009
Serverless / FaaS  | Cloud platform manages execution             | AWS Lambda, Cloudflare Workers      | 2014
Each model makes different trade-offs between simplicity, performance, isolation, and resource usage. None is universally "best" — they coexist based on requirements.

Section 3: Fork-Exec CGI

The original (1993) solution — a new process for every request.

CGI Execution Flow

Request ──▶ Apache ──▶ fork() ──▶ ┌──────────────────────────────────────┐
                                  │ Child Process                        │
                                  │   exec("/cgi-bin/script.pl")         │
                                  │   Read ENV: QUERY_STRING,            │
                                  │             REQUEST_METHOD,          │
                                  │             CONTENT_LENGTH           │
                                  │   Execute script logic               │
                                  │   Print headers + body to stdout     │
                                  │   exit(0)                            │
                                  └──────────────────────────────────────┘
                                                    │
                                                    ▼
                                           Response to client

Request 2 ──▶ fork() ──▶ [New process, starts completely fresh]
Request 3 ──▶ fork() ──▶ [New process, starts completely fresh]
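The server's side of this flow can be sketched in a few lines of Python. This is an illustration, not Apache's actual implementation: the parent spawns a fresh child process per request, passes the request data through environment variables, and captures stdout as the response. The one-liner child program here is a stand-in for a real /cgi-bin script.

```python
import os
import subprocess
import sys

def run_cgi(query_string):
    # Stand-in for exec("/cgi-bin/script.pl"): a tiny script that reads
    # its input from the environment, exactly as CGI specifies.
    child = [sys.executable, "-c",
             "import os; print('Content-Type: text/plain\\n'); "
             "print('query was', os.environ['QUERY_STRING'])"]
    # subprocess.run performs the fork()+exec() pair under the hood;
    # the env dict carries the request data into the child.
    result = subprocess.run(child, capture_output=True, text=True,
                            env={**os.environ, "QUERY_STRING": query_string})
    return result.stdout  # headers + blank line + body, from stdout
```

Every call pays the full process-startup cost, which is exactly the overhead behind the ~100 req/s ceiling noted below.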

Pros

  • Complete isolation between requests
  • Any language works (Perl, C, Python, shell)
  • Simple to understand and debug
  • Crash affects only one request

Cons

  • High overhead — fork()+exec() every request
  • No shared state between requests
  • ~100 req/s ceiling under load
  • Interpreter reloads every time
CGI: Complete isolation but high overhead — a new process for every request. The original web programming model.
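From the script's point of view, CGI is just environment variables in and stdout out. A minimal sketch in Python (one of the "any language" options; the markup and greeting are illustrative):

```python
#!/usr/bin/env python3
import os
import sys

def handle_request(environ):
    # CGI delivers the request through env vars, not function arguments.
    method = environ.get("REQUEST_METHOD", "GET")
    query = environ.get("QUERY_STRING", "")
    body = f"<h1>Hello from CGI</h1><p>{method} ?{query}</p>"
    # Headers first, then a blank line, then the body -- all on stdout.
    return f"Content-Type: text/html\r\n\r\n{body}"

if __name__ == "__main__":
    sys.stdout.write(handle_request(os.environ))
```

Because the process exits after one response, nothing the script computes survives to the next request.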

Section 4: Server Module

Embed the interpreter inside the web server — no fork overhead.

mod_php: Embedded Interpreter

┌──────────────────────────────────────────────────────────┐
│ Apache Process                                           │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ mod_php (embedded PHP)                               │ │
│ │                                                      │ │
│ │ Request 1 ──▶ Parse cart.php ──▶ Execute ──▶ Output  │ │
│ │ Request 2 ──▶ Parse user.php ──▶ Execute ──▶ Output  │ │
│ │ Request 3 ──▶ Parse cart.php ──▶ Execute ──▶ Output  │ │
│ └──────────────────────────────────────────────────────┘ │
│                                                          │
│ (Same process handles many requests, no fork overhead)   │
└──────────────────────────────────────────────────────────┘

Pros

  • Much faster — no fork per request
  • Opcode caching avoids re-parsing
  • ~1,000s req/s throughput
  • Drop files to deploy (like CGI)

Cons

  • Language tied to server (PHP only in mod_php)
  • Shared memory — security risk on shared hosting
  • Crashes can affect server process
  • Apache loaded with interpreter even for static files
mod_php: No fork overhead, but interpreter is tied to the server. Shared memory creates security risks on shared hosting.
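The embedded-interpreter idea can be mimicked in Python terms. This is an analogy, not mod_php internals: the "interpreter" lives inside the long-running server process, and parsed code is cached so repeat requests skip re-parsing, much like PHP opcode caching. The script names and contents are made up.

```python
CODE_CACHE = {}  # analogue of an opcode cache: parse once, reuse forever

SCRIPTS = {  # stand-ins for cart.php / user.php sitting on disk
    "cart.py": "output = f'cart for {user}'",
    "user.py": "output = f'profile of {user}'",
}

def handle(script, user):
    if script not in CODE_CACHE:
        # First hit: parse/compile the script (the expensive step)...
        CODE_CACHE[script] = compile(SCRIPTS[script], script, "exec")
    scope = {"user": user}
    # ...every hit: execute the cached bytecode in the server process.
    exec(CODE_CACHE[script], scope)
    return scope["output"]
```

The flip side is visible in the same sketch: all scripts share one process and one cache, which is the shared-memory risk the cons list describes.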

Section 5: Application Server

The application IS the server — a persistent process handling all requests.

Nginx + Node.js: Reverse Proxy Pattern

┌──────────────────────┐                    ┌─────────────────────────────┐
│ Nginx (port 80)      │   reverse proxy    │ Node.js Process             │
│                      │ ─────to :3000────▶ │                             │
│ Handles:             │                    │ const app = express();      │
│  - SSL/TLS           │                    │ app.get('/', (req, res) => {│
│  - Static files      │                    │   res.send('Hello');        │
│  - Gzip compression  │                    │ });                         │
│  - Rate limiting     │                    │ app.listen(3000);           │
│  - Load balancing    │                    │                             │
│                      │                    │ [Runs continuously]         │
│                      │                    │ [Handles many requests]     │
│                      │                    │ [Maintains state in memory] │
└──────────────────────┘                    └─────────────────────────────┘

Pros

  • Excellent performance — ~10,000s req/s
  • In-memory state — caching, counters
  • WebSocket support built in
  • Full control over request handling

Cons

  • Process management required (PM2, systemd)
  • One crash kills ALL active connections
  • Needs reverse proxy for production
  • Memory leaks accumulate over time
App servers run continuously and maintain state in memory — but one unhandled exception kills all connections. Use PM2 or cluster mode to mitigate.
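The same persistent-process pattern can be sketched with Python's standard library (the lecture's example uses Node.js/Express; `http.server` stands in for the same idea). One long-lived process serves every request, so ordinary in-memory variables act as shared state:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

HITS = {"count": 0}  # in-memory state -- impossible under fork-per-request CGI

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        HITS["count"] += 1  # survives across requests in the same process
        body = f"Hello, visitor #{HITS['count']}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# To run continuously (behind a reverse proxy like Nginx in production):
#   HTTPServer(("127.0.0.1", 3000), Handler).serve_forever()
```

The counter is also a reminder of the blast radius: an unhandled exception in this one process takes the counter, and every open connection, down with it.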

Section 6: Comparison

Side-by-side trade-offs across the first three models.

Comparison Table

Aspect                 | CGI                      | Module (mod_php)  | App Server (Node.js)
-----------------------|--------------------------|-------------------|---------------------
Startup per request    | Full process (fork+exec) | None              | None
Memory isolation       | Complete                 | Partial           | None
State between requests | Impossible               | Limited           | Easy (in-memory)
Requests per second    | ~100s                    | ~1,000s           | ~10,000s
Crash impact           | One request              | One Apache worker | ALL requests
Deployment             | Drop files               | Drop files        | Process manager
CGI: ~100 req/s. Module: ~1,000s. App server: ~10,000s. But crash impact is inversely proportional to performance — more sharing means higher throughput but greater blast radius.

Section 7: Working Demos

See each execution model in action with live examples.

Demo Links by Language

Perl — Traditional CGI

  • Hello World (HTML & JSON)
  • Environment Variables
  • GET/POST Echo
  • Session Demo

C — Compiled CGI

  • Hello World (HTML & JSON)
  • Environment Variables
  • GET/POST/General Echo

PHP — Server Module (mod_php)

  • Hello World (HTML & JSON)
  • Environment Variables
  • GET/POST Echo
  • Session Demo
See the full overview for live demo links. All demos show the same operations implemented in different languages and execution models.

Section 8: Historical Context

The evolution of server-side execution reflects the web's growth.

Timeline: 1993 – 2014

  • 1993 — CGI specification published. Perl becomes the "duct tape of the internet."
  • 1995 — PHP created, initially as CGI scripts, later as Apache module.
  • 1997 — mod_perl brings Perl into Apache's process space.
  • 2009 — Node.js introduces event-driven JavaScript on the server.
  • 2014 — AWS Lambda launches, popularizing serverless / FaaS.
Each model didn't replace the previous one entirely — they coexist. CGI is still used for low-traffic admin scripts. mod_php powers WordPress. Node.js dominates real-time apps. Serverless handles bursty workloads.

Summary: Key Takeaways

8 sections of execution model fundamentals in one table.

Execution Models at a Glance

Concept            | Key Points
-------------------|-----------------------------------------------------------------------------------------------------------------------------
The Challenge      | Web servers serve static files. Dynamic content requires an execution model to invoke code and capture output.
The Models         | Four approaches: CGI (fork-exec), Server Module (embedded), Application Server (persistent), Serverless (cloud-managed).
CGI                | New process per request. Complete isolation, any language, but high overhead (~100 req/s). The original 1993 model.
Server Module      | Interpreter inside the server (mod_php). No fork overhead, ~1,000s req/s, but language tied to server and shared memory risks.
Application Server | Persistent process (Node.js). ~10,000s req/s, in-memory state, but crash kills all connections. Needs reverse proxy.
Comparison         | Performance scales up (CGI → Module → App Server) but crash impact scales inversely. More sharing = more throughput but greater blast radius.
Demos              | Perl and C for CGI, PHP for mod_php — same operations in different languages and execution models.
History            | CGI (1993) → mod_perl (1997) → Node.js (2009) → Lambda (2014). Each solved the previous model's bottleneck. All coexist.