HTTP Connections & Real-Time Communication

When the Server Needs to Talk Back

HTTP is a request-response protocol. The client asks, the server answers. But what happens when the server has new data and the client hasn't asked? The server has no way to reach the client — it must sit and wait.

The State Management Overview covers how HTTP forgets (statelessness). This page covers the other half of the problem: HTTP can't initiate (connectionlessness). The server cannot push data to a client that isn't actively asking for it.

Every technique on this page — polling, long polling, Server-Sent Events, WebSockets — is a workaround for this single architectural constraint.

1. The Request-Response Constraint

HTTP's fundamental design is simple: the client sends a request, the server sends a response, and that's it. The server has no channel back to the client. It cannot say "hey, something changed" — it must wait until the client asks.

This was perfectly fine for HTTP's original purpose: fetching documents. A user clicks a link, the browser requests a page, the server returns it. There's no need for the server to reach out because the user drives every interaction.

But modern web applications aren't document viewers. They're chat apps, dashboards, collaborative editors, and live feeds. In these applications, the server frequently has new data that the client doesn't know about.

Client                                       Server
  │                                             │
  │──── GET /messages ────────────────────────>│
  │<──── 200 OK (3 messages) ──────────────────│
  │                                             │
  │   ... 10 seconds pass ...                   │
  │                                             │ ← New message arrives
  │                                             │   from another user!
  │                                             │
  │   Client has no idea.                       │ ← Server has no way
  │   Still showing 3 messages.                 │   to deliver it.
  │                                             │
  │   Only when client asks again:              │
  │──── GET /messages ────────────────────────>│
  │<──── 200 OK (4 messages) ──────────────────│
  │                                             │

Think of it like postal mail vs. a telephone. A telephone is a persistent, bidirectional connection — either side can speak at any time. HTTP is like postal mail: you send a letter (request), wait for a reply (response), and the post office has no way to deliver a letter you didn't ask for.

HTTP was designed for fetching documents, not for interactive applications. The server literally cannot send data to the client unless the client asks first. Every technique on this page is a workaround for this single constraint. See the Foundations Overview for more on the client-server model and "Polling vs Pushing."

2. When You Need More Than Request-Response

Request-response works fine when the user drives every interaction. It breaks down when the server has time-sensitive data that the client needs now.

Scenario Why Request-Response Fails What's Needed
Chat / messaging Messages arrive at server from other users; your client doesn't know Instant delivery of incoming messages
Live scores / tickers Scores change continuously; client shows stale data Continuous stream of updates
Notifications Server-side events (new email, friend request) happen unpredictably Push when event occurs
Collaborative editing Multiple users change the same document simultaneously Bidirectional, real-time sync
Progress indicators Server-side task running (file conversion, deploy); client wants updates Server pushes progress events
Multiplayer games Other players' actions must appear immediately Low-latency bidirectional communication
IoT / dashboards Sensor data streams in continuously Continuous data feed
Financial trading Sub-second price changes; stale data costs money Lowest possible latency updates

The common thread: the server knows something the client doesn't, and waiting for the client to ask introduces unacceptable delay.

If your data changes less often than every 30 seconds and users can tolerate stale data, plain request-response with a refresh button may be all you need. Don't over-engineer. A weather dashboard that updates every 5 minutes doesn't need WebSockets.

3. Polling

The simplest workaround: have the client ask repeatedly on a timer. If the server has new data, great. If not, it responds with "no change" and the client asks again later.

Client                                       Server
  │                                             │
  │──── GET /updates ─────────────────────────>│
  │<──── 200 OK (no change) ───────────────────│
  │                                             │
  │   ... wait 5 seconds ...                    │
  │                                             │
  │──── GET /updates ─────────────────────────>│
  │<──── 200 OK (no change) ───────────────────│
  │                                             │
  │   ... wait 5 seconds ...                    │
  │                                             │ ← New data arrives!
  │──── GET /updates ─────────────────────────>│
  │<──── 200 OK (new data!) ───────────────────│
  │                                             │
  │   Latency: 0 to 5 seconds                   │
  │   (client might ask just before or          │
  │    just after the data arrives)             │
// Client-side polling with setInterval + fetch
let lastTimestamp = Date.now();

async function pollForUpdates() {
    try {
        const response = await fetch('/api/updates?since=' + lastTimestamp);
        const data = await response.json();
        if (data.updates.length > 0) {
            renderUpdates(data.updates);
            lastTimestamp = data.timestamp;
        }
    } catch (err) {
        console.error('Poll failed:', err);
    }
}

// Poll every 5 seconds
setInterval(pollForUpdates, 5000);

Tradeoffs

When polling is fine: Dashboards refreshing every 30–60 seconds, status checks for long-running jobs, low-user-count internal tools, or any scenario where a few seconds of staleness is acceptable.

Polling math matters. 10,000 users polling every 5 seconds = 2,000 requests/second of pure overhead, most returning "no change." This is why polling doesn't scale for real-time features with large user bases.
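
The arithmetic above is worth making explicit. A throwaway helper (purely illustrative, not part of any API) shows how much lengthening the interval buys you:

```javascript
// Aggregate request rate generated by clients polling on a fixed interval.
function pollRequestRate(users, intervalSeconds) {
    return users / intervalSeconds; // requests per second, mostly "no change"
}

pollRequestRate(10000, 5);  // 2000 requests/second
pollRequestRate(10000, 60); // ~167 requests/second, if 1-minute staleness is acceptable
```

The linear relationship is the whole story: halving the polling frequency halves the server load, which is why the interval is the first knob to turn before reaching for a fancier technique.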

4. Long Polling

Long polling inverts the approach: instead of the server responding immediately with "no change," it holds the connection open until it has something to say (or a timeout expires). The client gets near-instant notification when events occur.

Client                                       Server
  │                                             │
  │──── GET /updates (long poll) ─────────────>│
  │                                             │   Server holds connection
  │   ... waiting ...                           │   open. No response yet.
  │                                             │
  │                                             │ ← Event occurs!
  │<──── 200 OK (new data!) ───────────────────│   Server responds immediately
  │                                             │
  │──── GET /updates (re-poll) ───────────────>│   Client re-requests instantly
  │                                             │   Server holds again...
  │   ... waiting ...                           │
  │                                             │
  │                                             │─ timeout!
  │<──── 200 OK (no change, timeout) ──────────│   After 30-60s, respond anyway
  │                                             │
  │──── GET /updates (re-poll) ───────────────>│   Client re-requests
  │                                             │

The key difference from regular polling: when an event occurs, the client learns about it almost immediately because the server is already holding an open connection waiting to respond. There are fewer wasted "no change" responses because the server only responds when it has data (or on timeout).

// Client-side long polling with recursive async function
async function longPoll() {
    try {
        const response = await fetch('/api/updates?since=' + lastTimestamp, {
            // Server will hold this for up to 30 seconds
            signal: AbortSignal.timeout(35000)
        });
        const data = await response.json();
        if (data.updates.length > 0) {
            renderUpdates(data.updates);
            lastTimestamp = data.timestamp;
        }
    } catch (err) {
        // Wait a bit before retrying on error
        await new Promise(resolve => setTimeout(resolve, 2000));
    }
    // Immediately re-poll
    longPoll();
}

longPoll(); // Start the loop
// Server-side concept (pseudocode)
app.get('/api/updates', async (req, res) => {
    const since = req.query.since;
    const timeout = 30000; // 30 seconds

    // Check immediately
    let updates = await getUpdatesSince(since);
    if (updates.length > 0) {
        return res.json({ updates, timestamp: Date.now() });
    }

    // Wait for new data or timeout
    const result = await waitForUpdatesOrTimeout(since, timeout);
    res.json(result);
});

Long polling is still just HTTP — it works through firewalls, proxies, and load balancers with no special configuration. Each request is a normal HTTP request; the only difference is the server takes longer to respond.

Limitations

Each waiting client ties up a connection (and its memory) on the server for the full hold duration, so ten thousand idle clients means ten thousand open connections. Events that arrive in the brief gap between one response and the client's next request are delayed until that request lands. And because every delivery still costs a full HTTP request/response cycle, long polling carries more per-message overhead than a true streaming connection.

Historical note: Long polling powered early AJAX chat applications, the "Comet" pattern, and Facebook's original chat system (circa 2008). It was the dominant real-time technique from roughly 2005–2012, before Server-Sent Events and WebSockets gained broad browser support. It remains attractive because it's just regular HTTP: no special protocols, no firewall issues. Many production systems still use it as a fallback when WebSockets or SSE aren't available.

5. Server-Sent Events (SSE)

Server-Sent Events is a standardized HTTP streaming protocol: the server pushes events over a single long-lived HTTP connection. Unlike polling and long polling, SSE is a formal standard with built-in browser support via the EventSource API.

The key insight: SSE is just HTTP with Content-Type: text/event-stream and the connection held open. The server writes events to the response body one at a time, and the browser reads them as they arrive. The response never "finishes."

SSE is unidirectional: server → client only. The client still uses regular HTTP requests (via fetch) to send data to the server.

Client                                       Server
  │                                             │
  │──── GET /events ──────────────────────────>│
  │      Accept: text/event-stream              │
  │                                             │
  │<──── 200 OK ───────────────────────────────│
  │      Content-Type: text/event-stream        │
  │      Connection kept open                   │
  │                                             │
  │<──── data: {"user":"alice","msg":"hi"}      │   Event 1
  │                                             │
  │<──── data: {"user":"bob","msg":"hello"}     │   Event 2
  │                                             │
  │   ... minutes pass ...                      │
  │                                             │
  │<──── data: {"user":"alice","msg":"bye"}     │   Event 3
  │                                             │
  │   Connection stays open indefinitely        │
  │   Server writes whenever it has data        │

Browser API: EventSource

// Client: EventSource — built-in, no library needed
const source = new EventSource('/api/events');

source.onmessage = (event) => {
    const data = JSON.parse(event.data);
    renderMessage(data);
};

// Listen for named event types
source.addEventListener('notification', (event) => {
    showNotification(JSON.parse(event.data));
});

source.onerror = (err) => {
    console.error('SSE connection error:', err);
    // Browser automatically reconnects!
};
// Server: Node.js Express SSE endpoint
app.get('/api/events', (req, res) => {
    // Set SSE headers
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });

    // Send an event whenever data changes
    const onNewMessage = (msg) => {
        res.write(`data: ${JSON.stringify(msg)}\n\n`);
    };

    // Subscribe to data changes
    messageEmitter.on('message', onNewMessage);

    // Clean up when client disconnects
    req.on('close', () => {
        messageEmitter.off('message', onNewMessage);
    });
});

SSE Message Format

SSE messages are plain text with a simple line-based format:

Field Purpose Example
data: The event payload (can span multiple lines) data: {"score": 42}
event: Named event type (triggers specific listeners) event: notification
id: Event ID for reconnection (browser sends Last-Event-ID) id: 1001
retry: Reconnection interval in milliseconds retry: 5000
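
Combining the fields, a small helper (hypothetical, for illustration) can emit one complete event in this wire format:

```javascript
// Build the wire format for one SSE event. A blank line terminates the event.
function formatSSEEvent({ id, event, data }) {
    let frame = '';
    if (id !== undefined) frame += `id: ${id}\n`;
    if (event !== undefined) frame += `event: ${event}\n`;
    // A multi-line payload becomes multiple data: fields; the browser
    // rejoins them with newlines before firing the event.
    for (const line of String(data).split('\n')) {
        frame += `data: ${line}\n`;
    }
    return frame + '\n'; // blank line = end of event
}

formatSSEEvent({ id: 1001, event: 'notification', data: '{"score": 42}' });
// "id: 1001\nevent: notification\ndata: {\"score\": 42}\n\n"
```

Note the trailing blank line: that is what separates events on the wire, which is why the server example above ends each write with `\n\n`.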

Built-in reconnection: If the connection drops, the browser automatically reconnects and sends the Last-Event-ID header. The server can use this to resume from where it left off. No reconnection logic needed on the client.
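
A sketch of the server side of that resume, assuming events are kept in an in-memory log (the `replayFrames` helper and the log shape are assumptions for illustration, not a real API):

```javascript
// Build the frames to replay for a client reconnecting with Last-Event-ID.
// `events` stands in for a hypothetical in-memory log of { id, payload }.
function replayFrames(events, lastEventId) {
    return events
        .filter(evt => evt.id > lastEventId)        // only what the client missed
        .map(evt => `id: ${evt.id}\ndata: ${JSON.stringify(evt.payload)}\n\n`)
        .join('');
}
```

On reconnection the server would read `req.headers['last-event-id']`, write `replayFrames(log, lastId)` first, and then continue streaming live events — the client never notices the gap.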

SSE is HTTP's built-in push mechanism. No upgrade, no new protocol — just a response that never ends. The browser's EventSource API handles reconnection automatically, including resuming from the last event ID. For server-to-client push, SSE is often all you need.
The 6-connection-per-domain limit on HTTP/1.1 is a real problem for SSE. Browsers allow only six simultaneous connections to a single domain over HTTP/1.1, and each open EventSource permanently occupies one. If a user opens your site in six tabs, all six connections are consumed and further requests to that domain stall. Use HTTP/2 (which multiplexes many streams over one TCP connection) or serve SSE from a dedicated subdomain.

6. WebSockets

WebSockets are a separate protocol that starts as an HTTP request and then upgrades to a full-duplex, bidirectional communication channel. After the handshake, it is no longer HTTP — it's a different protocol with different framing and semantics.

Client                                       Server
  │                                             │
  │──── HTTP Request ─────────────────────────>│
  │      GET /chat HTTP/1.1                     │
  │      Upgrade: websocket                     │
  │      Connection: Upgrade                    │
  │      Sec-WebSocket-Key: dGhlIHNh...         │
  │                                             │
  │<──── HTTP Response ────────────────────────│
  │      HTTP/1.1 101 Switching Protocols       │
  │      Upgrade: websocket                     │
  │      Sec-WebSocket-Accept: s3pPL...         │
  │                                             │
  │  ══════════════════════════════════════    │
  │    WebSocket connection established         │
  │    Full-duplex, bidirectional               │
  │  ══════════════════════════════════════    │
  │                                             │
  │──── "Hello from client" ──────────────────>│
  │<──── "Hello from server" ──────────────────│
  │<──── "New message from Bob" ───────────────│
  │──── "Thanks, got it" ─────────────────────>│
  │                                             │
  │   Either side can send at any time          │
  │                                             │
  │──── Close frame ──────────────────────────>│
  │<──── Close frame ──────────────────────────│
  │   Connection closed                         │

After the 101 Switching Protocols response, the connection is no longer HTTP. Data flows as WebSocket frames — lightweight binary envelopes that carry text or binary payloads in either direction.

// Client: WebSocket API — built-in, no library needed
const ws = new WebSocket('wss://example.com/chat');

ws.onopen = () => {
    console.log('Connected');
    ws.send(JSON.stringify({ type: 'join', room: 'general' }));
};

ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    renderMessage(msg);
};

ws.onclose = (event) => {
    console.log('Disconnected:', event.code, event.reason);
    // Reconnect logic goes here
};

ws.onerror = (err) => {
    console.error('WebSocket error:', err);
};
// Server: Node.js with 'ws' library
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

const clients = new Set();

wss.on('connection', (ws) => {
    clients.add(ws);

    ws.on('message', (data) => {
        const msg = JSON.parse(data);
        // Broadcast to all other clients
        for (const client of clients) {
            if (client !== ws && client.readyState === WebSocket.OPEN) {
                client.send(JSON.stringify(msg));
            }
        }
    });

    ws.on('close', () => {
        clients.delete(ws);
    });
});

WebSocket Frame Types

After the upgrade, all data travels in frames. The main frame types defined by the WebSocket protocol:

Frame Type Purpose
Text (opcode 0x1) UTF-8 text payload, e.g. JSON messages
Binary (opcode 0x2) Raw binary payload, e.g. game state or media
Ping (0x9) / Pong (0xA) Keepalive and liveness checks, usable by either side
Close (0x8) Graceful shutdown, optionally with a status code and reason
Continuation (0x0) Carries the rest of a message fragmented across frames

When WebSockets shine: Chat applications, multiplayer games, collaborative editing (Google Docs), live trading platforms — anywhere both sides send data frequently and unpredictably.

Cross-reference: The HTTP Overview covers status code 101 Switching Protocols. The URL Overview covers the wss:// scheme.

WebSockets are the only browser API that provides true bidirectional communication. If your application needs the client AND server to send messages at any time without the other asking, WebSockets are the right tool. For server-to-client only, SSE is simpler.
WebSocket connections are stateful. If the server crashes, every connected client is disconnected and must reconnect. This makes WebSockets harder to deploy behind load balancers (you need sticky sessions or a pub/sub bus like Redis to broadcast across server instances) and harder to scale horizontally compared to stateless HTTP.

7. Choosing the Right Approach

Aspect Polling Long Polling SSE WebSockets
Direction Client → Server Client → Server Server → Client Bidirectional
Protocol HTTP HTTP HTTP WebSocket (after HTTP upgrade)
Connection New request each time Held open, then re-established Single persistent HTTP connection Single persistent WS connection
Latency 0 to N seconds (avg N/2) Near-instant Near-instant Near-instant
Server load High (constant requests) Moderate (held connections) Low (one connection per client) Low (one connection per client)
Bandwidth waste High (HTTP overhead each poll) Moderate (fewer empty responses) Low (lightweight text frames) Very low (minimal frame overhead)
Client complexity Very low Low Very low (EventSource API) Moderate (reconnection, state)
Server complexity Very low Moderate (hold connections) Moderate (streaming response) High (connection management)
Auto-reconnect N/A (new request each time) Manual (client re-polls) Built-in (EventSource) Manual (must implement)
Proxy/firewall Works everywhere Works everywhere Usually works (HTTP) May be blocked (upgrade)
Scalability Poor at high frequency Moderate Good Good (but stateful)
Best for Low-frequency checks, simple dashboards Moderate real-time, fallback Server push: notifications, feeds, progress Chat, games, collaboration, trading

Decision Tree

Need bidirectional communication?
(Both client AND server send frequently)
  │
  ├── YES ──→ WebSockets
  │           (chat, games, collaboration)
  │
  NO
  │
Server needs to push data to client?
  │
  ├── YES ──→ Server-Sent Events (SSE)
  │           (notifications, live feeds, progress)
  │
  NO
  │
Data changes frequently (every <30 seconds)?
  │
  ├── YES ──→ Long Polling
  │           (near-real-time without special protocol)
  │
  NO ──→ Regular Polling
         (dashboard refresh, status checks)

Rule of thumb: Start with the simplest approach that meets your requirements. Upgrade when you hit its limits, not before.

The best technique is the simplest one that meets your requirements. Polling is not shameful — it's battle-tested and works everywhere. SSE handles most server-push scenarios elegantly. Reserve WebSockets for true bidirectional needs. Don't use WebSockets for a notification badge that updates every few minutes.

8. Implementation Realities

Tutorials make every technique look clean. Production is messier. Here are the real-world problems that don't appear in code samples.

Proxies and Firewalls

Between your client and server sit proxies, CDNs, load balancers, and firewalls. Each one can interfere with persistent connections.

Technique Corporate Proxy CDN Mobile Carrier Load Balancer
Polling Works Works (cacheable) Works Works
Long Polling Works (may timeout) Works (not cacheable) Works (may timeout) Works
SSE May buffer/break May buffer May buffer Works (needs config)
WebSockets May block upgrade Requires WS support Usually works Needs sticky sessions

Corporate proxies are especially problematic. Many perform SSL inspection (man-in-the-middle), buffer streaming responses, or block the Upgrade header entirely. Some deep packet inspection firewalls strip unfamiliar headers.

Connection Limits and Scaling

Every persistent technique on this page (long polling, SSE, WebSockets) holds one connection open per client, and each open connection consumes a file descriptor and memory on the server. A server tuned for short-lived requests may struggle to hold tens of thousands of connections open at once, so real-time backends need raised connection limits and often dedicated processes. On the browser side, HTTP/1.1's 6-connection-per-domain limit caps how many persistent streams a user's tabs can hold (see the SSE section above).

Mobile Networks

Mobile networks introduce challenges that don't exist on desktop:

NAT timeouts: carrier NATs silently drop connections that sit idle, killing "persistent" connections that aren't sending traffic.
Network switching: moving between Wi-Fi and cellular changes the device's IP address, dropping every open connection.
Battery drain: holding a connection open keeps the radio active and costs power.

Solutions: Send heartbeat/ping frames every 15–30 seconds to keep connections alive through NATs. Implement auto-reconnect with exponential backoff (wait 1s, then 2s, then 4s, then 8s) to avoid hammering the server after a network change.
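
The backoff schedule described above fits in a small helper. This is a sketch: `backoffDelay` and `connectWithBackoff` are illustrative names, and a production version would also cap total retries and add random jitter so reconnecting clients don't stampede in lockstep:

```javascript
// Delay before the nth reconnection attempt: 1s, 2s, 4s, 8s, ... capped.
function backoffDelay(attempt, base = 1000, maxDelay = 30000) {
    return Math.min(base * 2 ** attempt, maxDelay);
}

// Browser-side auto-reconnect loop (assumes the global WebSocket API).
function connectWithBackoff(url) {
    let attempt = 0;
    (function open() {
        const ws = new WebSocket(url);
        ws.onopen = () => { attempt = 0; };  // healthy again: reset the schedule
        ws.onclose = () => setTimeout(open, backoffDelay(attempt++));
    })();
}
```

Resetting `attempt` on a successful open matters: without it, a client that reconnects fine after a brief network blip would still wait the full capped delay on its next disconnect.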

Fallback Strategies

Libraries like Socket.IO handle these problems automatically: they try WebSocket first, detect when it fails (proxy blocking the upgrade), and fall back to long polling — all transparently. The application code doesn't change.

Always have a degradation path. If your real-time feature requires WebSockets and WebSockets are blocked, users get nothing. If you fall back to long polling, users get a slower but functional experience.

Corporate networks are where your real-time features go to die. Aggressive proxies, packet inspection firewalls, and connection limits can break SSE and WebSockets. Always test your application behind a corporate proxy and have a polling fallback ready.

9. Beyond WebSockets

WebSockets solved the bidirectional problem, but they're not the final answer. Several newer technologies address WebSocket limitations or provide higher-level abstractions.

WebTransport: Built on HTTP/3 (QUIC). Provides bidirectional communication like WebSockets, but with multiple independent streams and better loss recovery. A dropped packet on one stream doesn't block others (unlike TCP-based WebSockets). Still experimental, but promising for latency-sensitive applications like gaming.

HTTP/2 and HTTP/3 improvements: HTTP/2 multiplexes many streams over a single TCP connection, eliminating the 6-connection-per-domain limit that plagues SSE on HTTP/1.1. HTTP/3 (built on QUIC/UDP) adds 0-RTT connection setup and removes TCP head-of-line blocking. These improvements make SSE much more practical for server push.

GraphQL Subscriptions: A query language feature that uses WebSockets underneath. You write a subscription query (subscription { newMessage { text, author } }), and the server pushes matching data in real-time. It's an abstraction layer — the transport is still WebSockets (or SSE).

gRPC Streaming: Google's RPC framework supports server streaming, client streaming, and bidirectional streaming over HTTP/2. Common in microservice backends where services need to stream data to each other. Less common in browser-to-server communication (though grpc-web exists).

All of these are abstractions over the same core idea: a long-lived connection where the server can send data without being asked. The protocol details differ, but the motivation is identical to what drives polling, SSE, and WebSockets. Understanding the fundamentals on this page prepares you for any of these technologies.

10. Summary

Concept Key Takeaway
The Problem HTTP is request-response only. The server cannot contact the client — it must wait to be asked. Every real-time technique is a workaround for this constraint.
Polling Client asks repeatedly on a timer. Simple and universal, but wastes bandwidth and adds latency. Fine for low-frequency updates.
Long Polling Server holds the connection open until it has data. Near-instant delivery, still just HTTP, but each client holds a connection open on the server.
Server-Sent Events Standardized HTTP streaming. Server pushes events over a single connection. Built-in browser reconnection via EventSource. Unidirectional (server → client).
WebSockets Full-duplex bidirectional protocol. Starts as HTTP, upgrades to WS. The only browser API for true two-way communication. Stateful and harder to scale.
Choosing Start simple. Polling for infrequent checks. SSE for server push. WebSockets only when both sides send data frequently. Don't over-engineer.
Infrastructure Proxies buffer SSE, firewalls block WebSocket upgrades, CDNs need configuration. Polling and long polling work everywhere. Always have a fallback.
Mobile NAT timeouts kill idle connections. Network switching drops everything. Battery suffers from persistent connections. Use heartbeats and exponential backoff.
Future WebTransport, HTTP/3, GraphQL Subscriptions, and gRPC Streaming are all variations on the same theme: letting the server send data without being asked.

This page solves the "server can't reach the client" problem. The State Management Overview solves the parallel problem: "the server forgets." Together, these two constraints — statelessness and connectionlessness — define the challenges of building interactive applications on top of HTTP.