These questions cover the 10 sections of the HTTP Connections & Real-Time Communication overview and are organized into 5 topic clusters. No answers are provided — the goal is to test your understanding of HTTP's request-response constraint, the four workaround techniques (polling, long polling, SSE, WebSockets), and the production realities of real-time web communication.
The questions mix conceptual understanding, technical reasoning, and practical decision-making.
Cluster 1: The Problem & Motivation (Sections 1–2)
The overview states HTTP is request-response only and "the server has no channel back to the client." Explain this constraint using the postal mail analogy. Why was this fine for HTTP's original purpose but insufficient for modern web applications?
The overview presents eight scenarios where request-response fails (chat, live scores, notifications, collaborative editing, progress indicators, multiplayer games, IoT dashboards, financial trading). For three scenarios of your choice, explain what data the server has that the client doesn't know about, and why waiting for the client to ask introduces unacceptable delay.
The overview warns that "a weather dashboard that updates every 5 minutes doesn't need WebSockets." Identify three scenarios from the table where real-time push is genuinely necessary and three where periodic request-response would be acceptable. What threshold distinguishes them?
The overview says "every technique on this page is a workaround for this single architectural constraint." What is that constraint? Why did HTTP's designers not build push capability into the original protocol? How does the Foundations Overview's client-server model discussion relate?
The companion page (State Management) covers statelessness while this page covers "connectionlessness." Explain the difference. Why does the overview say these are "the two constraints that define the challenges of building interactive applications on top of HTTP"?
Cluster 2: Polling & Long Polling (Sections 3–4)
Walk through the polling diagram: the client sends GET /updates every 5 seconds, gets "no change" twice, then "new data" on the third poll. Calculate the average and worst-case latency. Explain the tradeoff between latency and server load when reducing the polling interval.
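As a concrete reference while reasoning about the interval, here is a minimal client-side polling sketch; the /updates endpoint and its { changed, data } response shape are assumptions for illustration, not the overview's API:

```ts
// Minimal polling loop: ask the server every 5 seconds whether anything changed.
// The /updates endpoint and its { changed, data } response shape are assumed.
const POLL_INTERVAL_MS = 5_000;

async function pollOnce(): Promise<void> {
  const res = await fetch("/updates");
  const body: { changed: boolean; data?: unknown } = await res.json();
  if (body.changed) {
    render(body.data); // hypothetical UI update
  }
  // else: the server had nothing new, and this request was pure overhead
}

function render(data: unknown): void {
  console.log("new data:", data);
}

setInterval(() => {
  pollOnce().catch(console.error);
}, POLL_INTERVAL_MS);
```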
The overview warns: "10,000 users polling every 5 seconds = 2,000 requests/second of pure overhead." Derive this calculation. Then calculate the request rate for 50,000 users polling every 2 seconds. Why doesn't polling scale for real-time features with large user bases?
Compare the polling and long polling diagrams. In long polling, the server "holds the connection open" until it has data or a timeout expires. How does this eliminate wasted requests? What resource does the server consume while holding connections open?
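For contrast with the polling sketch above, a long-polling client loop, again assuming a hypothetical /updates endpoint that the server holds open until it has data or times out:

```ts
// Long polling: each request waits on the server until there is data or a
// server-side timeout, then the client immediately re-issues it.
// The /updates endpoint and its { timedOut, id, data } response shape are assumed.
let lastSeenId = "0";

async function longPollLoop(): Promise<void> {
  while (true) {
    try {
      const res = await fetch(`/updates?since=${lastSeenId}`);
      const body: { timedOut: boolean; id?: string; data?: unknown } = await res.json();
      if (!body.timedOut && body.data !== undefined) {
        lastSeenId = body.id ?? lastSeenId;
        render(body.data);
      }
      // No fixed interval and no empty "no change" responses: we loop immediately.
    } catch {
      await new Promise((r) => setTimeout(r, 1_000)); // proxy timeout or network error: brief backoff
    }
  }
}

function render(data: unknown): void {
  console.log("update:", data);
}

longPollLoop();
```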
The overview describes three limitations of long polling: each client holds an open connection, proxy timeouts (60–120s), and the "thundering herd" problem. Explain each. Why is the thundering herd particularly problematic for a popular event like a sports score update?
The overview notes long polling "powered early AJAX chat, the Comet pattern, and Facebook's original chat (circa 2008)." Why was it dominant from 2005–2012? What changed in browser support that made SSE and WebSockets viable alternatives?
Cluster 3: SSE & WebSockets (Sections 5–6)
Explain how SSE works at the protocol level: what Content-Type does the server send, what happens to the HTTP connection, and how does the EventSource API handle incoming data? Why does the overview call SSE "just HTTP with a response that never ends"?
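A minimal EventSource sketch to keep in mind while answering, assuming a hypothetical /events endpoint that responds with Content-Type: text/event-stream and never ends the response; the "score" event name is made up:

```ts
// SSE client: one long-lived HTTP response, parsed line-by-line by the browser's
// EventSource API. The /events URL and the "score" event name are illustrative.
const source = new EventSource("/events");

source.onmessage = (e: MessageEvent) => {
  console.log("default event:", e.data); // messages with no explicit "event:" field
};

source.addEventListener("score", (e) => {
  const msg = e as MessageEvent<string>;
  console.log("score event:", msg.data); // messages sent with "event: score"
});

source.onerror = () => {
  console.warn("connection lost; the browser retries automatically");
};
```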
The SSE message format has four fields: data:, event:, id:, retry:. Explain each. How do id: and the Last-Event-ID header provide built-in reconnection and resumption? Why is this a significant advantage over WebSockets?
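To make the four fields concrete, a server-side sketch using Node's built-in http module that writes one message in the SSE wire format; the id, event name, and payload are invented:

```ts
import { createServer } from "node:http";

// One SSE message is plain text lines terminated by a blank line.
const server = createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });

  // On reconnect the browser sends Last-Event-ID automatically, so the server
  // can resume the stream from where the client left off.
  const lastEventId = req.headers["last-event-id"];
  console.log("client resuming after id:", lastEventId ?? "(fresh connection)");

  res.write("retry: 5000\n\n");               // retry: reconnect delay hint, in ms
  res.write("id: 42\n");                      // id: this message's position in the stream
  res.write("event: score\n");                // event: a named event type
  res.write('data: {"home":2,"away":1}\n\n'); // data: the payload; the blank line ends the message
});

server.listen(3000);
```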
The overview warns about the "6-connection-per-domain limit on HTTP/1.1." What happens if a user opens your site in 6 tabs with SSE? How does HTTP/2 solve this? Why does the overview suggest a dedicated subdomain as an HTTP/1.1 workaround?
Describe the WebSocket handshake from the diagram: HTTP GET with Upgrade: websocket, 101 Switching Protocols response, then bidirectional frames. What are the four frame types? Why does the overview emphasize that after the upgrade "it is no longer HTTP"?
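A browser-side sketch for reference; the handshake shown in the comments is performed by the browser itself when the WebSocket constructor runs, and the wss://example.com/socket URL is hypothetical:

```ts
// The browser performs the upgrade handshake on your behalf:
//   GET /socket HTTP/1.1
//   Upgrade: websocket
//   Connection: Upgrade
//   Sec-WebSocket-Key: <random base64>
// The server replies "101 Switching Protocols", then frames flow both ways.
const ws = new WebSocket("wss://example.com/socket");

ws.onopen = () => {
  ws.send(JSON.stringify({ type: "join", room: "demo" })); // client -> server frame
};

ws.onmessage = (e: MessageEvent) => {
  console.log("server frame:", e.data); // server -> client frame, no request needed
};

ws.onclose = (e: CloseEvent) => {
  console.log("close frame received, code:", e.code);
};
```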
The overview says WebSockets are "the only browser API for true bidirectional communication" but warns they are "stateful" and "harder to scale." When do you need bidirectional communication (two examples)? Why does statefulness create scaling problems (sticky sessions, pub/sub bus)?
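A sketch of why per-connection state complicates scaling, assuming the Node ws package on the server: the connection registry lives in one process's memory, which is exactly what sticky sessions or a shared pub/sub bus have to work around:

```ts
import { WebSocketServer, WebSocket } from "ws";

// Every live connection lives in this process's memory. A load balancer that
// sends user A to server 1 and user B to server 2 means server 1 cannot push
// to B directly: hence sticky sessions or a shared pub/sub bus in production.
const clients = new Set<WebSocket>();

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  clients.add(socket);
  socket.on("close", () => clients.delete(socket));

  socket.on("message", (raw) => {
    // Bidirectional: the client can send at any time, and we can fan out to
    // every socket *this process* holds. Sockets on other servers are invisible.
    for (const peer of clients) {
      if (peer.readyState === WebSocket.OPEN) peer.send(raw.toString());
    }
  });
});
```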
Cluster 4: Choosing & Comparing (Section 7)
Using the overview's 12-row comparison table, compare all four techniques across direction, latency, and proxy/firewall compatibility. Why does polling "work everywhere" while WebSockets "may be blocked"?
Explain why SSE has "very low" client complexity (EventSource API) while WebSockets have "moderate" complexity. Then explain why WebSocket server complexity is "high" while polling server complexity is "very low." What is the relationship between capability and complexity?
Walk through the decision tree: Need bidirectional? → WebSockets. Server needs to push? → SSE. Data changes frequently (<30s)? → Long Polling. Else → Polling. Apply this to: a chat app, a notification badge, a sports dashboard, and a stock trading platform.
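The tree is mechanical enough to encode directly; this sketch only restates the four branches above as a function, and the Requirements shape is invented:

```ts
type Technique = "WebSockets" | "SSE" | "Long Polling" | "Polling";

interface Requirements {
  bidirectional: boolean;        // client and server both initiate messages
  serverPush: boolean;           // server must push without being asked
  updatesFasterThan30s: boolean; // data changes on a sub-30-second cadence
}

function choose(req: Requirements): Technique {
  if (req.bidirectional) return "WebSockets";
  if (req.serverPush) return "SSE";
  if (req.updatesFasterThan30s) return "Long Polling";
  return "Polling";
}
```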
The overview says "don't use WebSockets for a notification badge that updates every few minutes." Give two more examples where developers might reach for WebSockets when polling or SSE would suffice. Why is the simpler solution better?
The comparison table shows SSE has "built-in" auto-reconnect while WebSockets require "manual" logic. It shows SSE uses a "single persistent HTTP connection" while long polling is "held open, then re-established." Explain the practical reliability implications.
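What "manual" WebSocket reconnection looks like in practice, sketched with exponential backoff; EventSource gives you roughly this behaviour for free, which is the reliability point the table is making:

```ts
// Manual reconnection for WebSockets: track attempts, back off, and rebuild
// application state (subscriptions, auth) after every reconnect.
function connectWithRetry(url: string, attempt = 0): void {
  const ws = new WebSocket(url);

  ws.onopen = () => {
    attempt = 0; // reset the backoff once connected
    // re-subscribe / re-authenticate here: the server has no memory of this client
  };

  ws.onclose = () => {
    const delay = Math.min(30_000, 1_000 * 2 ** attempt); // 1s, 2s, 4s, ... capped at 30s
    setTimeout(() => connectWithRetry(url, attempt + 1), delay);
  };

  ws.onmessage = (e) => console.log("message:", e.data);
}

connectWithRetry("wss://example.com/socket"); // hypothetical endpoint
```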
Cluster 5: Production Realities & the Future (Sections 8–10)
The proxy/firewall table shows SSE "may buffer/break" through corporate proxies while WebSockets "may block upgrade." Explain the technical reason for each. How does HTTPS mitigate both?
The overview identifies three mobile challenges: NAT timeout (30–60s), network switching (WiFi-to-cellular), and battery drain. For each, explain the mechanism and recommended solution. Why does the overview recommend heartbeat pings every 15–30 seconds?
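A heartbeat sketch over a browser WebSocket: an application-level ping every 20 seconds (inside the overview's 15–30 second window) keeps NAT mappings warm and lets the client notice a dead connection; the ping/pong message shape is invented:

```ts
const HEARTBEAT_INTERVAL_MS = 20_000; // within the 15-30s window the overview recommends
const PONG_TIMEOUT_MS = 5_000;

// Sends an application-level ping on a timer; if no pong comes back in time,
// the connection is assumed dead and closed so reconnect logic can take over.
function startHeartbeat(ws: WebSocket): () => void {
  let pongTimer: ReturnType<typeof setTimeout> | undefined;

  const interval = setInterval(() => {
    ws.send(JSON.stringify({ type: "ping" })); // keeps NAT/proxy mappings from idling out
    pongTimer = setTimeout(() => ws.close(), PONG_TIMEOUT_MS);
  }, HEARTBEAT_INTERVAL_MS);

  ws.addEventListener("message", (e) => {
    try {
      if (JSON.parse(e.data).type === "pong") clearTimeout(pongTimer);
    } catch {
      // not a heartbeat message; ignore
    }
  });

  return () => clearInterval(interval); // call this when the socket closes
}
```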
Socket.IO "tries WebSocket first, detects failure, and falls back to long polling transparently." Why is this fallback important? What does "the application code doesn't change" mean architecturally? Explain "always have a degradation path" using the corporate network scenario.
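The degradation idea, sketched without Socket.IO's actual internals: try a WebSocket, and if it never opens, hand the same message callback to a long-polling loop so the application code sees one interface either way; every endpoint and name here is illustrative:

```ts
type OnMessage = (data: string) => void;

// Try WebSocket first; if the upgrade never succeeds (blocked by a corporate
// proxy, for example), fall back to long polling. The caller only ever sees
// onMessage, so application code does not change with the transport.
function connectRealtime(onMessage: OnMessage): void {
  const ws = new WebSocket("wss://example.com/socket"); // hypothetical endpoint
  let opened = false;

  ws.onopen = () => { opened = true; };
  ws.onmessage = (e) => onMessage(String(e.data));
  ws.onclose = () => {
    if (!opened) void startLongPolling(onMessage); // degradation path: plain HTTP still works
  };
}

async function startLongPolling(onMessage: OnMessage): Promise<void> {
  while (true) {
    try {
      const res = await fetch("/updates"); // hypothetical endpoint, held open server-side
      if (res.ok) onMessage(await res.text());
    } catch {
      await new Promise((r) => setTimeout(r, 2_000)); // brief pause before retrying
    }
  }
}
```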
The overview describes WebTransport, HTTP/2+3, GraphQL Subscriptions, and gRPC Streaming. For each, identify the specific WebSocket limitation it addresses. Why does the overview say "all are abstractions over the same core idea"?
The summary states that Connections (this page) solves "server can't reach the client" while State Management solves "server forgets." How do statelessness and connectionlessness work together to define real-time web app challenges? Why must a developer understand both to build a production chat application?