HTTP/1.1 — The Foundation
HTTP/1.1 is the text-based, request-response protocol that defined the web — simple, debuggable, and still everywhere.
The Problem
How to create a stateless, text-based protocol flexible enough to power everything from static pages to real-time APIs, while remaining debuggable with basic tools.
Mental Model
Like ordering at a counter: one order at a time, wait for it, then order again.
How It Works
HTTP/1.1 is a text-based, stateless, request-response protocol running over TCP. Every interaction follows the same rhythm: the client sends a request, the server processes it, and sends back a response. That simplicity is both its greatest strength and its biggest limitation.
A request consists of three parts: a request line (GET /api/users HTTP/1.1), headers (key-value pairs like Host, Accept, Authorization), and an optional body (for POST/PUT/PATCH). The response mirrors this with a status line (HTTP/1.1 200 OK), headers, and a body.
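Because the wire format is plain text, the three-part structure can be pulled apart with a few lines of code. The following is a minimal sketch of a request parser (the header names and request target are illustrative; a production parser must handle folding, duplicates, and malformed input):

```python
def parse_request(raw: bytes):
    """Split a raw HTTP/1.1 request into (method, target, version, headers, body)."""
    head, _, body = raw.partition(b"\r\n\r\n")        # blank line separates headers from body
    lines = head.decode("iso-8859-1").split("\r\n")
    method, target, version = lines[0].split(" ", 2)  # the request line has exactly three parts
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip() # header names are case-insensitive
    return method, target, version, headers, body

raw = (b"GET /api/users HTTP/1.1\r\n"
       b"Host: api.example.com\r\n"
       b"Accept: application/json\r\n"
       b"\r\n")
method, target, version, headers, body = parse_request(raw)
```

A GET request like this one carries no body, so everything after the blank line is empty.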
Methods Matter
The HTTP methods aren't just labels — they carry semantic meaning that caches, proxies, and browsers rely on:
- GET — Safe, idempotent. Cacheable by default. Never use it to modify state.
- POST — Not idempotent. Creates resources. Request bodies expected.
- PUT — Idempotent. Replaces the entire resource at the URI.
- PATCH — Partially updates a resource. Not guaranteed idempotent.
- DELETE — Idempotent. Removes the resource.
- HEAD — Identical to GET but returns only headers. Useful for checking if a resource exists or has changed.
- OPTIONS — Returns allowed methods. CORS preflight requests use this.
# A well-formed HTTP/1.1 request (curl sends GET by default; -X GET is redundant)
curl -v https://api.example.com/users/42 \
  -H "Accept: application/json" \
  -H "Authorization: Bearer eyJhbG..."
Status Codes — The Server's Vocabulary
Status codes are grouped by category, and understanding them saves hours of debugging:
| Range | Meaning | Common Codes |
|---|---|---|
| 1xx | Informational | 100 Continue, 101 Switching Protocols |
| 2xx | Success | 200 OK, 201 Created, 204 No Content |
| 3xx | Redirection | 301 Moved Permanently, 302 Found, 304 Not Modified |
| 4xx | Client Error | 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 429 Too Many Requests |
| 5xx | Server Error | 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout |
The difference between 401 and 403 trips up even experienced engineers. 401 means "the server does not know the client's identity — authenticate first." 403 means "the server knows the client's identity, but access is not permitted."
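Both rules are easy to encode. The sketch below shows the first-digit grouping and the 401-versus-403 decision; the `user` dict and role check are hypothetical stand-ins for a real auth layer:

```python
def category(status: int) -> str:
    """Status codes group by their first digit."""
    return {1: "informational", 2: "success", 3: "redirection",
            4: "client error", 5: "server error"}[status // 100]

def authorize(user, required_role):
    """Illustrates the 401 vs 403 distinction from the text."""
    if user is None:
        return 401  # identity unknown: authenticate first
    if required_role not in user.get("roles", ()):
        return 403  # identity known, but access is not permitted
    return 200
```

Note that the check order matters: a server that returns 403 for anonymous requests leaks nothing extra, but it also gives the client no hint that authenticating would help.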
Caching — The Most Impactful Performance Feature
HTTP caching is the single biggest performance optimization available, and most teams underuse it. The caching system has two complementary mechanisms:
Freshness — How long can a cached response be used without checking the server?
Cache-Control: max-age=3600, public
Validation — Is the cached response still valid?
ETag: "abc123"
If-None-Match: "abc123" → Server responds 304 if unchanged
The Cache-Control header is the primary mechanism. Key directives:
- max-age=N — Cache for N seconds
- no-cache — Always revalidate (does NOT mean "don't cache")
- no-store — Never cache at all
- public — CDNs and shared caches can store it
- private — Only the browser can cache it
A common mistake: no-cache doesn't prevent caching. It forces revalidation on every use. To prevent caching entirely, use no-store.
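The ETag/If-None-Match handshake boils down to a string comparison on the server. Here is a sketch of the server side of a conditional GET; the hash-based ETag and the header shapes are illustrative, not a prescribed scheme:

```python
import hashlib

def handle_get(resource_body, if_none_match):
    """Hypothetical conditional GET handler: returns (status, headers, body)."""
    # Derive a strong validator from the current representation.
    etag = '"%s"' % hashlib.sha256(resource_body).hexdigest()[:12]
    if if_none_match == etag:
        # Client's cached copy is still valid: no body, just 304.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag, "Cache-Control": "max-age=3600, public"}, resource_body

# First fetch: full 200 response with an ETag.
status1, hdrs, body1 = handle_get(b"user list v1", None)
# Revalidation with the stored ETag: 304, no body retransmitted.
status2, _, body2 = handle_get(b"user list v1", hdrs["ETag"])
# The resource changed: the ETag no longer matches, so a fresh 200 is sent.
status3, _, body3 = handle_get(b"user list v2", hdrs["ETag"])
```

The bandwidth win comes from the 304 path: headers only, no body.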
Persistent Connections and Pipelining
Before HTTP/1.1, every request required a new TCP connection — a three-way handshake for each image, script, and stylesheet. HTTP/1.1 changed this with persistent connections (keep-alive), allowing multiple requests over one TCP connection.
Pipelining was the next logical step: send multiple requests without waiting for responses. On paper, brilliant. In practice, a disaster. The protocol requires responses to arrive in the same order as requests. If the first response is slow, everything behind it is blocked — head-of-line blocking. Most browsers never enabled pipelining by default, and proxies frequently broke it.
# Without keep-alive (HTTP/1.0 default):
[TCP handshake] → [Request 1] → [Response 1] → [TCP close]
[TCP handshake] → [Request 2] → [Response 2] → [TCP close]
# With keep-alive (HTTP/1.1 default):
[TCP handshake] → [Request 1] → [Response 1] → [Request 2] → [Response 2] → ... → [TCP close]
Browsers worked around pipelining's failure by opening 6 parallel TCP connections per origin. This is the origin of "domain sharding" — serving assets from multiple subdomains to multiply the connection pool. HTTP/2 made this hack obsolete.
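Connection reuse can be observed directly with Python's standard library. This is a minimal local demo, not a production setup: a throwaway server on a random port answers two requests over a single client connection, and the underlying TCP socket is checked for identity:

```python
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
import http.client
import threading

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # advertise HTTP/1.1 so keep-alive is the default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for reuse
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/first")
conn.getresponse().read()
first_socket = conn.sock            # remember the underlying TCP socket
conn.request("GET", "/second")      # no new handshake: same connection
second_status = conn.getresponse().status
reused = conn.sock is first_socket  # True when keep-alive kept the socket open
conn.close()
server.shutdown()
```

If the handler omitted Content-Length (and didn't chunk), the client could not tell where the response ends, and the connection would have to close after each exchange.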
Chunked Transfer Encoding
When the server doesn't know the total response size upfront (streaming, server-generated content), it uses chunked transfer encoding:
Transfer-Encoding: chunked
4\r\n
Wiki\r\n
6\r\n
pedia \r\n
0\r\n
\r\n
Each chunk is prefixed with its size in hexadecimal. A zero-length chunk signals the end. This is how servers can start sending data before the full response is generated — critical for time-to-first-byte optimization.
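A decoder for this framing fits in a few lines. The sketch below handles the happy path only (no trailers, well-formed input) and decodes the Wikipedia example from above:

```python
def decode_chunked(stream: bytes) -> bytes:
    """Decode a chunked body: hex size line, chunk data, CRLF, until a 0-size chunk."""
    out, i = b"", 0
    while True:
        eol = stream.index(b"\r\n", i)
        size = int(stream[i:eol].split(b";")[0], 16)  # size is hex; ignore extensions
        if size == 0:
            return out                                # zero-length chunk ends the body
        start = eol + 2
        out += stream[start:start + size]
        i = start + size + 2                          # skip chunk data and trailing CRLF

body = decode_chunked(b"4\r\nWiki\r\n6\r\npedia \r\n0\r\n\r\n")
```

Running it on the example yields the reassembled payload `b"Wikipedia "`.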
Why HTTP/1.1 Still Matters
It might seem like HTTP/1.1 is obsolete. It's not. Understanding it is essential because:
- HTTP/2 and HTTP/3 preserve HTTP/1.1 semantics — the methods, headers, and status codes are identical
- Debugging is easier — it's possible to literally telnet to port 80 and type a request
- Many internal services still use it — behind a load balancer, the extra complexity of HTTP/2 often isn't worth it
- Every HTTP library defaults to HTTP/1.1 semantics — even when the transport is HTTP/2
The protocol's simplicity is a feature. When something goes wrong, being able to read the raw bytes in Wireshark or curl -v is invaluable. That's why HTTP/1.1 isn't just history — it's the mental model needed to understand everything that came after it.
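That "read the raw bytes" claim is easy to verify without telnet: open a TCP socket, type the request by hand, and read the text that comes back. The server here is a throwaway local one for demonstration; against a real host you would connect to port 80 instead:

```python
import socket
import threading
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"hi")
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\n"
                 b"Host: 127.0.0.1\r\n"      # Host is mandatory in HTTP/1.1
                 b"Connection: close\r\n\r\n")
    raw = b""
    while chunk := sock.recv(4096):          # server closes, so read to EOF
        raw += chunk

status_line = raw.split(b"\r\n", 1)[0].decode()
server.shutdown()
```

The bytes that come back start with a human-readable status line — the same text `curl -v` would print.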
Key Points
- HTTP/1.1 defaults to persistent connections (keep-alive) — closing is the exception, not the rule
- Pipelining was specified but never reliably implemented due to head-of-line blocking at the response level
- Chunked transfer encoding allows servers to stream responses of unknown length
- Conditional requests (If-None-Match, If-Modified-Since) save bandwidth by returning 304 Not Modified
- The Host header is mandatory in HTTP/1.1, enabling virtual hosting on a single IP address
Key Components
| Component | Role |
|---|---|
| Request Line | Specifies the method, URI, and HTTP version for each request |
| Headers | Key-value metadata controlling caching, content type, authentication, and connection behavior |
| Status Codes | Three-digit codes indicating the result of the server's attempt to process the request |
| Message Body | Optional payload carrying the resource representation or form data |
| Persistent Connection | Keep-alive mechanism that reuses TCP connections across multiple request-response cycles |
When to Use
Use HTTP/1.1 when maximum compatibility is needed, simple debugging with curl/telnet is a priority, or when clients don't support HTTP/2. It's still the right choice for internal services behind a reverse proxy that handles protocol upgrades at the edge.
Tool Comparison
| Tool | Type | Best For | Scale |
|---|---|---|---|
| curl | Open Source | CLI-based HTTP debugging and scripting | Single requests to millions via scripting |
| Postman | Commercial | Interactive API exploration and team collaboration | Individual developer to large teams |
| Apache HTTP Server | Open Source | Traditional web serving with rich module ecosystem | Small sites to enterprise deployments |
| Nginx | Open Source | High-performance reverse proxy and static file serving | Thousands to millions of concurrent connections |
Debug Checklist
- Check response status code — is it 2xx, 3xx, 4xx, or 5xx?
- Inspect Cache-Control and ETag headers — is stale content being served?
- Verify Content-Type matches what the client expects (application/json vs text/html)
- Use curl -v to see the full request/response headers including connection reuse
- Check for Connection: close headers that might be killing keep-alive
Common Mistakes
- Ignoring Cache-Control and relying solely on ETag — they serve different purposes and work best together
- Assuming HTTP pipelining works in practice — most browsers disabled it due to buggy proxies
- Not setting Content-Length or using chunked encoding, causing clients to hang waiting for data
- Confusing 401 Unauthorized (needs authentication) with 403 Forbidden (authenticated but not authorized)
- Opening too many parallel connections to the same origin, triggering server-side rate limits
Real World Usage
- Every browser-to-server interaction on the web starts with HTTP/1.1 or its successors
- REST APIs overwhelmingly use HTTP/1.1 semantics even when transported over HTTP/2
- CDNs use HTTP caching headers to decide what to cache at edge nodes
- Health check endpoints in load balancers rely on simple HTTP GET + status code checks
- Webhook systems deliver event payloads via HTTP POST with JSON bodies