HTTP/3 — UDP Takes Over
HTTP/3 replaces TCP with QUIC over UDP, giving each stream independent loss recovery and enabling 0-RTT connections.
The Problem
HTTP/2 solved application-layer multiplexing but, by funneling every stream through a single TCP connection, made transport-layer head-of-line blocking worse. How does a protocol achieve true stream independence while maintaining reliability, encryption, and backward compatibility?
Mental Model
Like having a separate delivery truck for each package: one stuck truck doesn't block the others
How It Works
HTTP/3 is HTTP semantics (the same methods, headers, and status codes from HTTP/1.1 and HTTP/2) transported over QUIC instead of TCP. That sentence sounds simple, but the implications are profound. QUIC is not just "TCP over UDP" — it's a ground-up redesign of transport-layer reliability, encryption, and multiplexing.
The core insight: TCP treats a connection as a single byte stream. If one packet is lost, everything behind it waits. QUIC treats each stream as independent. If stream 5 loses a packet, streams 1, 3, 7, and 9 continue uninterrupted. This is the head-of-line blocking fix that HTTP/2 couldn't achieve on TCP.
The Protocol Stack Difference
HTTP/1.1 & HTTP/2:        HTTP/3:

┌─────────────────┐       ┌─────────────────┐
│ HTTP semantics  │       │ HTTP semantics  │
├─────────────────┤       ├─────────────────┤
│ TLS 1.2/1.3     │       │ QUIC            │
├─────────────────┤       │ (TLS 1.3 built  │
│ TCP             │       │  into transport)│
├─────────────────┤       ├─────────────────┤
│ IP              │       │ UDP             │
└─────────────────┘       ├─────────────────┤
                          │ IP              │
                          └─────────────────┘
Notice that TLS is not a separate layer in HTTP/3. QUIC integrates TLS 1.3 directly — encryption is not optional, it's baked into the protocol. Even the transport headers (packet numbers, connection IDs) are encrypted, making it much harder for middleboxes to interfere.
Connection Setup — From 3-RTT to 0-RTT
TCP + TLS 1.2 requires three round trips to establish a secure connection (one for the TCP handshake, two for TLS); TLS 1.3 trims that to two. QUIC combines the transport and cryptographic handshakes into one:
First connection (1-RTT):
Client → Server: QUIC Initial (contains TLS ClientHello)
Server → Client: QUIC Handshake (contains TLS ServerHello + certs)
Client → Server: QUIC Handshake Complete + First Data
Subsequent connections (0-RTT):
Client → Server: QUIC Initial + 0-RTT Data (encrypted with cached key)
Server → Client: Response data
With 0-RTT, the client sends application data in the very first packet. On a 100ms RTT connection, that saves 200-300ms compared to TCP + TLS 1.2. The catch: 0-RTT data can be replayed. An attacker who captures the initial packet can resend it. This means 0-RTT should only carry idempotent requests (a GET, not a POST that creates a payment).
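The replay rule above can be sketched as a server-side admission check. This is a hypothetical simplification — real stacks expose early-data status through their TLS/QUIC library APIs, and the function name here is illustrative:

```python
# Hypothetical guard: decide whether a request that arrived as 0-RTT
# ("early") data is safe to process before the handshake completes.
# Replayable, state-changing methods must wait for 1-RTT confirmation.
SAFE_IN_EARLY_DATA = {"GET", "HEAD", "OPTIONS"}  # idempotent, no side effects

def accept_early_request(method: str, is_early_data: bool) -> bool:
    """Return True if the request may be processed immediately."""
    if not is_early_data:
        return True                      # normal 1-RTT data: always fine
    return method in SAFE_IN_EARLY_DATA  # 0-RTT: reject state-changing verbs

assert accept_early_request("GET", is_early_data=True)
assert not accept_early_request("POST", is_early_data=True)   # e.g. a payment
assert accept_early_request("POST", is_early_data=False)      # fine after 1-RTT
```

Servers can also respond with `425 Too Early`, asking the client to retry the request after the handshake completes.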
Connection Migration — The Mobile Killer Feature
TCP connections are identified by the 4-tuple: source IP, source port, destination IP, destination port. When a phone switches from WiFi to cellular, the IP address changes, and every TCP connection dies. The browser has to re-establish connections, re-negotiate TLS, and re-send in-flight requests.
QUIC identifies connections with a connection ID — a random value independent of the network path. When the network changes, the client sends a PATH_CHALLENGE on the new path, the server responds with PATH_RESPONSE, and the connection continues on the new path. No new handshake. No lost requests.
# TCP: WiFi → Cellular = Connection Reset
WiFi IP: 192.168.1.42:54321 → server:443 (connection dies)
Cell IP: 10.0.0.7:61234 → server:443 (new connection, new TLS, re-send everything)
# QUIC: WiFi → Cellular = Seamless Migration
WiFi: ConnectionID=0x1a2b → server:443
Cell: ConnectionID=0x1a2b → server:443 (same connection, PATH_CHALLENGE/RESPONSE)
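The mechanism behind this can be sketched as a demultiplexing table keyed by connection ID rather than by 4-tuple. This is a toy model — real QUIC endpoints rotate among multiple connection IDs and perform cryptographic path validation — but it shows why an address change doesn't kill the session:

```python
# Simplified sketch: a QUIC server routes incoming packets by connection ID,
# so a change of client IP/port maps to the same session state.
connections = {}  # connection_id -> session state

def handle_packet(conn_id: bytes, src_addr: tuple, payload: bytes) -> dict:
    session = connections.setdefault(conn_id, {"paths": set(), "data": []})
    if src_addr not in session["paths"]:
        # New network path: in real QUIC, validate it with
        # PATH_CHALLENGE / PATH_RESPONSE before trusting it fully.
        session["paths"].add(src_addr)
    session["data"].append(payload)
    return session

# Same connection ID survives the WiFi -> cellular switch:
handle_packet(b"\x1a\x2b", ("192.168.1.42", 54321), b"request-1")
session = handle_packet(b"\x1a\x2b", ("10.0.0.7", 61234), b"request-2")
assert len(connections) == 1       # still one logical connection
assert len(session["paths"]) == 2  # two network paths observed
```

A TCP server performing the same lookup by (IP, port) would have treated the second packet as a brand-new, unauthenticated connection.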
This is transformative for mobile apps. Users walking between WiFi zones, entering elevators, or riding trains experience seamless connectivity instead of loading spinners.
QPACK — Learning from HPACK's Mistakes
HTTP/2's HPACK compression uses a dynamic table that both sides must agree on. The table is updated sequentially — if the HEADERS frame updating the table is lost or delayed, all subsequent HEADERS frames that reference the new entries are blocked. This is header-compression head-of-line blocking.
QPACK solves this by decoupling the dynamic table updates from the header blocks:
- A dedicated encoder stream sends table updates
- Header blocks reference table entries but include instructions for what to do if an entry isn't available yet
- Implementations can choose to block on missing entries (for better compression) or encode literally (for lower latency)
The result: header compression that's nearly as effective as HPACK without the head-of-line blocking penalty.
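The compression-versus-latency choice can be sketched in a few lines. This is an illustrative simplification of the RFC 9204 mechanics (the real protocol tracks insert counts and acknowledgments on dedicated streams), not an implementation:

```python
# Toy QPACK-style encoder: reference a dynamic-table entry when possible
# (compact), otherwise either insert a new entry (compact, but the decoder
# may block waiting for the encoder stream) or emit a literal (never blocks).
dynamic_table = {}  # index -> (name, value); fed by the encoder stream

def encode_header(name: str, value: str, risk_blocking: bool):
    for idx, entry in dynamic_table.items():
        if entry == (name, value):
            return ("indexed", idx)          # compact reference, safe
    if risk_blocking:
        idx = len(dynamic_table)
        dynamic_table[idx] = (name, value)   # insert via encoder stream;
        return ("indexed", idx)              # decoder may wait for the insert
    return ("literal", name, value)          # larger on the wire, never blocks

assert encode_header("x-app", "v1", risk_blocking=False) == ("literal", "x-app", "v1")
assert encode_header("x-app", "v1", risk_blocking=True) == ("indexed", 0)
assert encode_header("x-app", "v1", risk_blocking=False) == ("indexed", 0)
```

The third call shows the payoff: once the entry is safely in the table, even the latency-conservative path gets the compact encoding.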
Per-Stream Loss Recovery
This is the technical heart of HTTP/3's advantage. In QUIC, each stream has its own sequence numbers and loss recovery:
Stream 1: [Frame 1] [Frame 2] [Frame 3 - LOST] [Frame 4]
Stream 3: [Frame 1] [Frame 2] [Frame 3] [Frame 4] ← not blocked
Stream 5: [Frame 1] [Frame 2] ← not blocked
Only Stream 1 waits for retransmission of Frame 3.
Streams 3 and 5 deliver data to the application immediately.
Compare this to HTTP/2 over TCP, where that same lost packet would block all streams until TCP retransmits it. On lossy networks (mobile, satellite, congested WiFi), this difference is dramatic — Google measured 2-8% latency improvements, with even larger gains on poor connections.
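The delivery difference above can be modeled in a few lines. A toy sketch, not a QUIC implementation — it only counts how many in-order frames each stream can hand to the application when one frame is lost:

```python
# Toy model: an ordered stream delivers frames only up to the first gap.
def deliverable(received_frames: list, lost_frame) -> int:
    """Count consecutive frames deliverable to the application."""
    count = 0
    for frame in received_frames:
        if frame == lost_frame:
            break  # everything behind the gap waits for retransmission
        count += 1
    return count

streams = {1: [1, 2, 3, 4], 3: [1, 2, 3, 4], 5: [1, 2]}

# QUIC: the loss of stream 1's frame 3 stalls only stream 1.
quic_delivery = {sid: deliverable(frames, lost_frame=3 if sid == 1 else None)
                 for sid, frames in streams.items()}
assert quic_delivery == {1: 2, 3: 4, 5: 2}

# HTTP/2 over TCP: all frames share one byte stream, so the same loss
# stalls every stream until TCP retransmits.
tcp_delivery = {sid: 0 if True else len(f) for sid, f in streams.items()}
assert tcp_delivery == {1: 0, 3: 0, 5: 0}
```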
Deployment Reality
HTTP/3 adoption is growing but not universal. Here's the current landscape:
What works well:
- CDN edge termination (Cloudflare, Akamai, Fastly all support it)
- Browser-to-CDN connections (Chrome, Firefox, Safari all support it)
- Mobile apps using direct QUIC libraries
What's still challenging:
- Corporate firewalls blocking UDP 443
- Load balancers that don't understand QUIC
- Backend-to-backend communication (most internal traffic still uses HTTP/2 or HTTP/1.1)
- Debugging tools are less mature than TCP equivalents
The recommended deployment strategy: enable HTTP/3 at the edge (CDN or load balancer), keep HTTP/2 as fallback, and don't bother with HTTP/3 for internal service-to-service traffic yet.
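For teams terminating at their own load balancer rather than a CDN, that strategy looks roughly like this nginx fragment — a sketch assuming an nginx build with QUIC support (mainline 1.25+); the certificate paths are placeholders:

```nginx
server {
    listen 443 quic reuseport;   # HTTP/3 over UDP 443
    listen 443 ssl;              # HTTP/2 / HTTP/1.1 fallback over TCP 443
    http2 on;

    ssl_certificate     /etc/ssl/example.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;

    # Advertise HTTP/3 so clients upgrade on a subsequent request
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Note that both `listen` directives are needed: clients always connect over TCP first and only switch to QUIC after seeing the Alt-Svc advertisement (or a DNS HTTPS record).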
# Check HTTP/3 support
curl --http3 -v https://cloudflare.com 2>&1 | head -20
# Check Alt-Svc header advertising HTTP/3
curl -sI https://google.com | grep -i alt-svc
# alt-svc: h3=":443"; ma=2592000
# DNS HTTPS record (newer discovery mechanism)
dig +short cloudflare.com type65
The Bigger Picture
HTTP/3 represents a fundamental shift: the transport layer is now an application-layer concern. QUIC implements congestion control, reliability, and encryption in userspace rather than the kernel. This means faster iteration — protocol improvements don't require OS kernel updates.
The trade-off is complexity. QUIC does more work in userspace, consuming more CPU than TCP (which benefits from decades of kernel optimization). But as QUIC implementations mature and hardware offloading arrives, this gap is closing.
For most teams, the action item is straightforward: enable HTTP/3 at the CDN edge, ensure firewalls allow UDP 443, and let the CDN handle fallback. End users — especially mobile users — will notice the difference.
Key Points
- HTTP/3 eliminates TCP head-of-line blocking by running each stream as an independent QUIC stream over UDP
- 0-RTT resumption lets returning clients send data immediately, shaving an entire round trip off connection setup
- Connection migration means mobile users switching from WiFi to cellular don't lose their HTTP connection
- QUIC integrates TLS 1.3 directly into the transport layer — encryption isn't optional, it's structural
- QPACK replaces HPACK to avoid the head-of-line blocking that HPACK's dynamic table caused across streams
Key Components
| Component | Role |
|---|---|
| QUIC Transport | Provides reliable, multiplexed streams over UDP with built-in encryption and congestion control |
| 0-RTT Connection Resumption | Allows clients to send data on the very first packet when reconnecting to a known server |
| Connection Migration | Connections survive network changes (WiFi to cellular) by using connection IDs instead of IP:port tuples |
| QPACK Header Compression | Redesigned header compression that avoids HPACK's head-of-line blocking across streams |
| Per-Stream Loss Recovery | Each stream handles packet loss independently — one slow stream cannot block others |
When to Use
Deploy HTTP/3 for any public-facing web service, especially those serving mobile users or operating in high-latency/lossy network conditions. The connection migration benefit alone justifies it for mobile apps. Always implement HTTP/2 fallback.
Tool Comparison
| Tool | Type | Best For | Scale |
|---|---|---|---|
| Cloudflare | Managed | Automatic HTTP/3 with QUIC at global edge — flip a switch | Global CDN, millions of sites |
| quiche (Cloudflare) | Open Source | Production QUIC implementation in Rust for custom integrations | Embedded in Cloudflare and Nginx |
| msquic (Microsoft) | Open Source | Cross-platform QUIC library for Windows and Linux applications | Used in Windows, .NET, and Xbox |
| nginx-quic | Open Source | Adding HTTP/3 support to existing Nginx deployments | Production web serving |
Debug Checklist
- Check Alt-Svc response header — the server must advertise HTTP/3 support via Alt-Svc: h3=":443"
- Verify UDP port 443 is not blocked by firewalls or middleboxes
- Use chrome://net-internals/#quic to inspect active QUIC sessions in Chrome
- Test with curl --http3 -v to verify QUIC negotiation (requires curl built with QUIC support)
- Check DNS HTTPS (SVCB) records — they can advertise HTTP/3 support before the first connection
Common Mistakes
- Assuming HTTP/3 is just HTTP/2 over UDP — QUIC is a complete transport protocol, not a thin wrapper
- Blocking UDP at the firewall and wondering why HTTP/3 doesn't work — many corporate networks block UDP
- Using 0-RTT without understanding replay attacks — 0-RTT data can be replayed by an attacker
- Not implementing fallback to HTTP/2 — some networks and middleboxes still don't support QUIC
- Expecting HTTP/3 to be faster in all cases — on reliable networks with low latency, the difference is minimal
Real World Usage
- Google serves over 50% of Chrome traffic via HTTP/3, reporting 2-8% latency improvements
- Meta uses QUIC for mobile app traffic where connection migration prevents disruptions during network switches
- Cloudflare reports HTTP/3 reduces time-to-first-byte by 12.4% compared to HTTP/2 on mobile connections
- YouTube uses QUIC for video streaming, where per-stream loss recovery prevents buffering across video segments
- Akamai and Fastly both support HTTP/3 at their edge, with automatic fallback to HTTP/2