Life of a Packet
Every HTTPS request walks through DNS, TCP, TLS, then HTTP — optimize or cache each step because latency compounds.
The Problem
When a page loads slowly or fails entirely, the team needs to know which part of the chain broke. Is it DNS? TCP? TLS? The server? Without understanding the full packet lifecycle, debugging is blind — guessing instead of measuring.
Mental Model
Like sending a letter through sorting offices. First, look up the address (DNS). Then establish a route and confirm the recipient exists (TCP handshake). Then agree on a secret code so nobody can read the letter in transit (TLS). Finally, send the actual letter and get a reply (HTTP). Every step must complete before the next can begin.
Architecture Diagram
How It Works
Here is exactly what happens when https://example.com is typed into a browser and Enter is pressed. Every step. Every protocol. Every round trip.
Step 1: DNS Resolution (20-100ms)
Before anything else, the browser needs an IP address. It checks in order:
- Browser DNS cache — Chrome caches DNS records for up to 60 seconds
- OS DNS cache — the OS-level resolver cache
- Router cache — the home router often caches DNS
- Recursive resolver — typically the ISP's DNS or a public one like 8.8.8.8 or 1.1.1.1
If none of these have the answer, the recursive resolver walks the DNS tree: root server → .com TLD server → example.com authoritative server. This full lookup can take 100ms+. But in practice, popular domains are cached almost everywhere and resolve in 1-5ms.
```shell
# Measure DNS resolution time
dig example.com | grep "Query time"
# ;; Query time: 12 msec

# Full timing breakdown of a request
curl -o /dev/null -s -w "DNS: %{time_namelookup}s\nTCP: %{time_connect}s\nTLS: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://example.com
```
Step 2: TCP Handshake (1 RTT, ~10-100ms)
With the IP address in hand, the browser opens a TCP connection. This is the famous three-way handshake:
- SYN: Browser sends a TCP segment with the SYN flag set, proposing an initial sequence number
- SYN-ACK: Server responds with its own sequence number and acknowledges the browser's
- ACK: Browser acknowledges the server's sequence number
This costs exactly one round trip — the time for a packet to travel from browser to server and back. From New York to London, that is about 70ms. From Mumbai to Virginia, about 180ms.
The SYN segment is tiny, roughly 40-60 bytes of IP and TCP headers plus options. But that round trip is unavoidable and adds real latency, especially on mobile networks where RTT can be 200ms+.
Step 3: TLS Handshake (1-2 RTT, ~10-200ms)
For HTTPS (which is everything in 2025), the browser must now negotiate encryption. With TLS 1.3 (the current standard), this takes 1 additional round trip:
- ClientHello: Browser sends supported cipher suites and a key share
- ServerHello + Certificate: Server picks a cipher, sends its certificate and key share
- Finished: Both sides now have the session keys. The browser can send encrypted application data immediately
With TLS 1.2, this took 2 round trips. With TLS 1.3 0-RTT resumption, returning visitors can send encrypted data on the very first packet — but this has replay attack implications and is not enabled everywhere.
At this point, we have spent 2-3 round trips (DNS + TCP + TLS) before sending a single byte of application data. On a 100ms RTT connection, that is 200-300ms of pure protocol overhead.
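The round-trip arithmetic above is easy to sketch directly. The RTT values and the 10ms cached-DNS figure below are illustrative assumptions, not measurements:

```shell
# Cold-start protocol overhead before the first byte of HTTP, as a function of RTT.
# Assumes a resolver-cached DNS lookup (10ms) and a TLS 1.3 handshake (1 RTT).
for rtt in 20 50 100 200; do
  dns=10
  tcp=$rtt   # three-way handshake: 1 RTT
  tls=$rtt   # TLS 1.3 handshake: 1 RTT
  echo "RTT ${rtt}ms -> $((dns + tcp + tls))ms before the first HTTP byte"
done
```

The takeaway: protocol overhead scales linearly with RTT, which is why moving the endpoint closer to the user (CDNs, edge) pays off more than most server-side tuning.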
Step 4: HTTP Request (Variable)
Finally, the browser sends the HTTP request over the encrypted connection. In HTTP/1.1 text form it looks like this (HTTP/2 carries the same fields as binary HEADERS frames rather than a text request line):

```http
GET /index.html HTTP/1.1
Host: example.com
Accept: text/html
Accept-Encoding: gzip, br
User-Agent: Chrome/125
```

The request headers are typically 200-800 bytes. HTTP/2 compresses headers with HPACK, which helps significantly for subsequent requests that share many common headers.
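As a rough check on those numbers, the uncompressed size of a minimal request like the one above can be measured directly (HTTP/1.1 text framing assumed; the header values are just the ones shown, nothing canonical):

```shell
# Count the raw bytes of a minimal request's headers before any compression.
printf 'GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\nAccept-Encoding: gzip, br\r\nUser-Agent: Chrome/125\r\n\r\n' | wc -c
```

This minimal request is just over 100 bytes; real browser requests carry many more headers (cookies, client hints, sec-fetch metadata), which is how they reach the 200-800 byte range.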
Step 5: Server Processing (Variable)
The server receives the request, processes it (database queries, template rendering, business logic), and sends back the response. This is the Time to First Byte (TTFB) from the server's perspective.
A well-optimized API endpoint responds in 5-50ms. A complex page requiring multiple database queries might take 100-500ms. This is the part most backend engineers focus on, but it is often not the dominant factor in total page load time.
Step 6: Response Travels Back
The response takes the same journey in reverse — through routers, across links, back to the browser. A typical HTML page is 20-100KB compressed. At broadband speeds, this transfers in milliseconds. On a 3G mobile connection, it could take seconds.
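Transfer time scales with link speed. A quick sketch, with the page size and bandwidths as assumed values:

```shell
# Time to transfer a 60KB compressed page at different link speeds.
# time_ms = (kilobytes * 8) / megabits_per_second
size_kb=60
for mbps in 100 10 1; do
  awk -v kb="$size_kb" -v mbps="$mbps" 'BEGIN { printf "%d Mbps: %.0fms\n", mbps, (kb * 8) / mbps }'
done
```

Note this ignores TCP slow start, which often dominates for small transfers on a fresh connection: the first few round trips cap how much data can be in flight regardless of link speed.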
The Complete Timeline
Here is a realistic timing breakdown for a first visit to a site with a 50ms RTT:
| Step | Duration | Cumulative |
|---|---|---|
| DNS lookup (cached at resolver) | 10ms | 10ms |
| TCP handshake | 50ms (1 RTT) | 60ms |
| TLS 1.3 handshake | 50ms (1 RTT) | 110ms |
| HTTP request flight time | 25ms (half RTT) | 135ms |
| Server processing | 40ms | 175ms |
| Response flight time | 25ms (half RTT) | 200ms |
| Total time to first byte |  | 200ms |
For a repeat visit with connection reuse: DNS is cached (0ms), TCP connection is already open (0ms), TLS session is resumed (0ms), so it goes straight to HTTP — saving 110ms or more.
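The cold-vs-warm comparison can be replayed as arithmetic, using the assumed values from the timeline table above:

```shell
# Cold vs warm time-to-first-byte at a 50ms RTT, using the timeline's numbers.
rtt=50; dns=10; server=40
cold=$((dns + rtt + rtt + rtt + server))  # DNS + TCP + TLS + request/response flight + server
warm=$((rtt + server))                    # reused connection: only flight time + server
echo "cold: ${cold}ms  warm: ${warm}ms  saved: $((cold - warm))ms"
```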
Production Considerations
Optimizing Each Phase
DNS: Use DNS prefetching (<link rel="dns-prefetch" href="//api.example.com">) to resolve hostnames for resources the page will need. Keep DNS TTLs at 60-300 seconds — short enough for quick failover, long enough to cache.
TCP: Enable TCP Fast Open (TFO) to piggyback data on the SYN packet for repeat connections. Use connection pooling on the server side to avoid per-request handshake overhead for backend-to-backend calls.
TLS: Deploy TLS 1.3 everywhere. Enable OCSP stapling so the browser does not need a separate round trip to check certificate revocation. Use session tickets for fast resumption, and 0-RTT where the replay risk is acceptable.
HTTP: Use HTTP/2 or HTTP/3 to multiplex requests over a single connection. Enable compression (Brotli over gzip). Set appropriate Cache-Control headers to avoid unnecessary round trips entirely.
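Compression's payoff is easy to verify locally. A sketch using gzip on a repetitive payload (Brotli behaves the same way via the `brotli` CLI, where installed):

```shell
# Compare a text payload's size before and after gzip.
raw=$(yes 'hello world' | head -n 1000 | wc -c)
gz=$(yes 'hello world' | head -n 1000 | gzip -9 | wc -c)
echo "raw: ${raw} bytes  gzip: ${gz} bytes"
```

Real HTML compresses less dramatically than this repetitive input, but 3-5x reductions are common, which directly shrinks the transfer time in Step 6.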
```shell
# Full request lifecycle timing with curl
curl -o /dev/null -s -w "\
namelookup: %{time_namelookup}s\n\
connect: %{time_connect}s\n\
appconnect: %{time_appconnect}s\n\
pretransfer: %{time_pretransfer}s\n\
starttransfer: %{time_starttransfer}s\n\
total: %{time_total}s\n\
size_download: %{size_download} bytes\n" \
https://example.com
```
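curl reports cumulative timestamps, so per-phase durations are the deltas between adjacent fields. A sketch that does the subtraction, with assumed sample numbers standing in for real curl output:

```shell
# Turn curl's cumulative timestamps into per-phase durations.
# The numbers below are assumed samples; in practice pipe in real `curl -w`
# output formatted as space-separated name:seconds pairs.
printf 'namelookup:0.012 connect:0.062 appconnect:0.113 starttransfer:0.201 total:0.230\n' |
awk '{
  for (i = 1; i <= NF; i++) { split($i, kv, ":"); t[kv[1]] = kv[2] }
  printf "DNS: %.0fms\n", t["namelookup"] * 1000
  printf "TCP: %.0fms\n", (t["connect"] - t["namelookup"]) * 1000
  printf "TLS: %.0fms\n", (t["appconnect"] - t["connect"]) * 1000
  printf "TTFB (server + flight): %.0fms\n", (t["starttransfer"] - t["appconnect"]) * 1000
  printf "Download: %.0fms\n", (t["total"] - t["starttransfer"]) * 1000
}'
```

Each line maps to one phase of the lifecycle, which makes it obvious at a glance whether DNS, the handshakes, the server, or the transfer is eating the budget.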
When Things Go Wrong
The most common failure modes, mapped to where they occur in the lifecycle:
| Failure | Phase | Symptom | Tool |
|---|---|---|---|
| DNS SERVFAIL | Step 1 | "Could not resolve host" | dig +trace |
| TCP timeout | Step 2 | Hangs for 20-30s then fails | tcpdump, check firewall rules |
| TLS cert expired | Step 3 | ERR_CERT_DATE_INVALID | openssl s_client -connect |
| TLS version mismatch | Step 3 | Handshake failure | Check server config for TLS 1.2+ |
| HTTP 502 | Step 5 | Bad Gateway | Check upstream server health |
| Slow TTFB | Step 5 | Page loads but takes ages | Profile server-side processing |
Why This Matters for System Design
In system design interviews and production architecture, understanding the packet lifecycle changes how decisions get made:
- Why CDNs work: They eliminate most of the round trips by placing content closer to the user. A CDN edge 5ms away vs an origin 100ms away saves roughly 190ms on a cold start, just from the TCP and TLS handshake round trips.
- Why connection pooling matters: Backend services making HTTP calls to other services pay the TCP+TLS cost on every new connection. A pool of pre-established connections eliminates this entirely.
- Why HTTP/3 matters: QUIC merges the transport and TLS handshakes into a single round trip, and supports 0-RTT for repeat connections. This removes an entire step from the chain.
- Why edge computing is growing: Running compute at the CDN edge means the entire round trip to origin is skipped for dynamic content too, not just cached static assets.
The packet lifecycle is not just theory. It is the performance budget for every networked interaction in the system. Every optimization in this chain — DNS prefetch, connection reuse, TLS 1.3, edge computing — is an attempt to shave milliseconds off this fundamental sequence.
Key Points
- A single HTTPS page load involves at least 4 different protocols working in sequence: DNS, TCP, TLS, HTTP.
- The first request to a new host is the most expensive — DNS lookup, TCP handshake, and TLS handshake all add latency.
- Subsequent requests on the same connection skip DNS (cached), TCP (keep-alive), and TLS (session resumption).
- A packet crosses 10-20 router hops on average to travel across the internet, each adding microseconds to milliseconds.
- Understanding this sequence is the foundation for optimizing web performance — every millisecond saved in the chain compounds.
Key Components
| Component | Role |
|---|---|
| DNS Resolution | Translates the hostname into an IP address before any connection can begin |
| TCP Handshake | Establishes a reliable connection with SYN, SYN-ACK, ACK — takes one round trip |
| TLS Handshake | Negotiates encryption parameters and exchanges certificates — adds 1-2 more round trips |
| HTTP Request/Response | The actual application data exchange — request headers, body, response status, and payload |
| Routing Hops | Each router between source and destination makes an independent forwarding decision based on its routing table |
When to Use
Use this mental model when diagnosing slow page loads, connection failures, or intermittent timeouts. Break the problem into phases and measure each one independently.
Tool Comparison
| Tool | Type | Best For | Scale |
|---|---|---|---|
| tcpdump | Open Source | Capturing the full packet lifecycle on a server with minimal overhead | Small-Enterprise |
| Wireshark | Open Source | Visual analysis of the complete request sequence with timing breakdowns | Small-Enterprise |
| Chrome DevTools (Network tab) | Open Source | Browser-side timing breakdown: DNS, TCP, TLS, TTFB, content download | Small-Enterprise |
| mtr (My Traceroute) | Open Source | Combining ping and traceroute to show packet loss and latency at each hop | Small-Enterprise |
Debug Checklist
- Check DNS: Run 'curl -o /dev/null -w "DNS: %{time_namelookup}s" https://example.com' to isolate DNS latency.
- Check TCP: Compare time_connect vs time_namelookup. Large gap means network latency or firewall issues.
- Check TLS: Compare time_appconnect vs time_connect. Slow TLS usually means certificate chain issues or missing OCSP stapling.
- Check TTFB: Compare time_starttransfer vs time_appconnect. This is pure server processing time.
- Check Content: Compare time_total vs time_starttransfer. Large gap means large response body or slow bandwidth.
Common Mistakes
- Assuming DNS is instant. A cold DNS lookup can take 20-120ms, and it happens before anything else.
- Forgetting that TLS adds round trips. TLS 1.2 adds 2 round trips; TLS 1.3 reduces this to 1, but it is still not free.
- Not reusing TCP connections. Each new connection costs a 3-way handshake — use HTTP keep-alive or connection pooling.
- Ignoring the return path. Packets can take different routes in each direction (asymmetric routing), causing confusing latency patterns.
- Blaming the server when the real bottleneck is the network. Use traceroute and packet captures before diving into application code.
Real World Usage
- Google optimized the packet lifecycle aggressively: QUIC eliminates the separate TCP+TLS handshakes, saving 1-2 round trips.
- Cloudflare's Anycast network ensures the first hop to their edge is under 10ms for 95% of the world's internet users.
- AWS CloudFront uses persistent connections to origin servers, avoiding TCP/TLS handshake costs for cache misses.
- Netflix pre-connects TCP and TLS to streaming servers before the user hits play, hiding handshake latency entirely.
- Facebook (Meta) built its own QUIC stack and deployed 0-RTT resumption, letting repeat visitors send data in the very first flight.