The Uptime Engineer
👋 Hi, I am Yoshik
This week, you'll learn why HTTP/2 multiplexing over TCP creates catastrophic head-of-line blocking, how QUIC eliminates TCP's ordered-delivery problem entirely, and the three advantages (stream independence, 0-RTT handshakes, and connection migration) that make HTTP/3 so hard to beat on real-world networks.
📚 Worth Your Time
#16 Uptime Sync: MySQL Repo, AI Cognitive Overload, and AWS Incident Lessons From 3000 Outages
NPM supply chain risks, monorepo misconceptions, and the real AI productivity gains hiding in plain sight
HTTP/2 runs on 84% of production systems. Most engineers think it solved the web's performance problems. It did not.
The problem HTTP/2 could not fix
HTTP/2 introduced multiplexing - multiple requests over a single TCP connection. Before this, browsers opened 6 to 8 parallel connections to load a page faster. HTTP/2 cleaned that up. One connection, many streams, headers compressed. Looked like a complete solution.
The problem is TCP.
TCP guarantees ordered delivery. If a packet gets lost in transit, TCP buffers every packet that came after it and waits for the lost one to be retransmitted. It does not matter if those buffered packets belong to completely unrelated requests. Everything waits.
This is called head-of-line blocking.
With HTTP/2 multiplexing 20 streams over one TCP connection, a single lost packet freezes all 20 streams. Not just the one the lost packet belongs to. All of them. The streams have nothing to do with each other, but TCP does not know that. It sees one connection and blocks the whole thing.
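A toy model makes the blocking visible. This is not real TCP, just its in-order delivery rule: the receiver hands nothing to the application until every gap in the sequence is filled, even when the buffered packets carry unrelated streams.

```python
# Toy model of TCP's in-order delivery rule (illustration only, not real TCP).
def tcp_deliver(packets):
    """Deliver packets to the application strictly in sequence order.
    Anything after a gap is buffered until the gap is filled."""
    buffered = {}
    next_seq = 0
    delivered = []
    for seq, data in packets:
        buffered[seq] = data
        # Flush everything that is now contiguous.
        while next_seq in buffered:
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered

# Packet 1 is lost in transit; packets 2-4 arrive but cannot be delivered,
# even though they carry data for unrelated HTTP/2 streams.
arrived = [(0, "stream-A"), (2, "stream-B"), (3, "stream-C"), (4, "stream-D")]
print(tcp_deliver(arrived))  # only ['stream-A'] reaches the application
```

One lost packet, and three streams' worth of already-arrived data sits in a buffer doing nothing.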
At just 2% packet loss, HTTP/2 performs worse than HTTP/1.1. At 12% packet loss, HTTP/2 takes 113 seconds to complete transfers that HTTP/3 finishes in 21 seconds. That is not a minor difference. That is HTTP/2 taking more than five times as long - HTTP/3 cuts the transfer time by roughly 81%.
What HTTP/3 does differently
HTTP/3 does not try to fix TCP. It replaces it entirely with QUIC - originally an acronym for Quick UDP Internet Connections, though the IETF now treats it as just a name.
QUIC runs on UDP. The first time most engineers hear this they assume it is a bad idea. UDP is connectionless and unreliable. It has no ordering guarantees. It just fires packets and does not care whether they arrive.
But that is exactly the point. QUIC takes UDP and rebuilds reliability, congestion control, and flow control on top of it - but with one critical difference from TCP. QUIC understands streams natively.
When a packet is lost in QUIC, only the stream that packet belongs to is affected. Every other stream keeps processing immediately. The head-of-line blocking problem that TCP cannot fix is not a QUIC problem at all.
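The same toy model with QUIC's rule applied shows the difference: ordering is enforced per stream, so a gap on one stream never holds back the others. (Again, an illustration of the delivery rule, not a real transport implementation.)

```python
# Same toy model, but with QUIC's rule: ordering is enforced per stream,
# so a gap on one stream never blocks the others. (Illustration only.)
from collections import defaultdict

def quic_deliver(packets):
    """packets: (stream_id, offset, data). Each stream reorders independently."""
    buffered = defaultdict(dict)   # stream_id -> {offset: data}
    next_off = defaultdict(int)    # stream_id -> next expected offset
    delivered = []
    for stream, off, data in packets:
        buffered[stream][off] = data
        while next_off[stream] in buffered[stream]:
            delivered.append((stream, buffered[stream].pop(next_off[stream])))
            next_off[stream] += 1
    return delivered

# Offset 0 of stream 3 is lost; streams 1 and 2 still deliver everything.
arrived = [(1, 0, "a"), (3, 1, "late"), (2, 0, "b"), (1, 1, "c")]
print(quic_deliver(arrived))  # [(1, 'a'), (2, 'b'), (1, 'c')]
```

Only stream 3 waits for its retransmission. Every other stream's data reaches the application the moment it arrives.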

Three things QUIC does better than TCP
The first is stream independence, which we just covered. Lose a packet on stream 3 and streams 1, 2, and 4 through 20 continue without interruption.
The second is a faster handshake. A TCP connection with TLS 1.2 takes three round trips before a single byte of application data can flow: the client and server establish the TCP connection first, then negotiate TLS on top of it (TLS 1.3 trims this to two). QUIC combines transport and cryptographic setup into a single handshake that takes one round trip. For servers you have connected to before, QUIC can send application data in zero round trips - it reuses cryptographic state from the previous session.
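The round-trip arithmetic is easy to sanity-check. Assuming a 50 ms round-trip time (a figure chosen purely for illustration), the setup cost before any application data flows looks like this:

```python
RTT_MS = 50  # assumed round-trip time, chosen for illustration

# Round trips needed before the first byte of application data can be sent.
setups = {
    "TCP + TLS 1.2": 3,            # TCP handshake (1) + TLS 1.2 negotiation (2)
    "TCP + TLS 1.3": 2,            # TCP handshake (1) + TLS 1.3 (1)
    "QUIC (first visit)": 1,       # transport + crypto combined
    "QUIC (0-RTT resumption)": 0,  # reuses cached cryptographic state
}
for name, rtts in setups.items():
    print(f"{name}: {rtts * RTT_MS} ms before data flows")
```

On a 50 ms link that is 150 ms of pure setup for TCP with TLS 1.2 versus 50 ms for a fresh QUIC connection, and nothing at all on resumption.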
Cloudflare measured this in production. HTTP/3 reduced time-to-first-byte by 12.4% compared to HTTP/2. 176ms versus 201ms. On a high-traffic system, that difference compounds.
The third is connection migration. A TCP connection is identified by four things: source IP, source port, destination IP, destination port. If any of these change, the connection breaks. This happens every time a mobile user switches from Wi-Fi to cellular - their source IP changes and TCP starts over from scratch.
QUIC identifies connections by a connection ID that has nothing to do with IP addresses. Switch networks mid-transfer and the connection continues seamlessly. The server recognises the same connection ID and keeps going. For mobile users moving between networks constantly, this removes an entire category of reconnection overhead.
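The difference in connection identity can be sketched in a few lines. This is a toy lookup table, not a real transport; the addresses and connection ID are made up for illustration:

```python
# Toy illustration of connection identity (not a real transport implementation).

# TCP: the connection is keyed by the 4-tuple, so a new client IP = new key.
tcp_conns = {("203.0.113.5", 41000, "198.51.100.9", 443): "session-state"}

def tcp_lookup(src_ip, src_port, dst_ip, dst_port):
    return tcp_conns.get((src_ip, src_port, dst_ip, dst_port))

# Client switches from Wi-Fi to cellular: source IP changes, lookup fails.
print(tcp_lookup("198.18.0.7", 41000, "198.51.100.9", 443))  # None -> reconnect

# QUIC: the connection is keyed by a connection ID carried in every packet.
quic_conns = {"c0ffee42": "session-state"}

def quic_lookup(conn_id, src_ip):
    # The source address is irrelevant to identity; only the ID matters.
    return quic_conns.get(conn_id)

print(quic_lookup("c0ffee42", "198.18.0.7"))  # 'session-state' -> continues
```

Same network switch, two outcomes: TCP's key no longer matches and the session is gone; QUIC's key travels with the packets.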
How QUIC actually works under the hood
Every QUIC packet contains a connection ID, a packet number, an encrypted payload, and stream frames. Unlike TCP, QUIC encrypts packet metadata including the packet numbers themselves. An attacker observing traffic cannot infer behaviour from packet counts or timing patterns.
Loss detection works by tracking sent packets and their acknowledgements. When an acknowledgement does not arrive within a timeout, QUIC marks the packet lost and retransmits. Retransmitted packets get new packet numbers - this is different from TCP where retransmissions reuse the original sequence number. This matters because it removes ambiguity. With TCP, when an ACK arrives for a retransmitted packet, you cannot always tell whether the ACK is for the original transmission or the retransmit. With QUIC, every packet has a unique number, so every ACK is unambiguous.
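A toy bit of RTT bookkeeping shows why unique packet numbers matter. The timings here are invented for illustration:

```python
# Why unique numbers on retransmissions matter (toy RTT bookkeeping only).
sent_at = {}

def send(pkt_num, now_ms):
    sent_at[pkt_num] = now_ms

def on_ack(pkt_num, now_ms):
    return now_ms - sent_at[pkt_num]  # unambiguous RTT sample

# QUIC-style: the retransmit gets a NEW packet number.
send(5, now_ms=0)     # original transmission, later declared lost
send(9, now_ms=120)   # same data retransmitted as packet 9
print(on_ack(9, now_ms=170))  # 50 -> a clean RTT sample for the retransmit

# TCP-style, the retransmit would reuse sequence number 5: an ACK for 5 at
# t=170 could mean 170 - 0 = 170 ms or 170 - 120 = 50 ms. Guess wrong and
# you corrupt the RTT estimate that congestion control depends on.
```

That last comment is the whole point: with reused sequence numbers, every ACK after a retransmission is a coin flip for the RTT estimator.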
More accurate round-trip time measurements feed directly into better congestion control. QUIC implements congestion control in userspace rather than in the kernel. This means Cloudflare or any other operator can ship improved congestion control algorithms without waiting for Linux kernel updates.
When HTTP/3 wins and when it does not
HTTP/3 is not universally better. It depends on the network conditions your users are actually on.
HTTP/3 has a clear advantage when packet loss is above 2%, which covers most mobile networks, congested Wi-Fi, and long-distance connections. It also wins for small transfers under 50KB where the faster handshake saves a meaningful fraction of total request time. And it wins any time users are switching networks mid-session.
HTTP/2 still wins on clean, low-latency connections like data centre to data centre traffic over dedicated fibre. On pristine networks with near-zero packet loss, TCP's kernel-level optimisation and mature tooling outperform QUIC's userspace implementation. For large transfers over 1MB on clean networks, the handshake savings become negligible and TCP's optimised kernel code shows its advantage.
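Those thresholds can be collapsed into a rule of thumb. The function name and structure are mine; the 2% and 50KB cutoffs come from the numbers above, and a real decision should come from measuring your own traffic, not from a function this small:

```python
def prefer_http3(loss_pct, transfer_kb, mobile_or_roaming):
    """Toy rule of thumb from the thresholds above - a sketch, not policy."""
    if mobile_or_roaming:
        return True   # connection migration + loss tolerance both apply
    if loss_pct > 2.0:
        return True   # head-of-line blocking dominates on lossy links
    if transfer_kb < 50:
        return True   # handshake savings are a large share of total time
    return False      # clean network, large transfer: HTTP/2 holds up

print(prefer_http3(loss_pct=0.1, transfer_kb=2048, mobile_or_roaming=False))  # False
print(prefer_http3(loss_pct=3.5, transfer_kb=2048, mobile_or_roaming=False))  # True
```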
How a client discovers HTTP/3
A server cannot just start speaking HTTP/3. Clients need to know it is available first.
The first time a client connects, it uses HTTP/2 over TCP. The server responds with an Alt-Svc header that says HTTP/3 is available on UDP port 443. The client caches this information. On the next request it attempts HTTP/3 first, and falls back to HTTP/2 if QUIC is unavailable.
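The advertisement itself is just a header, something like Alt-Svc: h3=":443"; ma=86400 - "h3 is available on this authority, cache that fact for 86400 seconds." A minimal parse looks like this (a sketch; real parsers handle more of the RFC 7838 grammar):

```python
# Minimal parse of an Alt-Svc header value (sketch; real parsers do more).
def parse_alt_svc(value):
    """Return {protocol: (authority, max_age_seconds)}."""
    services = {}
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, authority = parts[0].split("=", 1)
        authority = authority.strip('"')
        max_age = 86400  # default freshness lifetime per RFC 7838 (24 hours)
        for param in parts[1:]:
            if param.startswith("ma="):
                max_age = int(param[3:])
        services[proto] = (authority, max_age)
    return services

header = 'h3=":443"; ma=86400, h2=":443"'
print(parse_alt_svc(header))
# {'h3': (':443', 86400), 'h2': (':443', 86400)}
```

The browser does exactly this kind of bookkeeping for you: cache the entry, try h3 next time, expire it after ma seconds.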
This fallback matters more than most people realise. UDP traffic is blocked or rate-limited by corporate firewalls, mobile carriers, and middleboxes that do not understand QUIC. 35% of websites support HTTP/3 today, but only about 40% of eligible traffic actually uses it. The rest falls back to HTTP/2 silently. A correct HTTP/3 implementation must always handle fallback cleanly - if QUIC fails, the user should never notice.
What this means for you as an engineer
If you are running a public-facing service and your users are on mobile or variable network conditions, enabling HTTP/3 is worth doing. All major CDNs (Cloudflare, Fastly, AWS CloudFront) support it. For most teams, enabling HTTP/3 means a configuration change, not an infrastructure rebuild.
What you need on your end is UDP port 443 open on your load balancer and an origin server or CDN that speaks QUIC. The client side is handled by the browser - Chrome, Firefox, and Safari all support HTTP/3.
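If you terminate TLS yourself rather than at a CDN, the "configuration change" can be as small as this. A sketch for nginx 1.25+ built with HTTP/3 support - the paths are placeholders, and you should verify the directives against your build's documentation:

```nginx
# Sketch for nginx 1.25+ with HTTP/3 support (verify against your build).
server {
    listen 443 ssl;             # HTTP/1.1 and HTTP/2 over TCP
    listen 443 quic reuseport;  # HTTP/3 over UDP on the same port
    http2 on;

    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # Advertise HTTP/3 so clients can upgrade on their next request.
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```

Note the two listen lines: the TCP one keeps HTTP/2 working as the fallback path, which, as covered above, you must never break.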
If your traffic is primarily data centre to data centre or your users are on reliable wired connections, HTTP/2 is still the better choice. Know your packet loss numbers before deciding.
TCP has been the foundation of the web since 1974. QUIC does not throw away everything TCP built - it takes what works, fixes what cannot be fixed at the kernel level, and runs it where it can actually be improved. That is not a radical departure. That is just good engineering.
I hope you got clarity on how HTTP/3 works. Thanks for reading!
You are the best!
Join 1,000+ engineers learning DevOps the hard way
Every week, I share:
How I'd approach problems differently (real projects, real mistakes)
Career moves that actually work (not LinkedIn motivational posts)
Technical deep-dives that change how you think about infrastructure
No fluff. No roadmaps. Just what works when you're building real systems.

👋 Find me on Twitter | LinkedIn | Connect 1:1
Thank you for supporting this newsletter.
Y’all are the best.
