TCP vs UDP: When Speed Beats Reliability
A practical comparison of TCP and UDP covering connection overhead, head-of-line blocking, congestion control, and exactly when to choose speed over reliability in your applications.

Two Protocols, One Wire, Very Different Trade-offs
Every byte you send over the internet uses either TCP or UDP at the transport layer. Your browser loading this page? TCP. That video call you were just on? UDP. The choice between them isn't academic -- it determines whether your application prioritizes reliability or speed, and getting it wrong means either sluggish performance or missing data.
TCP and UDP have coexisted since the early 1980s, and neither is going away. What has changed is how we use them. Modern protocols like QUIC blur the lines, and understanding the underlying trade-offs is what separates developers who make informed architecture decisions from those who just pick whatever their framework defaults to.
What Is TCP?
Definition: TCP (Transmission Control Protocol) is a connection-oriented transport protocol that provides reliable, ordered delivery of data between applications. It uses a three-way handshake to establish connections, sequence numbers for ordering, acknowledgments for reliability, and congestion control to avoid overwhelming the network.
TCP does an enormous amount of work behind the scenes. It guarantees that every byte you send arrives at the other end, in the right order, without duplicates. If a packet gets lost, TCP detects it and retransmits. If packets arrive out of order, TCP reorders them before handing data to your application. The cost of all this reliability is latency and overhead.
What Is UDP?
Definition: UDP (User Datagram Protocol) is a connectionless transport protocol that sends datagrams without establishing a connection, guaranteeing delivery, or ordering packets. It adds minimal overhead -- just source port, destination port, length, and checksum -- making it the fastest way to send data over IP.
UDP is almost comically simple compared to TCP. It takes your data, slaps an 8-byte header on it, and sends it. No handshake, no acknowledgment, no retransmission, no ordering. If a packet gets lost, UDP doesn't know and doesn't care. That's your application's problem now.
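A minimal sketch of that fire-and-forget behavior, using Python's standard socket module (the address and payload here are placeholders, not a real service):

```python
import socket

# Create a UDP socket: no connection, no handshake.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# sendto() hands the datagram to the OS and returns immediately.
# Whether it ever arrives is unknown and unknowable at this layer.
payload = b"sensor_reading:42"
sent = sock.sendto(payload, ("127.0.0.1", 9999))

# The return value is bytes handed off, not bytes delivered.
assert sent == len(payload)
sock.close()
```

Note there's no error even though nothing is listening on that port -- from UDP's point of view, the send succeeded the moment the datagram left the socket.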
The TCP Connection: Step by Step
Understanding the cost of TCP means understanding what happens before your first byte of application data crosses the wire:
- SYN -- The client sends a SYN (synchronize) packet to the server, proposing a connection and an initial sequence number.
- SYN-ACK -- The server responds with SYN-ACK, acknowledging the client's sequence number and proposing its own.
- ACK -- The client acknowledges the server's sequence number. The connection is now established.
- Data transfer -- Application data flows in both directions. Every segment is numbered, and the receiver acknowledges received data.
- FIN handshake -- Either side sends FIN to close. A four-way teardown (FIN, ACK, FIN, ACK) cleanly shuts down both directions.
Client                              Server
  |--- SYN (seq=100) ------------>|
  |<-- SYN-ACK (seq=300) ---------|
  |--- ACK (seq=101) ------------>|   handshake complete after 1 RTT
  |--- Data --------------------->|   first application bytes
  |<-- ACK -----------------------|
That's 1.5 round trips of handshake traffic, though in practice the client can start sending data after one full RTT, piggybacked on its final ACK. Add TLS and you're at 2-3 round trips. At 100 ms RTT, that's 200-300 ms of pure handshake latency before a single byte of application data.
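The handshake above is exactly what a blocking connect() pays for before your first byte goes out. A minimal sketch using Python's standard socket module, with a throwaway localhost echo server so it runs standalone:

```python
import socket
import threading

# A throwaway echo server on localhost so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()          # completes the three-way handshake
    conn.sendall(conn.recv(1024))      # echo the request back
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# connect() blocks for the full SYN / SYN-ACK / ACK exchange --
# roughly one RTT before application data can flow.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)   # guaranteed intact and in order, or an error
client.close()
server.close()

assert reply == b"hello"
```

On localhost the handshake is invisible; across a 100 ms link, that single `create_connection` call is where the latency tax is paid.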
The Real Costs of TCP
Connection Overhead
Each TCP connection consumes memory on both client and server for tracking state: sequence numbers, window sizes, retransmission timers, and congestion control variables. A busy server handling 100,000 concurrent connections has 100,000 state machines running in the kernel.
Head-of-Line Blocking
TCP delivers data in order. If packet 5 is lost but packets 6 through 10 arrive, your application sees nothing until packet 5 is retransmitted and arrives. The other packets sit in a kernel buffer. This is head-of-line blocking, and it's TCP's biggest performance problem.
Congestion Control
TCP starts slow (slow start) and ramps up transmission rate until it detects loss or congestion. This is essential for network stability but means TCP connections take time to reach full throughput. Short-lived connections -- like individual HTTP requests -- may never fully utilize available bandwidth.
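The ramp-up arithmetic is easy to sketch. This toy model (window measured in segments, with an assumed 10-segment initial window and an arbitrary ssthresh of 64 -- real stacks tune both) shows why short connections may never reach full throughput:

```python
def rtts_to_reach(target_segments, init_cwnd=10, ssthresh=64):
    """Count round trips for the congestion window to reach a target.

    Slow start doubles cwnd each RTT; past ssthresh, congestion
    avoidance adds roughly one segment per RTT (the AI in AIMD).
    """
    cwnd, rtts = init_cwnd, 0
    while cwnd < target_segments:
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        rtts += 1
    return rtts

# Doubling gets to a medium window quickly: 10 -> 20 -> 40 -> 80.
assert rtts_to_reach(64) == 3
# But a large window is dominated by the slow +1-per-RTT linear phase.
assert rtts_to_reach(200) > 100
```

The takeaway: a connection that lives for only a few round trips spends its entire life in slow start, which is exactly why HTTP clients work so hard to reuse connections.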
Retransmission Delays
When TCP detects a lost packet, it waits for a retransmission timeout (RTO) or uses fast retransmit (three duplicate ACKs). Either way, there's a delay. For real-time applications, this delay is worse than just dropping the data -- a retransmitted video frame that arrives 500ms late is useless.
TCP vs UDP: Head-to-Head Comparison
| Feature | TCP | UDP |
|---|---|---|
| Connection | Connection-oriented (handshake) | Connectionless (fire and forget) |
| Reliability | Guaranteed delivery with ACKs | No delivery guarantee |
| Ordering | Strict in-order delivery | No ordering |
| Header size | 20-60 bytes | 8 bytes |
| Flow control | Sliding window | None |
| Congestion control | Built-in (slow start, AIMD) | None (application must handle) |
| Speed | Higher latency (handshake + ACKs) | Lower latency (no overhead) |
| Use case | Web, email, file transfer, APIs | Video, gaming, DNS, streaming |
| Error detection | Checksum + retransmission | Checksum only (optional in IPv4) |
When to Use TCP
TCP is the right choice when data integrity matters more than latency:
- Web traffic (HTTP/HTTPS) -- A corrupted or missing HTML tag breaks the entire page. You need every byte.
- API calls -- REST, GraphQL, gRPC -- all rely on TCP because partial responses are useless.
- File transfers -- FTP, SCP, rsync. A missing byte in a binary corrupts the file.
- Email (SMTP/IMAP) -- Losing part of an email is unacceptable.
- Database connections -- PostgreSQL, MySQL, and Redis all use TCP. Queries must arrive complete.
When to Use UDP
UDP wins when timeliness matters more than completeness:
- Video conferencing -- A dropped frame is invisible; a delayed frame creates jarring lag. Zoom, Meet, and Teams use UDP.
- Online gaming -- Player positions update 30-60 times per second. A stale position is worse than a missing one.
- Live streaming -- Viewers tolerate a brief glitch far better than buffering while TCP retransmits.
- DNS queries -- A single request-response pair. The overhead of a TCP handshake doubles the latency for something that fits in one packet.
- VoIP -- Voice packets older than ~150ms are useless. Retransmitting them adds latency without adding value.
- IoT telemetry -- Sensors sending thousands of readings per second. Missing one reading is fine; backpressure from TCP is not.
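For most of the cases above, "handle loss yourself" just means a receive timeout: if the datagram isn't there when you need it, move on. A self-contained sketch over localhost:

```python
import socket

# A receiver bound to localhost; port 0 lets the OS pick a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(0.2)            # the app, not UDP, decides how long to wait
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"reading=21.5", addr)

# Datagram boundaries are preserved: one sendto(), one recvfrom().
data, _ = recv.recvfrom(1024)

# When no packet shows up, the only signal is our own timeout firing --
# exactly the "stale reading? skip it" policy telemetry apps want.
try:
    recv.recvfrom(1024)
    timed_out = False
except socket.timeout:
    timed_out = True

recv.close()
send.close()
assert data == b"reading=21.5" and timed_out
```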
QUIC: The Best of Both Worlds
QUIC, the protocol behind HTTP/3, is built on UDP but adds back reliability, congestion control, and ordering -- with ordering scoped per stream rather than per connection. It's essentially TCP's feature set reimplemented in userspace on top of UDP, with critical improvements:
- No head-of-line blocking -- Each QUIC stream is independently ordered. Lost data on one stream doesn't block others.
- Faster handshakes -- QUIC combines transport and TLS handshakes into 1 RTT (0-RTT on resumption).
- Connection migration -- Connections survive IP address changes (Wi-Fi to cellular).
- Userspace implementation -- QUIC runs in application space, not the kernel, so it can evolve without OS updates.
Pro tip: QUIC doesn't replace the TCP vs UDP decision for your own protocols. It's a general-purpose transport (standardized as RFC 9000), but today it's deployed almost entirely as the substrate for HTTP/3. If you're building a custom protocol, you still need to choose: TCP for reliability, UDP for speed, or implement your own reliability layer on UDP (which is what game developers have been doing for decades).
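A custom reliability layer doesn't have to be complicated. This toy sketch (hypothetical framing -- real libraries such as ENet differ in detail) tags every message with a sequence number and queues retransmission only for messages marked reliable:

```python
import struct

# Wire format: 4-byte sequence number, 1-byte "reliable" flag, payload.
HEADER = struct.Struct("!IB")

def pack(seq, reliable, payload):
    return HEADER.pack(seq, int(reliable)) + payload

def unpack(datagram):
    seq, reliable = HEADER.unpack_from(datagram)
    return seq, bool(reliable), datagram[HEADER.size:]

class Sender:
    def __init__(self):
        self.seq = 0
        self.unacked = {}                   # seq -> datagram awaiting ACK

    def send(self, payload, reliable):
        dgram = pack(self.seq, reliable, payload)
        if reliable:
            self.unacked[self.seq] = dgram  # candidate for retransmission
        self.seq += 1
        return dgram

    def on_ack(self, seq):
        self.unacked.pop(seq, None)

    def retransmit(self):
        return list(self.unacked.values())  # resend only what still matters

# A chat message must arrive; a position update is stale the moment
# a newer one exists, so it's fire-and-forget.
s = Sender()
s.send(b"gg", reliable=True)
s.send(b"x=3,y=7", reliable=False)

# If both packets are lost, only the chat message is queued for resend.
assert [unpack(d)[2] for d in s.retransmit()] == [b"gg"]
s.on_ack(0)                                 # receiver finally acked seq 0
assert s.retransmit() == []
```

The design point is selective reliability: the transport-level choice (TCP's everything-or-nothing vs UDP's nothing) becomes a per-message choice in your application.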
Pricing and Tooling for Network Protocol Analysis
Debugging TCP and UDP issues requires the right tools:
| Tool | Purpose | Cost |
|---|---|---|
| Wireshark | Packet capture and deep protocol analysis | Free (open source) |
| tcpdump | CLI packet capture on Linux/macOS | Free (built-in) |
| Datadog NPM | Network performance monitoring at scale | $5/host/month |
| AWS VPC Flow Logs | Network traffic logging in AWS | $0.50/GB ingested |
| Cloudflare Radar | Internet traffic insights and protocol adoption | Free |
Frequently Asked Questions
Is UDP faster than TCP?
UDP has lower latency because it skips the connection handshake, doesn't wait for acknowledgments, and doesn't retransmit lost packets. However, "faster" is nuanced -- UDP sends data sooner, but doesn't guarantee it arrives. For bulk data transfer, TCP with its congestion control can achieve higher sustained throughput. UDP is faster for latency-sensitive, loss-tolerant applications.
Can I use TCP for gaming?
You can, and some games do -- particularly turn-based or slower-paced games where latency isn't critical. But fast-paced multiplayer games (FPS, battle royale) use UDP because TCP's retransmission delays cause visible lag. Many game engines use UDP with a custom reliability layer that selectively retransmits only the data that actually needs to arrive.
Why does DNS use UDP?
Most DNS queries and responses fit in a single packet (under 512 bytes historically, 4096 with EDNS). A TCP handshake would double the latency for what's a single round-trip exchange. DNS does fall back to TCP for large responses (like DNSSEC-signed records) or zone transfers, but the common case is optimized for UDP's speed.
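To see how small that single packet really is, here's a sketch that hand-assembles a minimal A-record query in RFC 1035 wire format (the query ID and hostname are arbitrary); actually sending it is left as a comment:

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Assemble a minimal DNS A-record query (RFC 1035 wire format)."""
    # Header: ID, flags (RD=1, "please recurse"), 1 question, 0 answers.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, ending in a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    qtype_qclass = struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + qname + qtype_qclass

query = build_dns_query("example.com")

# 12-byte header + 13-byte name + 4 bytes of type/class: 29 bytes total,
# comfortably one UDP datagram.
assert len(query) == 29
# To actually resolve, you would sendto() this to a resolver on port 53:
#   sock.sendto(query, ("8.8.8.8", 53)); sock.recvfrom(512)
```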
What happens when a UDP packet is lost?
Nothing, at the protocol level. UDP doesn't detect or recover from loss. The sending application gets no indication that the packet was lost. The receiving application simply never sees the data. If your application needs reliability over UDP, you must implement your own acknowledgment and retransmission logic -- which is exactly what protocols like QUIC and many game networking libraries do.
Does TCP guarantee data arrives?
TCP guarantees that data arrives complete, in order, and uncorrupted -- or that the connection fails with an error. It does not guarantee delivery within any fixed time. If the network is severely degraded, TCP keeps retransmitting with exponential backoff until its timeout expires and the connection resets. So you get the data or you get an error, but never silence.
Why not always use TCP since it's more reliable?
TCP's reliability comes at a cost: higher latency, connection state overhead, and head-of-line blocking. For real-time applications, TCP's guarantees actively hurt user experience. A video call that freezes for 200ms while TCP retransmits a single packet is worse than one that drops a frame and moves on. The right protocol depends on whether your application values completeness or timeliness.
What is TCP Fast Open?
TCP Fast Open (TFO) is an extension that allows data to be sent with the SYN packet on repeat connections, saving a full round trip before application data flows. It uses a cookie from a previous connection to authenticate the client. Support is widespread in operating systems but adoption is low because middleboxes (firewalls, NATs) often strip the TFO option. QUIC's 0-RTT achieves the same goal more reliably.
Conclusion
The TCP vs UDP decision comes down to one question: can your application tolerate lost data? If the answer is no -- web pages, API calls, file transfers -- use TCP and accept the latency overhead. If the answer is yes -- real-time video, gaming, telemetry -- use UDP and handle reliability in your application layer where it matters. And if you're building on HTTP, just use HTTP/3 and let QUIC make the trade-off for you. Stop defaulting to TCP because it's "safer." Match the protocol to the problem.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.