Networking

WebSockets vs SSE vs Long Polling: When to Use Each

A practical comparison of WebSockets, Server-Sent Events, and long polling with code examples, use case mapping, scaling strategies, and a clear decision framework.

Abhishek Patel · 10 min read



Three Ways to Push Data to a Browser

HTTP was designed as request-response: the client asks, the server answers. But modern applications need the server to initiate communication -- chat messages, live scores, stock tickers, notifications. You have three mainstream options for server-to-client communication: WebSockets, Server-Sent Events (SSE), and long polling. Each has distinct trade-offs in complexity, scalability, and browser support, and picking the wrong one creates headaches that compound as you scale.

I've built production systems with all three. Here's what actually matters when choosing between them.

What Is Long Polling?

Definition: Long polling is a technique where the client sends an HTTP request and the server holds the connection open until new data is available, then responds. The client immediately sends another request, creating a loop that simulates real-time server push using standard HTTP request-response semantics.

Long polling is the oldest trick in the book. It works everywhere -- any HTTP client, any server, any proxy, any load balancer. The client sends a normal GET request, and instead of responding immediately, the server waits. When data becomes available (or a timeout occurs), the server responds and the client instantly reconnects.

// Client-side long polling
let lastEventId = 0;

async function poll() {
  try {
    const response = await fetch(`/api/events?since=${lastEventId}`);
    const data = await response.json();
    lastEventId = data.lastEventId;
    handleEvents(data.events);
  } catch (error) {
    // Back off briefly on network errors before retrying
    await new Promise(resolve => setTimeout(resolve, 3000));
  }
  poll(); // Immediately reconnect
}
poll();
// Server-side (Express)
app.get('/api/events', async (req, res) => {
  const since = req.query.since;
  // waitForEvents resolves with [] if nothing arrives within 30s,
  // so the request is answered exactly once
  const events = await waitForEvents(since, 30000);
  res.json({ events, lastEventId: events.at(-1)?.id ?? since });
});

Long Polling Trade-offs

  • Pro: Works through every proxy, firewall, and load balancer. Zero browser compatibility concerns.
  • Pro: Uses standard HTTP -- no special server support needed.
  • Con: Each "push" consumes a full HTTP request-response cycle with headers, cookies, and all the overhead.
  • Con: Holding connections open ties up server threads/connections. At 10,000 concurrent users, you have 10,000 pending HTTP requests.
  • Con: There's a latency gap during reconnection. If data arrives just after the server responds, it sits on the server until the client's next request lands.

What Is Server-Sent Events (SSE)?

Definition: Server-Sent Events (SSE) is a browser API and protocol where the server pushes events to the client over a single, long-lived HTTP connection. The server sends text-based event streams using a simple format. SSE is unidirectional (server to client only), auto-reconnects on failure, and works over standard HTTP.

SSE uses the EventSource API in the browser and a simple text protocol on the server. The server sends events as plain text lines, each prefixed with data:, separated by blank lines. It's stupidly simple, and that's its greatest strength.

// Client-side SSE
const source = new EventSource('/api/stream');

source.addEventListener('message', (event) => {
  const data = JSON.parse(event.data);
  handleUpdate(data);
});

source.addEventListener('error', () => {
  console.log('Connection lost, auto-reconnecting...');
  // EventSource reconnects automatically
});
// Server-side (Express)
app.get('/api/stream', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });

  const sendEvent = (data) => {
    res.write('data: ' + JSON.stringify(data) + '\n\n');
  };

  // Subscribe to your event source
  const unsubscribe = eventBus.subscribe(sendEvent);

  req.on('close', () => {
    unsubscribe();
  });
});

SSE Protocol Format

The wire format is remarkably simple:

event: priceUpdate
data: {"symbol": "AAPL", "price": 178.52}
id: 1234

event: priceUpdate
data: {"symbol": "GOOG", "price": 141.80}
id: 1235

The id field enables automatic resume. If the connection drops, the browser reconnects and sends a Last-Event-ID header so the server can replay missed events.
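To see what resume looks like on the server, here's a minimal sketch assuming an in-memory event log (appendEvent, formatSSE, and eventsSince are illustrative names, not part of any library; a real deployment would back the log with Redis or a database):

```javascript
// In-memory event log with monotonically increasing ids
const log = [];
let nextId = 1;

function appendEvent(data) {
  const event = { id: nextId++, data };
  log.push(event);
  return event;
}

// Serialize one event in the SSE wire format shown above
function formatSSE({ id, data }, eventName = 'message') {
  return `event: ${eventName}\ndata: ${JSON.stringify(data)}\nid: ${id}\n\n`;
}

// On reconnect, replay everything after the client's Last-Event-ID
function eventsSince(lastEventId) {
  const since = Number(lastEventId) || 0;
  return log.filter(e => e.id > since);
}
```

In the Express handler from the previous section, you would read `req.headers['last-event-id']` at connection time and `res.write(formatSSE(e))` for each replayed event before subscribing to live ones.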

What Are WebSockets?

Definition: WebSocket is a protocol (RFC 6455) providing full-duplex, bidirectional communication over a single TCP connection. After an HTTP upgrade handshake, the connection switches to a binary frame-based protocol where both client and server can send messages independently at any time without the overhead of HTTP headers per message.

WebSockets start as an HTTP request with an Upgrade header. If the server agrees, the connection switches from HTTP to the WebSocket protocol -- a fundamentally different, lower-overhead framing protocol. From that point, both sides can send messages whenever they want.

// Client-side WebSocket
function connect() {
  const ws = new WebSocket('wss://example.com/ws');

  ws.onopen = () => {
    ws.send(JSON.stringify({ type: 'subscribe', channel: 'prices' }));
  };

  ws.onmessage = (event) => {
    const data = JSON.parse(event.data);
    handleUpdate(data);
  };

  ws.onclose = () => {
    // Must implement reconnection logic yourself
    setTimeout(connect, 3000);
  };
}
connect();
// Server-side (Node.js with ws library)
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    // ws delivers a Buffer; convert to string before parsing
    const parsed = JSON.parse(message.toString());
    if (parsed.type === 'subscribe') {
      subscribeToChannel(ws, parsed.channel);
    }
  });

  ws.on('close', () => {
    cleanupConnection(ws);
  });
});

WebSockets vs SSE vs Long Polling: Head-to-Head

| Feature | Long Polling | SSE | WebSockets |
| --- | --- | --- | --- |
| Direction | Client-initiated (simulated push) | Server to client only | Full duplex (bidirectional) |
| Protocol | Standard HTTP | HTTP with text/event-stream | WebSocket protocol (after HTTP upgrade) |
| Connection overhead | New request per update | Single persistent HTTP connection | Single persistent TCP connection |
| Auto-reconnect | Manual implementation | Built into EventSource API | Manual implementation |
| Resume support | Manual (track last event) | Built-in via Last-Event-ID | Manual implementation |
| Binary data | Possible (base64 encoded) | Text only | Native binary frame support |
| Browser support | Universal | All modern browsers | All modern browsers |
| Proxy/LB compatibility | Excellent | Good (some proxies buffer) | Requires WebSocket-aware proxy |
| Max connections (browser) | 6 per domain (HTTP/1.1) | 6 per domain (HTTP/1.1) | No HTTP limit (separate protocol) |
| Complexity | Low | Low | Medium-High |

Use Case Decision Framework

Here's how I decide which technology to use for a given feature:

  1. Does the client need to send frequent messages to the server? If yes, use WebSockets. SSE and long polling require separate HTTP requests for client-to-server communication.
  2. Is it server-to-client updates only? If yes, use SSE. It's simpler, auto-reconnects, has built-in resume, and works with standard HTTP infrastructure.
  3. Do you need it to work through aggressive corporate proxies? If yes, long polling is your safest bet. Some proxies buffer SSE or don't support WebSocket upgrades.
  4. Is this a chat or collaborative editing feature? WebSockets. The bidirectional, low-latency nature is essential.
  5. Is this a dashboard, feed, or notification system? SSE. The server pushes updates; the client just renders them.

Pro tip: SSE over HTTP/2 solves the 6-connection browser limit because HTTP/2 multiplexes streams over a single TCP connection. If your users are on modern browsers (they are), this removes SSE's biggest practical limitation.
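If nginx sits in front of your SSE endpoint, a few proxy settings matter in practice. A minimal sketch (the upstream name, path, and timeout value are illustrative, and exact directives vary by nginx version):

```nginx
server {
    listen 443 ssl http2;            # HTTP/2 toward the browser

    location /api/stream {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;      # keep the upstream connection open
        proxy_set_header Connection '';
        proxy_buffering off;         # flush each event immediately
        proxy_read_timeout 1h;       # don't kill idle streams after 60s
    }
}
```

`proxy_buffering off` addresses the "some proxies buffer SSE" problem from the comparison table: without it, nginx may hold events until its buffer fills.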

Scaling Considerations

Long Polling at Scale

Each pending request holds a connection. With thread-per-connection servers (like traditional PHP or Java servlet containers), this scales poorly. Async runtimes (Node.js, Go, Rust) handle it better since pending requests don't block threads. You'll still hit connection limits on load balancers and reverse proxies.

SSE at Scale

One persistent connection per client. The connection count is identical to WebSockets, but SSE is easier to load-balance because it's plain HTTP. You can use any HTTP load balancer with sticky sessions or connection-aware routing. Redis Pub/Sub or NATS behind your servers lets any instance push events to any connected client.

WebSockets at Scale

WebSocket connections are stateful -- you need to know which server holds which client's connection. This requires either sticky sessions on your load balancer or a pub/sub backbone (Redis, NATS, Kafka) that broadcasts events to all server instances. Load balancers must support WebSocket upgrades. Health checks are more complex because the connection is long-lived.
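The fan-out pattern can be sketched as follows -- a minimal, illustrative version of the subscribeToChannel and cleanupConnection helpers used in the server example above, with the broker wiring shown only as a comment (node-redis v4 syntax, assuming a running Redis):

```javascript
// Each server instance tracks only ITS OWN sockets; the broker
// delivers every published event to every instance.
const channels = new Map(); // channel name -> Set of local sockets

function subscribeToChannel(ws, channel) {
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel).add(ws);
}

function cleanupConnection(ws) {
  for (const sockets of channels.values()) sockets.delete(ws);
}

// Called when the broker delivers a message published by ANY instance;
// returns how many local sockets received it.
function deliverLocal(channel, payload) {
  const message = JSON.stringify(payload);
  let delivered = 0;
  for (const ws of channels.get(channel) ?? []) {
    ws.send(message);
    delivered++;
  }
  return delivered;
}

// Broker wiring with node-redis v4 would look roughly like:
//   const sub = redisClient.duplicate();
//   await sub.connect();
//   await sub.subscribe('prices', (msg) =>
//     deliverLocal('prices', JSON.parse(msg)));
```

Because any instance can publish and every instance delivers locally, the load balancer no longer needs to route a given event to the one server holding that client.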

Service and Library Recommendations

| Solution | Type | Best For | Pricing |
| --- | --- | --- | --- |
| Socket.IO | Library | WebSockets with fallback | Free (open source) |
| Ably | Managed service | Pub/sub with presence and history | Free tier / $29+/mo |
| Pusher | Managed service | Real-time channels for web/mobile | Free tier / $49+/mo |
| AWS AppSync | Managed service | GraphQL subscriptions | $2/million connections |
| Centrifugo | Self-hosted | Real-time messaging server | Free (open source) |

Watch out: Managed real-time services charge per message and per connection. A chat app with 10,000 concurrent users sending 5 messages/minute generates 72 million messages/day. Run the numbers before committing to a managed service -- self-hosting Centrifugo or a custom WebSocket server is often cheaper at scale.

Frequently Asked Questions

Should I use WebSockets or SSE for a notification system?

SSE. Notifications are server-to-client only, which is exactly what SSE is designed for. You get auto-reconnection and resume for free. WebSockets add unnecessary complexity when the client doesn't need to send real-time messages back. The client can use standard HTTP POST requests for any actions triggered by notifications.

Does Socket.IO use WebSockets?

Socket.IO uses WebSockets as its primary transport but starts with long polling and upgrades to WebSockets if available. It also adds features on top: automatic reconnection, room-based broadcasting, acknowledgments, and binary support. The downside is Socket.IO uses a custom protocol, so you can't connect with a plain WebSocket client -- both sides must use the Socket.IO library.

Can SSE handle thousands of concurrent connections?

Yes, with the right server architecture. Node.js, Go, and Rust handle SSE connections efficiently because they don't dedicate a thread per connection. A single Node.js process can handle 10,000+ SSE connections comfortably. The bottleneck is usually the load balancer's connection limit or memory, not the SSE protocol itself.

Why not just use WebSockets for everything?

WebSockets add complexity: custom reconnection logic, message acknowledgment if you need it, connection state management, WebSocket-aware load balancers, and a different debugging model (you can't just curl a WebSocket endpoint). If your use case is server-to-client updates, SSE gives you 90% of the benefit with 30% of the complexity. Use the simplest tool that solves your problem.

Does long polling still make sense in 2025?

Rarely for new projects. SSE has universal browser support now, and WebSockets have been supported everywhere for over a decade. Long polling still makes sense when you're dealing with extremely restrictive network environments (some corporate proxies) or building on infrastructure that doesn't support persistent connections. It's also a reasonable fallback strategy.

How do I handle authentication with WebSockets?

The WebSocket handshake is an HTTP request, so you can include cookies or an authorization header. For token-based auth, send the token as a query parameter during the handshake or send it as the first WebSocket message after connection. Don't put JWTs in query strings in production since they end up in server logs. Use a short-lived ticket exchanged via your REST API instead.

Can I use SSE with HTTP/2?

Yes, and you should. HTTP/2 multiplexes multiple streams over a single TCP connection, which eliminates the browser's 6-connection-per-domain limit for SSE. This means you can open multiple SSE connections to different endpoints without starving your other HTTP requests. Most modern deployments serve SSE over HTTP/2 by default.

Conclusion

Default to SSE for server-to-client updates -- it's simpler, reconnects automatically, and works with existing HTTP infrastructure. Reach for WebSockets only when you need bidirectional communication with low latency (chat, collaborative editing, multiplayer games). Long polling is a fallback for hostile network environments, not a first choice. Whatever you pick, plan your scaling strategy around a pub/sub backbone early. Retrofitting real-time architecture onto a system that wasn't designed for it is far more painful than getting it right from the start.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
