What exactly is HTTP polling?

What does polling actually mean? Essentially, it is a technique that allows a client to repeatedly check whether a server has new data. To avoid confusion, in this post the term client refers to the web browser we use to access websites, and server refers to the web server that serves those websites.

Some online chat applications make extensive use of polling, and pretty much any real-time feature can be built on top of it. Here’s a step-by-step data flow for a simple chat app:

  1. Kevin and Jasper are logged in to a chat application. Kevin sends Jasper a message. A request carrying the message travels from Kevin’s client to the server.
  2. The server receives the request and stores the message.
  3. Meanwhile, Jasper’s and Kevin’s clients both poll the server every 2 seconds.
  4. The server receives a poll request from Jasper’s client.
  5. The server responds with the new message, and it appears in Jasper’s browser.

This is how polling is used to allow back-and-forth communication in real time.
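Here is a minimal client-side sketch of that flow in TypeScript. The /messages endpoint, the since query parameter and the ChatMessage shape are all assumptions made for illustration; a real app would define its own API:

```typescript
// Minimal client-side sketch of the chat flow above.
// The /messages endpoint and the "since" query parameter are
// hypothetical: POST sends a message, GET returns messages
// newer than the given timestamp.

interface ChatMessage {
  from: string;
  text: string;
  sentAt: number; // Unix timestamp in milliseconds
}

// Step 1: Kevin's client sends the message to the server.
async function sendMessage(text: string): Promise<void> {
  await fetch("/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ from: "Kevin", text }),
  });
}

// Steps 3-5: Jasper's client polls every 2 seconds and renders
// whatever new messages the server returns.
let lastSeen = 0;

async function pollMessages(): Promise<void> {
  const res = await fetch(`/messages?since=${lastSeen}`);
  const messages: ChatMessage[] = await res.json();
  for (const msg of messages) {
    console.log(`${msg.from}: ${msg.text}`); // render in the UI
    lastSeen = Math.max(lastSeen, msg.sentAt);
  }
}

setInterval(pollMessages, 2000);
```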

Let’s take a look at two different types of polling.

Short polling

A typical scenario:

Client: “Hey Server, is the data ready for me?” (repeats this action every 2 seconds)
Server: “No, sorry, not yet.”
Client: “Hey Server, is the data ready for me?”
Server: “No, sorry, not yet.”
Client: “Hey Server, is the data ready for me?”
Server: “Yes, here you are.”
Client: “Hey Server, is the data ready for me?” (the client carries on polling indefinitely)

In the above example, the client polls the server for updates every 2 seconds. When new data becomes available, the server returns it in its response to the next poll. The client receives and processes the data, then carries on polling straight after. Short polling requires the server to reply to every request, even when it has nothing new to say.
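In code, that conversation can look like the sketch below. The /data endpoint is hypothetical, and we assume the server replies 204 No Content for “not yet” and 200 with a JSON body for “here you are”:

```typescript
// Short polling sketch: ask every 2 seconds and handle "not yet".
// The /data endpoint is hypothetical; we assume it replies
// 204 No Content while nothing is ready and 200 + JSON otherwise.

async function checkForData(): Promise<void> {
  const res = await fetch("/data");
  if (res.status === 204) {
    return; // "No, sorry, not yet." Try again on the next tick.
  }
  const data = await res.json(); // "Yes, here you are."
  console.log("received:", data);
}

setInterval(checkForData, 2000); // carries on polling indefinitely
```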

This is the traditional request-response model in action. The model is client-driven: the client is in control and decides how and when to call the server. The server cannot simply push data when it wants; it is client-dependent and will not do anything until the client requests data first.

The drawback to this approach is that the client does not know ahead of time whether new data is available; it just blindly asks at a set interval. You can imagine that with tens of thousands of clients repeatedly polling, resources on the server get wasted: it has to accept each request, allocate memory for it, process it and eventually send a response back, even when there is nothing new to report. There are, of course, ways to optimise this. We could, for instance, vary the polling interval on the client depending on how frequently data becomes available on the server, as in the sketch below. But even then the technique is not ideal.
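As a rough sketch of that optimisation, the client below (again using the hypothetical /data endpoint) doubles its waiting time while the server stays quiet and drops back to the fast interval as soon as data arrives:

```typescript
// Adaptive polling sketch: back off while the server is quiet and
// reset to the fast interval as soon as new data arrives.
// The /data endpoint is the same hypothetical one as above.

const MIN_INTERVAL_MS = 2_000;
const MAX_INTERVAL_MS = 30_000;
let interval = MIN_INTERVAL_MS;

async function adaptivePoll(): Promise<void> {
  const res = await fetch("/data");
  if (res.status === 204) {
    // Nothing new: double the wait, up to a ceiling.
    interval = Math.min(interval * 2, MAX_INTERVAL_MS);
  } else {
    console.log("received:", await res.json());
    interval = MIN_INTERVAL_MS; // data is flowing again
  }
  setTimeout(adaptivePoll, interval);
}

adaptivePoll();
```

A recursive setTimeout is used instead of setInterval so that the delay can change between iterations.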

Long polling

With long polling, the client makes a request to the server and waits for a response. The server does not respond immediately; it waits until new data is available and only then sends the response. The client receives the data and immediately sends another request.

The benefit of this technique is that the client does not waste the server’s resources by repeatedly sending requests. It only makes a new request once the previous one has either received a response or timed out.
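A client-side long polling loop can be as simple as the following sketch. The /updates endpoint is hypothetical; the key point is that the fetch call simply stays pending until the server has something to say:

```typescript
// Long polling sketch: the server holds each request open until it
// has data (or until it times out); the client reconnects immediately.
// The /updates endpoint is hypothetical.

async function longPoll(): Promise<void> {
  try {
    const res = await fetch("/updates");
    if (res.ok) {
      const data = await res.json();
      console.log("received:", data);
    }
    // A server-side timeout usually arrives as an empty 204/408
    // response; either way we fall through and reconnect.
  } catch {
    // Network error: pause briefly so we don't hammer a server
    // that is down.
    await new Promise((resolve) => setTimeout(resolve, 1_000));
  }
  longPoll(); // immediately open the next request
}

longPoll();
```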

Some downsides of both polling types are:

  • a new connection is created for every request, full HTTP headers are sent each time, and log entries pile up
  • depending on the infrastructure, every new connection may have to travel through firewalls, load balancers and so on, which adds significant overhead

Polling can be a good solution for small projects, but it can become a real headache for large-scale apps.

There is a more recent, reliable and lightweight protocol called WebSocket. It is well equipped to solve problems that require real-time communication between clients.
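For a taste of the difference, here is the browser’s built-in WebSocket API in a minimal sketch (the wss:// URL and the message shape are placeholders). One connection is opened up front, and after that either side can send at any time:

```typescript
// WebSocket sketch: one persistent, bidirectional connection instead
// of repeated HTTP requests. The wss:// URL is a placeholder.

const socket = new WebSocket("wss://example.com/chat");

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ from: "Kevin", text: "Hi Jasper!" }));
});

// The server can now push messages whenever it wants, with no polling.
socket.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data);
  console.log(`${msg.from}: ${msg.text}`);
});
```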

Many modern real-time applications use WebSocket together with long polling, with long polling acting as a fallback in case the connection cannot be upgraded to WebSocket. Popular libraries such as Socket.IO abstract away the low-level details of the protocol and expose a clean API. We’ll try to cover this topic in another post.

Until next time!