Latency & Bandwidth

While working on Udacity’s Intro to CS a few months ago, I first came across the term latency. This week, I looked back into the topic and discovered ‘latency vs bandwidth,’ an apparently oft-misunderstood pairing of concepts that both relate to network performance.

So what are latency and bandwidth, and how do they relate to each other? In the simplest terms, latency is the delay between sending and receiving data. Bandwidth is the amount of data that can be sent or received at one time. The blog Future Chips provides one illustrative analogy:

“When you go to buy a water pipe, there are two completely independent parameters that you look at: the diameter of the pipe and its length. The diameter determines the throughput of the pipe and the length determines the latency, i.e., the time it will take for a water droplet to travel across the pipe.”

(Throughput here is the same as bandwidth.)
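
To put rough numbers on that analogy, here is a minimal Python sketch. (All of the pipe dimensions and flow rates below are invented purely for illustration.)

```python
# Pipe analogy in rough numbers: length -> latency, flow rate -> bandwidth.
# All of these values are invented purely for illustration.

pipe_length_m = 100          # how long the pipe is
droplet_speed_m_per_s = 2.0  # how fast water moves through the pipe
flow_rate_l_per_s = 5.0      # how much water the diameter lets through

# Latency: time for the FIRST droplet to travel the length of the pipe.
latency_s = pipe_length_m / droplet_speed_m_per_s

# Bandwidth: once water is flowing, time to deliver 50 liters.
delivery_s = 50 / flow_rate_l_per_s

print(f"first droplet arrives after {latency_s:.0f} s")  # 50 s
print(f"50 liters delivered in {delivery_s:.0f} s")      # 10 s
```

Notice that a wider pipe (more bandwidth) shortens the 50-liter delivery but does nothing for the first droplet; only a shorter pipe (lower latency) helps there.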

This video provides another simple and informative (if somewhat awkward) example using balloons.

While throughput and latency apply to many different situations (an example from Webopedia: “in accessing data on a disk, latency is defined as the time it takes to position the proper sector under the read/write head”), on the internet, bandwidth and latency together determine the time between entering a web address in your browser and the website loading on your screen.
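
One way to see the two effects separately is to time a request yourself. Below is a minimal Python sketch using only the standard library (the URL is just a placeholder; any reachable page works): the time to the first byte is dominated by latency, while the time to read the full body also depends on bandwidth.

```python
import time
import urllib.request

url = "http://example.com/"  # placeholder; any reachable URL works

start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    first_byte = response.read(1)        # block until the first byte arrives
    ttfb = time.perf_counter() - start   # dominated by latency
    body = response.read()               # pull down the rest of the page
    total = time.perf_counter() - start  # also depends on bandwidth

print(f"time to first byte: {ttfb * 1000:.1f} ms")
print(f"full download:      {total * 1000:.1f} ms for {1 + len(body)} bytes")
```

On a tiny page the two numbers will be nearly identical; the gap opens up on large downloads, where bandwidth starts to dominate.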

Essentially, when you make a web request, you are sending a packet of data out across the network to a server. That packet makes several stops on its journey, each one an opportunity for delay (yippee!). (For more info about what happens along the way, this delightfully gimmicky ninja-themed infographic is a good start.)

In his book High Performance Browser Networking: What Every Web Developer Should Know About Networking and Web Performance, author Ilya Grigorik defines four types of delay: propagation delay, the time your data spends hurtling through space to its destination (at near light speed); transmission delay, the time to push the data “onto the wire,” determined by file size and bandwidth; processing delay, checking where the data is headed and whether it contains any errors; and finally, queuing delay, data waiting to be processed (in our pipe analogy from earlier, the pipe is already full of water).
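
The first two of these are easy to estimate with back-of-the-envelope arithmetic. Here is a rough Python sketch (the distance, file size, and link speed are assumed values for illustration, not measurements):

```python
# Back-of-the-envelope estimates for two of the four delay types.
# The distance, file size, and link speed are assumed values, not measurements.

# Propagation delay = distance / signal speed.
# Light in fiber travels at roughly 2e8 m/s (about two-thirds of c).
distance_m = 4_000_000                 # a ~4,000 km cross-country link
signal_speed_m_per_s = 2e8
propagation_s = distance_m / signal_speed_m_per_s
print(f"propagation delay: {propagation_s * 1000:.0f} ms one way")  # 20 ms

# Transmission delay = file size / bandwidth.
file_bits = 1_000_000 * 8              # a 1 MB file
bandwidth_bps = 10_000_000             # a 10 Mbps link
transmission_s = file_bits / bandwidth_bps
print(f"transmission delay: {transmission_s * 1000:.0f} ms")        # 800 ms
```

Processing and queuing delays depend on the routers the packet passes through along the way, so they are much harder to estimate on paper.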

To recap: bandwidth determines the amount of data that can be processed at one time; latency is the total time between sending and receiving data.

Closing notes: From what I’ve read so far, High Performance Browser Networking is a gold mine of information and a surprisingly engaging read on this topic. Ilya Grigorik is a web performance engineer at Google and a co-chair of the W3C Web Performance Working Group, AKA he’s a very knowledgeable person. The book is totally worth checking out, and available for free online. It’s also part of a Definitive Guide series of books about web development (all with animal sketches on the cover; I don’t yet know how the animals relate).

Finally, I’m a student! If you notice any errors in my understanding or terminology here, comment, please!
