JS Questions: What’s the Event Loop?

This is the eleventh in a series of posts I’m writing that aims to provide concise, understandable answers to JavaScript-related questions, helping me better understand both foundational concepts and the changing landscape of language updates, frameworks, testing tools, etc.

What’s the Event Loop?

Like any programming language, JavaScript has a specific process for handling function calls in a program, and more specifically, for deciding when each function will run. Unlike many other languages, JavaScript runs in the browser, meaning that users may be interacting with the program (clicking a mouse or scrolling, for instance) at the same time that other processes are already running. The event loop helps create a fluid experience for users by dictating a flow that allows relatively long-running processes to occur alongside client-side interactions.

Without asynchronous features built into Node and browsers, JavaScript would only be able to handle one event at a time. Each function call would be added to a stack and, one by one, popped off the top and run. It's easy to imagine how this could become problematic in a web app: if a notice telling a user she had successfully updated her account were rendered to a webpage for three seconds using setTimeout, the user would not be able to interact with the page until the setTimeout had completed.

Some operations do happen synchronously in JS. Loops, for example, run on the call stack and prevent a webpage from re-rendering until they have completed.

However, thanks to the event loop built into browsers, processes such as AJAX requests and timeouts can run in the background, with their callback functions placed in a queue and added to the call stack only once the entire stack has cleared. The below code snippet illustrates this.
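The original snippet doesn't appear here, but based on the numbered steps that follow, it would look something like this (the function name is my own placeholder):

```javascript
// A reconstruction of the example described below: paste it into the console.
function eventLoop() {
  setTimeout(() => {
    console.log(3); // runs only after the call stack has cleared
  }, 0);
  console.log(1); // runs immediately
  return 2;       // the console prints the return value
}

eventLoop();
```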

If you paste this in your console, you’ll see 1 logged, then 2 returned, then 3 logged. Here’s what happens:

  1. The eventLoop function gets added to the call stack.
  2. The setTimeout timer begins and the callback function is added to the queue after 0 milliseconds.
  3. The number 1 is immediately logged.
  4. The eventLoop function returns 2 and the call stack is cleared.
  5. The setTimeout callback runs, logging 3.

Note that the call stack runs functions in 'last in, first out' order (meaning that the first function added to the stack is the last to finish). That's why, in the code below, the strings are already concatenated when logged:
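The snippet itself is missing here, but a minimal sketch matching the description that follows (with hypothetical greet and concat functions) might be:

```javascript
function concat(a, b) {
  return a + b;
}

function greet(name) {
  // concat is pushed onto the stack on top of greet, so it must
  // finish before greet can continue.
  const greeting = concat('Hello, ', name);
  console.log(greeting);
}

greet('Ada'); // logs "Hello, Ada"
```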

First, the greet function is added to the call stack. Then, the concat function is added to the call stack. Concat runs, the greeting is logged, and the greet function is cleared from the stack.

Meanwhile, the queue runs callback functions in ‘first in first out’ order. The first item added to the queue is the first to be added to the call stack. That means, if multiple AJAX requests are made, for instance, the first request to finish will be the first to have its callback function run.
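A quick way to see the queue's ordering is to schedule two zero-delay timeouts: their callbacks enter the queue in order and run first in, first out once the stack has cleared.

```javascript
// Both callbacks are queued immediately, but neither runs until
// the synchronous code on the call stack has finished.
setTimeout(() => console.log('first in the queue'), 0);
setTimeout(() => console.log('second in the queue'), 0);
console.log('call stack first');
// logs "call stack first", then "first in the queue", then "second in the queue"
```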

In sum, the event loop in JavaScript refers to the flow of synchronous and asynchronous code dictated by the call stack and the callback queue in browsers.




JS Questions: What is the DOM?

This is the second in a series of posts I’m writing that aims to provide concise, understandable answers to JavaScript-related questions, helping me better understand both foundational concepts and the changing landscape of language updates, frameworks, testing tools, etc.

What is the DOM?

The DOM, or Document Object Model, is the structure of HTML elements that make up a web page. It’s often described as a tree, with the document object (the whole HTML document) as the root, and the head, body, div, etc., nodes as branches.

The DOM is not a JavaScript-specific interface, but JavaScript can be used to access and modify DOM elements. With scripts, web pages can be made to react to user input, such as mouse clicks, by removing or adding elements or modifying the CSS. We call these interactions client-side events.
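As a minimal sketch, here's what reacting to a click might look like (the button id and styling are hypothetical, assuming a page containing a button element with id "add-note"):

```javascript
// React to a click by adding an element, styling it, and removing it again.
// Assumes the page contains <button id="add-note">Add note</button>.
function setUpNoteButton() {
  const button = document.getElementById('add-note');
  button.addEventListener('click', () => {
    const note = document.createElement('p');
    note.textContent = 'Note added!';
    note.style.color = 'steelblue';        // modify the CSS
    document.body.appendChild(note);       // add an element
    setTimeout(() => note.remove(), 3000); // remove it after three seconds
  });
}
```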

Originally, short scripts were added to otherwise static HTML pages, but now JavaScript can be used as the main source code for entire websites.




Intro to APIs

Over the holidays, I built a mini Rails app using OMDb, a movie database, to find a summary, IMDb rating, and genres for a movie based on its title. Choosing an API for this project took a lot more work than I had expected. I had no idea how many APIs were out there or how many purposes they can serve. So I decided to look into what exactly an API is and how they came about.

First, what is an API?

An API (Application Programming Interface) is code written for use by other applications, so that functionality and information can be reused or reinterpreted. The Google Maps API, for example, has been used to build applications such as a Zombie Outbreak Simulator and PlaneFinder, an app that shows real-time air traffic.

APIs provide information in a simplified format, such as JSON (JavaScript Object Notation), defined by W3Schools as, “a syntax for storing and exchanging data.”
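For instance, an OMDb-style movie lookup might return a JSON string like the one below (the field values here are illustrative), which JavaScript can parse into a plain object:

```javascript
// A hypothetical OMDb-style JSON response, parsed from a string
// into a JavaScript object whose fields can then be read directly.
const body = '{"Title":"Arrival","imdbRating":"7.9","Genre":"Drama, Sci-Fi"}';
const movie = JSON.parse(body);

console.log(movie.Title);      // "Arrival"
console.log(movie.imdbRating); // "7.9"
```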

“Unlike Web applications themselves, APIs are built for computer consumption rather than direct user interaction.” -Meg Cater, A Brief History of API-Based Web Applications

And how did APIs come about?

In some ways, it seems counterintuitive for a company to give away its product for others to use. First, I’ll say: not all companies are “giving away” their product; there are plenty of APIs with monthly fees.

In regard to free APIs, though, I’ve identified three forces (there are probably more) that push companies to offer them. One, the ethic of open data, which tech companies have embraced more than perhaps any other major industry. Two, unofficial APIs often pop up when official ones don’t exist. And three, the API itself is now seen as a valuable asset, one that makes a product like Google Maps the web’s default map provider.

Some of the earliest APIs were launched by Salesforce and eBay, both in 2000 (though this Quora post indicates that the topic is up for debate). Flickr, Twitter, Amazon, Facebook, and Google soon followed.

“The Google Maps API launched was just shy of 6 months after the release of Google Maps as an application, and was in direct response to the number of rogue applications developed that were hacking the application.” –History of APIs, apievangelist.com

APIs are becoming more widespread and essential to web applications, which more than ever integrate multiple technologies. In 2009, the US government launched the website data.gov, a major push toward open government data.

Social media APIs, such as Twitter’s, are some of the best known, but APIs are becoming more ubiquitous across all types of applications. Right now there are 16,541 APIs listed in the Programmable Web directory.

For a further introduction to APIs, I recommend checking out the five-part blog series on Programmable Web entitled, What are APIs and How do They Work?. Programmable Web is also a great source for discovering APIs through their API directory.




Latency & Bandwidth

While working on Udacity’s Intro to CS a few months ago, I first came across the term latency. This week, I looked back into the topic and discovered ‘latency vs. bandwidth,’ an apparently oft-misunderstood pairing of concepts, both of which relate to network performance.

So what are latency and bandwidth and how do they relate to each other? In the simplest terms, latency is the delay between sending and receiving data. Bandwidth is the amount of data that can be sent or received at one time. The blog Future Chips provides one illustrative analogy:

“When you go to buy a water pipe, there are two completely independent parameters that you look at: the diameter of the pipe and its length. The diameter determines the throughput of the pipe and the length determines the latency, i.e., the time it will take for a water droplet to travel across the pipe.”

(Throughput here is the same as bandwidth.)

This video provides another simple and informative (if somewhat awkward) example using balloons.

While throughput and latency can apply to many different situations (an example from Webopedia: “in accessing data on a disk, latency is defined as the time it takes to position the proper sector under the read/write head”), in terms of the internet, bandwidth and latency determine the time between you entering a web address in your browser and the website loading on your screen.

Essentially, when you make a web request, you are sending a packet of data to the server and eventually out to the world at large. That packet makes several stops on its journey, each an opportunity for delay (yippee!). (For more info about what happens, this delightfully gimmicky ninja-themed infographic is a good start.)

In the book High Performance Browser Networking: What Every Web Developer Should Know About Networking and Web Performance, author Ilya Grigorik defines four types of delay:

  1. Propagation delay: the time your data spends hurtling through space to reach its destination and back (at near light speed).
  2. Transmission delay: the time spent pushing data “on the wire,” determined by file size and bandwidth.
  3. Processing delay: checking where the data is going and whether there are any errors.
  4. Queuing delay: data waiting to be processed (using our pipe analogy from earlier, the pipe is already filled with water).
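The two delays we can easily estimate ourselves are transmission and propagation. Here's some rough back-of-the-envelope arithmetic (all the numbers below are hypothetical, chosen only for illustration):

```javascript
// Transmission delay = file size / bandwidth.
// Propagation delay = distance / signal speed.
const fileSizeBits = 5 * 8 * 1e6;   // a 5 MB file, in bits
const bandwidthBps = 100 * 1e6;     // a 100 Mbps connection
const distanceKm = 4000;            // roughly New York to San Francisco
const signalSpeedKmPerSec = 200000; // about 2/3 the speed of light, as in fiber

const transmissionMs = (fileSizeBits / bandwidthBps) * 1000; // ~400 ms
const propagationMs = (distanceKm / signalSpeedKmPerSec) * 1000; // ~20 ms

console.log(`transmission: ~${Math.round(transmissionMs)} ms, propagation: ~${Math.round(propagationMs)} ms`);
```

Note that even on a much faster connection, the propagation delay stays the same: that's why latency, not bandwidth, often limits how snappy a page feels.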

To recap: bandwidth determines the amount of data that can be processed at one time; latency is the total time between sending and receiving data.

Closing notes: From what I’ve read so far, High Performance Browser Networking is a gold mine of information and a surprisingly engaging read on this topic. Ilya Grigorik is a web performance engineer at Google and a co-chair of the W3C Web Performance Working Group; in other words, he’s a very knowledgeable person. The book is totally worth checking out, and available for free online. It’s also part of a Definitive Guide series of books about web development (all with animal sketches on the cover; I don’t yet know how the animals relate).

Finally, I’m a student! If you notice any errors in my understanding or terminology here, comment, please!

Understanding the Oct 21 Cyber Attack

On the Friday of the attack on Dyn that caused mass website outages, I unknowingly clicked on a link from an author’s website to her Twitter account. This lady needs to update her link, I grumbled to myself when it didn’t work. Then, I tried going straight to Twitter to search for her name. Huh, I thought, when the page wouldn’t load. A few hours later, I read an article about the Dyn cyberattack.

Obviously, it’s horrifying for a number of reasons that this kind of attack, with such widespread effects, can occur. But for me, there was a layer of excitement when I read about it, because I had just learned about how our computers access websites on the internet a few days earlier.

Basically, here’s what’s supposed to happen when you type a website address into your browser.

  1. A request is sent through the Internet Service Provider (ISP).
  2. The request lands at a Domain Name System (DNS) server. The server is an actual piece of hardware that helps locate the numerical Internet Protocol (IP) address of the website.
  3. When the website is located using the IP address, packets of data are sent back to your computer, and the website loads.

To take Twitter, Reddit, The New York Times, and others down on Oct 21st, hackers didn’t attack the individual sites. They attacked the company (Dyn) whose servers connect us with those sites. Dyn was inundated with requests from devices that had been infected with malware, meaning that clients with legitimate requests couldn’t get through. This is called a Distributed Denial-of-Service (DDoS) attack.

More knowledgeable people out there: if I got any parts of this wrong, please inform me. (Like I said, I just learned this myself a few weeks ago, so there’s potential for error). Understanding the gist, however, has certainly helped me grasp the severity and implications of this kind of attack. I’m curious to see how the balance of security and the growing availability and dependence on internet-connected devices will play out in the future.