The rise and rise of JavaScript
I’ve been using JavaScript for a while now, but only really programming in anger with it during the last year. I’ve found it in turns frustrating and enlightening, ridiculous and brilliant. I have never felt so empowered by a language and its ecosystem, so I thought I’d take some time to write about why that is. I’m starting with a ramble through the history of JavaScript, or rather my undoubtedly inaccurate understanding of it, to provide some context for where and how I’ve been using it.
JavaScript the Survivor ¶
JavaScript has been around for a long time, and it has the scars and stories to prove it. As languages go it has more than its fair share of corner cases, incompatibility issues, frustrations and quirks. It’s an easy language to knock: it was hit with the same ugly stick as the other curly bracket languages, it’s more verbose than other dynamic languages like Ruby or Python, and if you can look at a piece of code and tell me what the value of the implicit this variable will be, well, you should be getting out more. (Ok, I’m being mean here. Nowadays I usually know what the value of this will be, but let’s just say it isn’t exactly obvious to a novice.)
JavaScript had a difficult childhood. It grew up in lawless neighbourhoods surrounded by gangs. It spent a lot of time listening to its parents fighting with one another about what they wanted it to be when it grew up. As any young language would, it tried hard to please its parents (and that barmy committee of uncles, and all the other random people trying to shape its future). As a result it suffers from what can only be described as behavioural quirks. Depending on which gang it’s hanging out with it will sometimes happily talk to you through its console.log, at other times refuse to say anything, and yet other times it will blow up in your face (but not tell you why).
It has collections, just like the other languages, but no sensible way of traversing them. Instead you are left with the delightful:
for (var key in map) {
  if (map.hasOwnProperty(key)) {
    var value = map[key];
    // right, now we can get some work done
  }
}
Now you see that var key at the top of the for loop? That’s not declaring a variable, oh no. It’s saying that somewhere else there’s a variable called key (right at the top of the nearest containing function, it turns out). Oh, and that key variable is visible all through that function, and any functions it contains, not just tucked away in the for loop. Brilliant!
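To make that concrete, here’s a little sketch of the hoisting behaviour (the function and data are made up purely for illustration):

function printKeys(map) {
  // Thanks to hoisting, this behaves as if "var key;" appeared right here.
  console.log(key);             // undefined, not a ReferenceError

  for (var key in map) {
    console.log(key);           // "a", then "b"
  }

  console.log(key);             // still in scope: logs the last key, "b"
}

printKeys({ a: 1, b: 2 });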
So you see, it’s easy to knock dear old JavaScript. Google has even gone to the trouble of coming up with an entire new language for the browser to try to appeal to the Java- and C#-loving, class-conscious (ahem) programmer. But I think they are wrong to do that. All JavaScript needs is some love, and to be treated with a little respect. Sure every now and then it will crap on the bed, but it will also reward you with a wonderful development ecosystem if you make the effort to learn its little ways.
JavaScript grows up ¶
During the browser gang wars, several things happened that started bringing peace and unity to browser-based development. JavaScript started to gain a reputation as someone who could smarten up your neighbourhood: it could provide all sorts of excitement and turn your boring old browser into a cool, asynchronous playground. However it still had its behavioural oddities, and the different gangs had virtually their own language for saying the same thing (as gangs do). Doing something in more than one neighbourhood often involved pretty much starting over.
However, a number of the other kids decided JavaScript wasn’t so damaged that it was beyond help. They realised the gangs were more similar than different and started studying their various quirks. Mediators began to emerge between JavaScript and the various gangs, so you could talk to a mediator and they would figure out what you meant. This was great, except it led to the forming of a new set of gangs. You could join the GWTs or the extjs, the prototypers or the YUIs. But once you’d chosen your path it proved difficult if not impossible to work with the other mediators. They all had a different idea about how to speak both with the gangs and with JavaScript itself. Some of them wanted to “protect” you from JavaScript’s perceived ugliness, and some even tried to pretend it was a whole other thing (say, class-based OO). Others were content just to talk to the gangs on your behalf and leave you to talk to JavaScript directly.
Then one day a new mediator called jQuery appeared on the scene. jQuery wasn’t like the others. Its designers figured that in the browser you mostly cared about exactly three things: handling events, manipulating the DOM, and talking back to your server, and that if they made those things really easy you could probably figure out the rest, including JavaScript itself. jQuery was game-changing in its elegance. (Don’t get me wrong, apparently inside the jQuery house is some pretty shocking JavaScript, but luckily no-one ever visits there.) It made use of familiar constructs, like using CSS selectors to identify elements in the DOM, just like you do. It allowed you to treat the DOM as the great big cuddly hierarchical data structure that it is, offering you the functional constructs of internal iterators, filters, chaining and so on. Suddenly the browser felt like it had been tamed.
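To give a flavour of that elegance, here’s a small sketch (the element IDs and the /items endpoint are invented):

// hide everything marked as done, using a plain CSS selector
$('li.done').hide();

// handle an event, talk back to the server, and walk the result
$('#refresh').click(function () {
  $.get('/items', function (items) {          // hypothetical endpoint
    $.each(items, function (i, item) {        // internal iteration
      $('#list').append('<li>' + item.name + '</li>');
    });
  });
});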
In the meantime JavaScript had gone into therapy and emerged as ECMAScript, which meant that if you wanted JavaScript in your neighbourhood at least you had a good idea of how it wanted to behave, even if you weren’t going to let it. This would have been great except it was that barmy committee of uncles that were acting as the therapists, so things moved like treacle. (One of the uncles, Uncle Doug, has some interesting tales about just how slowly things can move). Meanwhile other events were unfolding.
The browser grows up ¶
Google, the mighty advertising search company, had been slowly and successfully making inroads into browser-based apps with GMail, Google Docs, Maps and other cool technologies. They even have their own Marauders’ Map. Then a few years ago Google moved into the browser space. I’m not an industry pundit so I won’t try to predict what their business strategy was or is; suffice it to say it looked like they saw the browser as the platform of the future, and decided they may as well be the dog as the tail.
They were already stretching the browser to the limits of credibility with GMail - a fully-fledged mail, calendar and contacts app in your browser. They had to invent technologies like Google Gears to provide client-side storage, and then persuade you to install them in your browser (which was a pretty easy sell since it gave you offline mail and document editing in your browser). So they created their own browser, called it Chrome, and then decided to do three very smart things in order to make a consistent, high-performance browsing platform as ubiquitous as they needed it.
Firstly they open sourced their browser so people could see what they were up to. This meant other open source efforts, in particular Firefox, could borrow from their technical innovation, which raised the bar for everyone. (Firefox wasn’t waiting around either. With each new release of the browser and its Gecko engine came improvements in rendering times and JavaScript processing speed, as well as better support for emerging standards in the various browser technologies.) Secondly they started to push very strongly for clearer standards across browsers, giving rise to the nebulous HTML 5. Certainly there are other major players involved in HTML 5, not least the venerable Yahoo where Uncle Doug lives, but Google’s involvement created a sense of urgency that was never there before.
Thirdly they realised the browser is more than just a place to render HTML and execute JavaScript. Google Chrome contains a full suite of development tools: a REPL to interact with the JavaScript in the current page, a DOM inspector to traverse the DOM and inspect CSS styles and how they got there (not the DOM that was loaded, but its current state after all your JavaScript manipulations), a network analyser to tell you which page elements are loading (and failing) and how long they took, basically everything the jobbing web developer needs to iterate quickly. Other browsers, notably Firefox, have these available as plugins like Firebug and Web Development tools, but Chrome gives you them out of the box.
Even Microsoft decided they needed to get on board with the new wave of HTML 5. First they started referring to the emerging HTML 5 standards as “tier 1 supported technologies” which was heresy to the Microsoft Old Guard. Then they quietly dropped Silverlight, which was their competing not-invented-here browser technology (and which of course only works on Windows). After about 100 years of IE6, they released IE7, IE8 and IE9 in rapid succession, each one faster and more standards compliant than its predecessor (although it’s fair to say that IE is still the red-headed stepchild of web conformance, but at least they’re playing the game).
Now HTML 5 is a huge slew of initiatives covering not only CSS, HTML and JavaScript standards but 2d and 3d graphics, full-duplex I/O with WebSockets (and half-duplex with EventSource), sounds, video… In fact if you stand back and squint you could be forgiven for mistaking the HTML 5 ecosystem for an entire operating system. Whose system language is JavaScript.
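As a tiny taste of the full-duplex side of that, here’s a sketch of the browser’s WebSocket API (the URL is made up):

var socket = new WebSocket('ws://example.com/updates');  // hypothetical server
socket.onopen = function () {
  socket.send('hello');                                  // push data up
};
socket.onmessage = function (event) {
  console.log('server says: ' + event.data);             // receive pushes back
};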
JavaScript on the server ¶
The next piece of the JavaScript puzzle starts a couple of years ago with a young hacker named Ryan Dahl. He figured we were doing I/O all wrong on the server, and that in today’s multi-core, highly-concurrent world this was never going to scale.
Mostly we do something like this:
context = "today"; // a local variable
data = file.read(); // synchronously read data
process(data, context); // and process it
where the thread doing the read blocks until data has been read from the filesystem and loaded into a buffer. To give some idea of scale, memory is tens or hundreds of times slower than CPU cache (measured in orders of ns), socket I/O is thousands of times slower than memory I/O (orders of μs), and file and network I/O is thousands of times slower than socket I/O (orders of ms). That means your thread could be doing literally millions of things instead of clogging up the place waiting for some data to come back.
An asynchronous version might look more like this:
context = "today";          // a local variable
file.read(                  // ask for some data
  function(data) {          // callback is invoked some time later,
    process(data, context); // still bound to context
  }
);
where the read function immediately returns and allows execution to continue, and at some arbitrary point in the future, when the data is available, that anonymous function is called with a reference to the context variable still available, even if it’s gone out of scope or changed its value since the call to file.read.
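In node.js terms that pattern looks something like this sketch, using the built-in fs module (the filename and the processing function are just examples):

var fs = require('fs');

var context = 'today';                        // a local variable

function processData(data, context) {         // stands in for process() above
  console.log(context + ': read ' + data.length + ' characters');
}

fs.readFile('data.txt', 'utf8', function (err, data) {   // ask for some data
  if (err) throw err;
  processData(data, context);                 // context is still bound here
});

console.log('read requested, carrying on');   // runs before the callback fires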
Now this is completely foreign to most server-side programmers. For a start the major languages of C#, Java and C++ are various kinds of useless at bindings and closures (and even Python and Ruby aren’t big on callbacks), so most server-side programmers aren’t used to thinking in these terms. Instead they’re quite happy to spin up a few more threads or fork and join the slow stuff. And this is just the first turtle: what if the process function itself is asynchronous and takes a callback?
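Carrying on with the same hypothetical functions, the nesting starts to look like this:

file.read(function (data) {
  process(data, function (result) {     // process now takes a callback too
    save(result, function () {          // ...and so does the next step
      console.log('done');              // turtles all the way down
    });
  });
});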
However this model is bread and butter to your JavaScript programmer. The entire browser is predicated on a Single Event Loop: There is only one conch shell so if you (or more accurately your function) has it, you can be sure no-one else has. Your concurrent modification woes evaporate. Mutable state can be shared, because it will never be concurrently accessed, even for reading (which is why the whole DOM-in-the-browser thing works so well with event handlers). They are also aware that the price of this is to give back the conch as quickly as you can. Otherwise you bring the whole browser grinding to a halt, and no-one wants that.
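In browser terms, giving back the conch usually means chopping long-running work into small chunks, along these lines (a sketch; the chunk size and data are arbitrary):

function processInChunks(items, handleItem) {
  var index = 0;
  function doChunk() {
    var end = Math.min(index + 100, items.length);
    for (; index < end; index++) {
      handleItem(items[index]);       // do a slice of the work...
    }
    if (index < items.length) {
      setTimeout(doChunk, 0);         // ...then give back the conch
    }
  }
  doChunk();
}

var numbers = [];
for (var i = 0; i < 10000; i++) numbers.push(i);
processInChunks(numbers, function (n) { /* pretend this is expensive */ });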
So if you’re going to try to pull off this kind of evented I/O shenanigans on the server, well what better language to use than one where this is the core paradigm? Ryan looked at Google Chrome’s shiny new V8 JavaScript engine and asked: what would it take to get this running on a server, outside of the browser? To be useful it would need access to the filesystem, to sockets and the network, to DNS, to processes, … and that’s probably about it. So he set to work building a server-side environment for V8, and node.js was born.
Fast-forward a couple of years and node.js has grown into a credible server-side container. As well as the core container, the node.js ecosystem contains a saner-than-most package manager called npm, and libraries for everything from Sinatra-like web servers to database connectivity and handlers for protocols like IMAP, SMTP and LDAP. In fact I find most of my time is spent coding in the problem domain rather than trying to wire together all the surrounding infrastructure cruft. (Sometimes I find myself deep in a rat hole trying to figure out why a library isn’t working, but such is the way of open source. At least I can look at the source and add some console debugging.)
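To show how little ceremony is involved, here’s a minimal sketch using nothing but node’s built-in http module:

var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'application/json' });
  response.end(JSON.stringify({ hello: 'world' }));
}).listen(3000);

console.log('listening on http://localhost:3000');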
It uses native evented I/O on Linux and until recently would only run with an emulated I/O layer on Windows. Until Microsoft got involved. Yes, the mighty Microsoft is actively aiding and sponsoring development of tiny little node.js to try to produce equivalent performance on Windows using its (completely different) native evented I/O libraries.
The current stable branch of node.js is showing bonkers fast performance on both Linux and Windows, and only seems to be getting faster. (The fact that Google’s V8 team are big fans and are actively considering node.js on the V8 roadmap isn’t doing any harm either: they suddenly received a slew of bug reports and feature requests as the uptake of node.js pushed the V8 engine in new and unexpected ways.)
JavaScript everywhere ¶
And it doesn’t stop there. JavaScript’s serialization form, JSON, is becoming ubiquitous as a lighter-weight alternative to XML for streaming structured data, and NoSQL databases like mongo are happily using JSON and JavaScript in the database as a query language. This means, for the first time, you can have the same JavaScript function in the browser, on the server and in the database. Just think for a moment how many times you’ve written the same input validation or data-checking logic in three different technologies. I’ve not been using any database-side JavaScript in the work I’ve been doing, but it is an interesting proposition.
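As a sketch of what that could mean, a check like this could in principle be shared by all three tiers unchanged (the rules are deliberately simplistic and invented):

// one definition of "valid", usable in the browser, on the server
// and in a JavaScript-speaking database
function isValidUser(user) {
  return typeof user.name === 'string' &&
         user.name.length > 0 &&
         /^[^@]+@[^@]+$/.test(user.email);
}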
Tablets such as the sadly-but-hopefully-not-forever-demised HP TouchPad and the nascent Google ChromeOS tablets are using JavaScript (and node.js!) as a core technology. Instead of targeting a closed platform like iOS you can simply write a web app—possibly taking advantage of other tablet-specific APIs or event sources such as a GPS or accelerometer—and it will Just Work on your tablet device. Now that’s pretty sweet! Sadly the most open technology is available on the smallest proportion of devices, and vice versa, but you can still reach a lot of people just by writing a slick web application.
One interesting tangent is that a number of folks are considering JavaScript as a compilation target for other languages! (The implication being that JavaScript is so broken that you wouldn’t want to actually write it, but it’s ok as a kind of verbose assembler language for the browser.) The Clojure language has recently spawned ClojureScript, which is a compiler that emits JavaScript so you can run Clojure in your browser, and a new language called CoffeeScript has emerged, which is a sort of “JavaScript, the good parts with a Pythonesque syntax and a nod to Ruby.” It aligns very closely with JavaScript (you can see exactly how your CoffeeScript expressions turn into the equivalent JavaScript) but lets you get away with less syntax. I used CoffeeScript for a short while, and found that the best thing it gave me was an appreciation of how to stick to the good bits of JavaScript. I expect I’ll talk more about why I leapt into CoffeeScript, and why I then decided to decaffeinate, in my next article.
And finally… ¶
One thing I keep noticing is that by being this late to the JavaScript party, a lot of the lumpier language-level problems have been solved, and solved well. One example is a library called underscore.js that describes itself as “the tie to go along with jQuery’s tux.” It elegantly provides missing functional idioms like function chaining and the usual suspects of filter, map, reduce, zip and flatten, in a cross-browser and node-compatible way, as well as some useful methods on object hashes like keys, values, forEach and extend (to merge objects). Similarly a library called async provides clean ways of chaining multiply-nested callbacks, managing multiple fork-joins, firing a function only after so many calls, and so on.
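A quick sketch of the sort of thing underscore gives you (the data is made up; in the browser you’d drop the require):

var _ = require('underscore');

var users = [                                  // made-up data
  { name: 'alice', active: true },
  { name: 'bob',   active: false }
];

// chaining, filter and map over a plain array
var activeNames = _.chain(users)
  .filter(function (user) { return user.active; })
  .map(function (user) { return user.name; })
  .value();                                    // ['alice']

// extend to merge plain objects
var defaults = { colour: 'blue', size: 'medium' };
var options = _.extend({}, defaults, { size: 'large' });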
As I said at the beginning of this article, I don’t remember feeling so empowered by a technology, in terms of being able to deliver full-stack applications, as I am by the combination of HTML 5 and server-side JavaScript. It’s an exciting time to be developing for the browser—and the web—and it’s never been easier.