The rise and rise of JavaScript

I’ve been using JavaScript for a while now, but only really programming in anger with it during the last year. I’ve found it in turns frustrating and enlightening, ridiculous and brilliant. I have never felt so empowered by a language and its ecosystem, so I thought I’d take some time to write about why that is. I’m starting with a ramble through the history of JavaScript, or rather my undoubtedly inaccurate understanding of it, to provide some context for where and how I’ve been using it.

JavaScript the Survivor

JavaScript has been around for a long time, and it has the scars and stories to prove it. As languages go it has more than its fair share of corner cases, incompatibility issues, frustrations and quirks. It’s an easy language to knock: it was hit with the same ugly stick as the other curly bracket languages, it’s more verbose than other dynamic languages like Ruby or Python, and if you can look at a piece of code and tell me what the value of the implicit this variable will be, well, you should be getting out more. (Ok, I’m being mean here. Nowadays I usually know what the value of this will be, but let’s just say it isn’t exactly obvious to a novice.)

JavaScript had a difficult childhood. It grew up in lawless neighbourhoods surrounded by gangs. It spent a lot of time listening to its parents fighting with one another about what they wanted it to be when it grew up. As any young language would, it tried hard to please its parents (and that barmy committee of uncles, and all the other random people trying to shape its future). As a result it suffers from what can only be described as behavioural quirks. Depending on which gang it’s hanging out with it will sometimes happily talk to you through its console.log, at other times refuse to say anything, and yet other times it will blow up in your face (but not tell you why).

It has collections, just like the other languages, but no sensible way of traversing them. Instead you are left with the delightful:

for (var key in map) {
  if (map.hasOwnProperty(key)) {
    var value = map[key];
    // right, now we can get some work done
  }
}

Now you see that var key at the top of the for loop? That’s not declaring a variable, oh no. It’s saying that somewhere else there’s a variable called key (right at the top of the nearest containing function, it turns out). Oh, and that key variable is visible all through that function, and any functions it contains, not just tucked away in the for loop. Brilliant!
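To make that concrete, here’s a minimal sketch of hoisting at work (the function name is just for illustration):

```javascript
function hoistingDemo() {
  // `key` is already declared here thanks to hoisting, just not yet
  // assigned, so this is "undefined" rather than a ReferenceError.
  var before = typeof key;

  for (var key in { a: 1 }) {
    // do some work with the value...
  }

  // `key` is still in scope after the loop has finished.
  return [before, key];
}

console.log(hoistingDemo()); // [ 'undefined', 'a' ]
```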

So you see, it’s easy to knock dear old JavaScript. Google has even gone to the trouble of coming up with an entirely new language for the browser to try to appeal to the Java- and C#-loving, class-conscious (ahem) programmer. But I think they are wrong to do that. All JavaScript needs is some love, and to be treated with a little respect. Sure every now and then it will crap on the bed, but it will also reward you with a wonderful development ecosystem if you make the effort to learn its little ways.

JavaScript grows up

During the browser gang wars, several things happened that started bringing peace and unity to browser-based development. JavaScript started to gain a reputation as someone who could smarten up your neighbourhood: it could provide all sorts of excitement and turn your boring old browser into a cool, asynchronous playground. However it still had its behavioural oddities, and the different gangs had virtually their own language for saying the same thing (as gangs do). Doing something in more than one neighbourhood often involved pretty much starting over.

However, a number of the other kids decided JavaScript wasn’t so damaged that it was beyond help. They realised the gangs were more similar than different and started studying their various quirks. Mediators began to emerge between JavaScript and the various gangs, so you could talk to a mediator and they would figure out what you meant. This was great, except it led to the forming of a new set of gangs. You could join the GWTs or the extjs, the prototypers or the YUIs. But once you’d chosen your path it proved difficult if not impossible to work with the other mediators. They all had a different idea about how to speak both with the gangs and with JavaScript itself. Some of them wanted to “protect” you from JavaScript’s perceived ugliness, and some even tried to pretend it was a whole other thing (say, class-based OO). Others were content just to talk to the gangs on your behalf and leave you to talk to JavaScript directly.

Then one day a new mediator called jQuery appeared on the scene. jQuery wasn’t like the others. Its designers figured that in the browser you mostly cared about exactly three things: handling events, manipulating the DOM, and talking back to your server, and that if they made those things really easy you could probably figure out the rest, including JavaScript itself. jQuery was game-changing in its elegance. (Don’t get me wrong, apparently inside the jQuery house is some pretty shocking JavaScript, but luckily no-one ever visits there.) It made use of familiar constructs, like using CSS selectors to identify elements in the DOM, just like you do. It allowed you to treat the DOM as the great big cuddly hierarchical data structure that it is, offering you the functional constructs of internal iterators, filters, chaining and so on. Suddenly the browser felt like it had been tamed.

In the meantime JavaScript had gone into therapy and emerged as ECMAScript, which meant that if you wanted JavaScript in your neighbourhood at least you had a good idea of how it wanted to behave, even if you weren’t going to let it. This would have been great except it was that barmy committee of uncles that was acting as the therapists, so things moved like treacle. (One of the uncles, Uncle Doug, has some interesting tales about just how slowly things can move.) Meanwhile other events were unfolding.

The browser grows up

Google, the mighty advertising search company, had been slowly and successfully making inroads into browser-based apps with GMail, Google Docs, Maps and other cool technologies. They even have their own Marauders’ Map. Then a few years ago Google moved into the browser space. I’m not an industry pundit so I won’t try to predict what their business strategy was or is, suffice to say it looked like they saw the browser as the platform of the future, and decided they may as well be the dog as the tail.

They were already stretching the browser to the limits of credibility with GMail – a fully-fledged mail, calendar and contacts app in your browser. They had to invent technologies like Google Gears to provide client-side storage, and then persuade you to install them in your browser (which was a pretty easy sell since it gave you offline mail and document editing in your browser). So they created their own browser, called it Chrome, and then decided to do three very smart things in order to make a consistent, high-performance browsing platform as ubiquitous as they needed it.

Firstly they open sourced their browser so people could see what they were up to. This meant other open source efforts, in particular Firefox, could borrow from their technical innovation, which raised the bar for everyone. (Firefox wasn’t waiting around either. With each new release of the browser and its Gecko engine came improvements in rendering times and JavaScript processing speed, as well as better support for emerging standards in the various browser technologies.) Secondly they started to push very strongly for clearer standards across browsers, giving rise to the nebulous HTML 5. Certainly there are other major players involved in HTML 5, not least the venerable Yahoo where Uncle Doug lives, but Google’s involvement created a sense of urgency that was never there before.

Thirdly they realised the browser is more than just a place to render HTML and execute JavaScript. Google Chrome contains a full suite of development tools: a REPL to interact with the JavaScript in the current page, a DOM inspector to traverse the DOM and inspect CSS styles and how they got there (not the DOM that was loaded, but its current state after all your JavaScript manipulations), a network analyser to tell you which page elements are loading (and failing) and how long they took, basically everything the jobbing web developer needs to iterate quickly. Other browsers, notably Firefox, have these available as plugins like Firebug and Web Developer tools, but Chrome gives you them out of the box.

Even Microsoft decided they needed to get on board with the new wave of HTML 5. First they started referring to the emerging HTML 5 standards as “tier 1 supported technologies” which was heresy to the Microsoft Old Guard. Then they quietly dropped Silverlight, which was their competing not-invented-here browser technology (and which of course only works on Windows). After about 100 years of IE6, they released IE7, IE8 and IE9 in rapid succession, each one faster and more standards compliant than its predecessor (although it’s fair to say that IE is still the red-headed stepchild of web conformance, but at least they’re playing the game).

Now HTML 5 is a huge slew of initiatives covering not only CSS, HTML and JavaScript standards but 2D and 3D graphics, full-duplex I/O with WebSockets (and half-duplex with EventSource), sounds, video… In fact if you stand back and squint you could be forgiven for mistaking the HTML 5 ecosystem for an entire operating system. Whose system language is JavaScript.

JavaScript on the server

The next piece of the JavaScript puzzle starts a couple of years ago with a young hacker named Ryan Dahl. He figured we were doing I/O all wrong on the server, and that in today’s multi-core, highly-concurrent world this was never going to scale.

Mostly we do something like this:

context = "today";      // a local variable
data = file.read();     // synchronously read data
process(data, context); // and process it

where the thread doing the read blocks until data has been read from the filesystem and loaded into a buffer. To give some idea of scale, memory is tens or hundreds of times slower than CPU cache (measured in orders of ns), socket I/O is thousands of times slower than memory I/O (orders of μs), and file and network I/O is thousands of times slower than socket I/O (orders of ms). That means your thread could be doing literally millions of things instead of clogging up the place waiting for some data to come back.

An asynchronous version might look more like this:

context = "today";  // a local variable
file.read(          // ask for some data
  function(data) {           // callback is invoked some time later,
    process(data, context);  // still bound to context
  }
);

where the read function immediately returns and allows execution to continue, and at some arbitrary point in the future — when the data is available — that anonymous function is called with a reference to the context variable still available, even if it’s gone out of scope or changed its value since the call to file.read.

Now this is completely foreign to most server-side programmers. For a start the major languages of C#, Java and C++ are various kinds of useless at bindings and closures — and even Python and Ruby aren’t big on callbacks — so most server-side programmers aren’t used to thinking in these terms. Instead they’re quite happy to spin up a few more threads or fork and join the slow stuff. And this is just the first turtle: What if the process function itself is asynchronous and takes a callback?
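Here’s a sketch of what that next turtle looks like. The readFile and processData functions below are stand-ins invented for this example, and they fire their callbacks immediately rather than asynchronously, purely to keep the sketch short and runnable:

```javascript
// Stand-ins for real asynchronous operations: each hands its result
// to a callback instead of returning it. A real implementation would
// invoke the callback later, when the work actually completes.
function readFile(name, callback) {
  callback("contents of " + name);
}

function processData(data, callback) {
  callback(data.toUpperCase());
}

// As soon as each step takes a callback, the calls start to nest.
var result;
readFile("notes.txt", function (data) {
  processData(data, function (processed) {
    result = processed; // two levels deep already
  });
});

console.log(result); // CONTENTS OF NOTES.TXT
```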

However this model is bread and butter to your JavaScript programmer. The entire browser is predicated on a Single Event Loop: There is only one conch shell so if you (or more accurately your function) have it, you can be sure no-one else has. Your concurrent modification woes evaporate. Mutable state can be shared, because it will never be concurrently accessed, even for reading (which is why the whole DOM-in-the-browser thing works so well with event handlers). They are also aware that the price of this is to give back the conch as quickly as you can. Otherwise you bring the whole browser grinding to a halt, and no-one wants that.
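You can watch the single event loop at work in a few node-runnable lines (a sketch; the loop bound is arbitrary busy-work):

```javascript
var order = [];

// Ask for a callback "immediately". It still cannot run until the
// current synchronous code finishes and hands back the event loop.
setTimeout(function () {
  order.push("callback");
  console.log(order); // the callback always comes second
}, 0);

// Meanwhile, hog the conch with some plain synchronous work.
for (var i = 0; i < 1000000; i++) { /* busy */ }
order.push("synchronous work done");
```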

So if you’re going to try to pull off this kind of evented I/O shenanigans on the server, well what better language to use than one where this is the core paradigm? Ryan looked at Google Chrome’s shiny new V8 JavaScript engine and asked: what would it take to get this running on a server, outside of the browser? To be useful it would need access to the filesystem, to sockets and the network, to DNS, to processes, … and that’s probably about it. So he set to work building a server-side environment for V8, and node.js was born.

Fast-forward a couple of years and node.js has grown into a credible server-side container. As well as the core container, the node.js ecosystem contains a saner-than-most package manager called npm, and libraries for everything from Sinatra-like web servers and database connectivity to handlers for protocols like IMAP, SMTP and LDAP. In fact I find most of my time is spent coding in the problem domain rather than trying to wire together all the surrounding infrastructure cruft. (Sometimes I find myself deep in a rat hole trying to figure out why a library isn’t working, but such is the way of open source. At least I can look at the source and add some console debugging.)

It uses native evented I/O on Linux, and until recently would only run with an emulated I/O layer on Windows. Then Microsoft got involved. Yes, the mighty Microsoft is actively aiding and sponsoring development of tiny little node.js to try to produce equivalent performance on Windows using its (completely different) native evented I/O libraries.

The current stable branch of node.js is showing bonkers fast performance on both Linux and Windows, and only seems to be getting faster. (The fact that Google’s V8 team are big fans and are actively considering node.js on the V8 roadmap isn’t doing any harm either: they suddenly received a slew of bug reports and feature requests as the uptake of node.js pushed the V8 engine in new and unexpected ways.)

JavaScript everywhere

And it doesn’t stop there. JavaScript’s serialization form, JSON, is becoming ubiquitous as a lighter-weight alternative to XML for streaming structured data, and NoSQL databases like MongoDB are happily using JSON and JavaScript in the database as a query language. This means, for the first time, you can have the same JavaScript function in the browser, on the server and in the database. Just think for a moment how many times you’ve written the same input validation or data-checking logic in three different technologies. I’ve not been using any database-side JavaScript in the work I’ve been doing, but it is an interesting proposition.
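As a hypothetical example of the kind of logic you could share across all three tiers (the function name and regex here are mine, not from any particular codebase):

```javascript
// One validation function, usable verbatim in the browser, in node,
// and in a JavaScript-aware database. The regex is deliberately
// simplistic; real email validation is notoriously hairier.
function isValidEmail(value) {
  return typeof value === "string" &&
         /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

console.log(isValidEmail("alice@example.com")); // true
console.log(isValidEmail("not-an-email"));      // false
```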

Tablets such as the sadly-but-hopefully-not-forever-demised HP TouchPad and the nascent Google ChromeOS tablets are using JavaScript (and node.js!) as a core technology. Instead of targeting a closed platform like iOS you can simply write a web app — possibly taking advantage of other tablet-specific APIs or event sources such as a GPS or accelerometer — and it will Just Work on your tablet device. Now that’s pretty sweet! Sadly the most open technology is available on the smallest proportion of devices, and vice versa, but you can still reach a lot of people just by writing a slick web application.

One interesting tangent is that a number of folks are considering JavaScript as a compilation target for other languages! (The implication being that JavaScript is so broken that you wouldn’t want to actually write it, but it’s ok as a kind of verbose assembler language for the browser.) The Clojure language has recently spawned ClojureScript, which is a compiler that emits JavaScript so you can run Clojure in your browser, and a new language called CoffeeScript has emerged, which is a sort of “JavaScript, the good parts with a Pythonesque syntax and a nod to Ruby.” It aligns very closely with JavaScript — you can see exactly how your CoffeeScript expressions turn into the equivalent JavaScript — but lets you get away with less syntax. I used CoffeeScript for a short while, and found that the best thing it gave me was an appreciation of how to stick to the good bits of JavaScript. I expect I’ll talk more about why I leapt into CoffeeScript — and why I then decided to decaffeinate — in my next article.

And finally…

One thing I keep noticing is that by being this late to the JavaScript party, a lot of the lumpier language-level problems have been solved, and solved well. One example is a library called underscore.js that describes itself as “the tie to go along with jQuery’s tux.” It elegantly provides missing functional idioms like function chaining and the usual suspects of filter, map, reduce, zip and flatten, in a cross-browser and node-compatible way, as well as some useful methods on object hashes like keys, values, forEach and extend (to merge objects) etc. Similarly a library called async provides clean ways of chaining multiply-nested callbacks, managing multiple fork-joins, firing a function only after so many calls, etc.

As I said at the beginning of this article, I don’t remember feeling so empowered by a technology, in terms of being able to deliver full-stack applications, as I am by the combination of HTML 5 and server-side JavaScript. It’s an exciting time to be developing for the browser — and the web — and it’s never been easier.

101 comments

  1. Great article. I’ve been on a journey of discovery with JavaScript and agree with what you are saying. Having said that, my first impressions of Dart were positive, not least because of the optional typing, which I can actually see being useful in quite a few dynamic languages.

    On JavaScript there are still two main things that bother me. First is as you say the terribly slow rate of progress, on that front hooray for CoffeeScript.

    The second issue that bothers me is code quality. I’ve not done much with node.js yet, but on the client when you dig into the code of some great client-side frameworks, the code quickly becomes pretty difficult to follow. Not because it’s using any JavaScript tricks, but just because it’s not been written with maintainability in mind. Even on this point though I’m optimistic.

    1. I had the pleasure of sitting next to Alex Russell at the Full Frontal conference recently and asked him first what he thought about Dart, and secondly what he thought about its future. He responded positively to the first question, particularly with respect to optional typing, but said that he disagreed with Dart’s founder that we needed to move away from JavaScript; Alex was busy pushing to include some of the nicer features in the next version of JavaScript so I’m keeping my fingers crossed that he is successful.

      Additionally, a note of caution on CoffeeScript; beautiful as it may be I’ve heard it can be a big pain to debug. Check out Ryan Florence’s experiences here http://t.co/Dhr3UNAm

      1. “Dart’s founder that we needed to move away from JavaScript; Alex was busy pushing to include some of the nicer features in the next version of JavaScript so I’m keeping my fingers crossed that he is successful.”

        If thats what happens then that’d be perfect. Hopefully the effect of Dart/CoffeeScript will be not only to push some of the good ideas back into JavaScript but also give them a big kick up the jacksie to speed them up a bit.

        “Additionally, a note of caution on CoffeeScript; beautiful as it may be I’ve heard it can be a big pain to debug. Check out Ryan Florence’s experiences here http://t.co/Dhr3UNAm”

        I’ll have a look at the post but yes, the debugging isn’t great and people do tend to under-sell how big a problem it is; on the other side there’s the fact that Firefox is planning to address the issue.

      2. Sorry for the second reply but there is no editing. Thanks for the link to that article, it is indeed excellent. I loved his point about one-liners. I read a CoffeeScript book over the last week and was worried by the tendency for the code examples to feel tricksy. To be fair lots of public JavaScript code is like that too, but I do share Ryan’s worry about those sorts of one-liners being written by people who honestly believe the result is code that’s easier for subsequent readers to deal with.

  2. To continue on the languages compiling to JS: there is also haXe, which is a statically typed language with the usual classes, but also algebraic data types, anonymous types using structural typing, powerful Lisp-style macros (they look like normal code, are statically typed, operate on ASTs, and run in a virtual environment at compile time with I/O access etc.), a JavaScript-like syntax and type inference. It also compiles out to C++ and PHP, with Java, C# and a couple of other targets coming soon.

  3. Thanks for a great article.

    “I’ve found it in turns frustrating and enlightening, ridiculous and brilliant”
    Aligns perfectly with my discovery of javascript.

    Looking forward to the coffeescript article.

  4. Third paragraph, second line. “Fightnig”

    1. Fixed. Thanks!

  5. Bruce Onder:

    A small but important inaccuracy – Silverlight was designed from inception to be cross-platform, not just a Windows replacement for Flash. Otherwise, an enjoyable summary of the JavaScript ecosystem.

  6. “Now you see that var key at the top of the for loop? That’s not declaring a variable, oh no. It’s saying that somewhere else there’s a variable called key (right at the top of the nearest containing function, it turns out). Oh, and that key variable is visible all through that function, and any functions it contains, not just tucked away in the for loop. Brilliant!”

    This is true of any var definition. It’s called hoisting. All declarations are hoisted up to the top of the current scope. It’s perfectly normal.

    1. I’m pretty sure he understands that but was making a point, namely that hoisting is pretty odd.

  7. I enjoyed the article, but your bit about server side programming is a bit dated. Anybody who does server-side programming and doesn’t do some kind of asynchronous processing is simply following an outdated model. C, C++, and any other server language can provide you with asynchronous processing just fine, and allow for a variety of models with which to achieve it.

    1. I don’t think I was suggesting the single event loop of JavaScript is the only model for asynchronous processing. Just that I find it easier to reason about than thread-based concurrency and that separating slow (I/O-bound) processing from fast in-memory activity seems to scale better in a lot of cases (say thread-per-request vs. evented web servers, the so-called 10k problem of servicing 10,000 concurrent requests).

      1. Henrik Johansson:

        But no modern framework uses thread-per-request anymore. They are using actors or CSP-style paradigms to utilise the underlying evented I/O.

        I think it is wrong to equate an evented programming style with evented I/O. The one does not require or exclude the other.

  8. Dan, I would have thought that you of all people would have paid more attention to testing. The support for testing in JS is appalling. Yes you can use JSUnit or QUnit but try using it on any large system and it quickly falls apart. Using Jasmine and PhantomJS (I reserve Selenium for pure browser testing) is better but try integrating it into Jenkins so that it breaks the build, getting useful stats like line and branch coverage. Even sonar only gives you a subset of what you need. It’s just not good enough for a large system. I’m working on a large project right now where our single biggest pain point is JavaScript testability.

    And as for node.js, an evented model is effectively single-threaded and therefore single-cored. If you want to use more cores then you need to spawn worker processes, and you are back in the world of threading with all its pain and complexity. Why not choose a paradigm like Actors where you can have a single paradigm that deals with both events and threading issues?

    1. Hi Martin, thanks for commenting. I’ll take your two points separately.

      Firstly, with respect to testing I find I’m writing fewer TDD-style tests these days in favour of rapidly iterating and manually testing the components of my apps. I still write tests for places I’m likely to screw up, such as anywhere I transform data structures or unpack someone else’s data (say an inbound IMAP payload). For those I find nodeunit gives me everything I need. When I want to test callbacks, timeouts and other asynchronous behaviour I use the excellent sinon.js to mess around with time. It does other things too, but I mostly use it for that. I’m planning to expand on how I actually code, test, deploy and debug in JavaScript in a follow-up article.

      Secondly a single event loop doesn’t imply single threading or a single core. It simply means there is only one thread that has access to visible program state at any one time. Other threads are taking care of the background work of reading and writing I/O, listening for OS events, etc. Whenever something interesting happens – data being ready on an input stream, or a new client connecting on a socket, say – they notify the main loop and it lets you know by way of your event listeners or callbacks.

      For instance an evented web server can happily be receiving data from thousands of connections concurrently, but each one will have to wait its turn for the main event thread for you to “process” the request. Fortunately this is usually super fast relative to all the I/O going on so you don’t really notice. (Typically you might want to fetch some data to return in the response, but this fetching will happen in the background on a worker thread somewhere, freeing up the main event loop to service the next request.)

      1. Dan, my first point was against JavaScript as a whole. You mention nodeunit and sinon but neither of these is helpful with browser-based JavaScript. No modern language or platform is going to be seriously considered unless it comes with a good ecosystem of testing frameworks out of the box, and things like nodeunit and mocha mean that this is available for server-side focussed JS on node but not browser-based JavaScript, which is the vast majority of JS code out there.

        Secondly, maybe I was not clear enough. The event loop model is intrinsically single threaded with regards to the application logic as you yourself mention. This paradigm works well for simple scatter gather tasks which is Node.js’s sweetspot but fails pretty badly when any other sort of concurrency is required. Any easily parallelisable tasks, e.g. map or flatten operations, will be run in sequence and will block all activity for the application.

        “Fortunately this is usually super fast relative to all the I/O going on so you don’t really notice. ” Your use case may vary but I have a feeling that we are going to be seeing the limitations of this paradigm catching people out, especially when an application that starts out not needing much CPU processing gets a requirement for just that.

        Look forward to the next article.

      2. Hello Dan,
        You wrote:
        “Firstly, with respect to testing I find I’m writing fewer TDD-style tests these days in favour of rapidly iterating and manually testing the components of my apps.”

        I applaud your honesty. The whole TDD/BDD thing has been blown out of proportion by the consulting crowd. It is useful; just like any other suite of tools/techniques, but not an end in itself.

        I would like you to write more in detail about using Python more than Ruby since I know you have been a Rubyist in the past. Specifically, what are the developer centric things that you like in Python more than Ruby. I am not that concerned about explaining my code to others, but care about explaining it to myself. I know Ruby I do not know Python.
        Bharat

    2. Check out my JS Test Runner as it may help you test on a large scale. We run our QUnit tests via Maven and JS Test Runner and thus have CI happening. You could easily invoke JS Test Runner from other build scripts: http://js-testrunner.codehaus.org/

  9. Welcome to the party Dan :-)
    It’s amazing that you can have an entire technology stack with just one programming language, and at the same time it’s the world’s best stack for the type of web applications I’m building, both in how easy it is to work with and in how well it performs and scales. And to top it all off, the entire stack is open source and license free. I’m talking about Node.JS and MongoDB of course.

    There is a pitfall though. It’s pretty hard to create an overall sound architecture, with code that has great structure and remains maintainable with large development teams. We really need more people sharing good proven practices.

    For anyone interested, I’ve written a short post about our architecture using MongoDB, Node.JS and a CDN here:
    http://www.muscula.com/architecture

  10. Believe it or not, JavaScript is playing a major role in the spread of artificial intelligence around the world. JavaScript is especially suited for this role because Netizens need only click on a link like http://www.scn.org/~mentifex/Dushka.html to run the AI program that would otherwise involve all manner of complicated set-up and special knowledge.

  11. My current project is a single-page browser app that would probably have been specified as a desktop client until pretty recently. We’re using backbone.js, Underscore, jQuery and Google Closure Templates, which between them give us a really productive environment to work in.

    Agree entirely with Martin’s comment – unit testing is surely the weakest link at the moment. Apparently there are 30 testing libraries available (http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#JavaScript), but they mostly seem to be solving the same narrow set of problems. We ended up writing our own JUnit (with Rhino) test runner to wrap our QUnit tests.

  12. Great article. In terms of quirkiness, don’t forget http://wtfjs.com/ and Crockford’s JavaScript: The Good Parts (which reads more like The Bad Parts). But it’s definitely a workable language and I’ve found CoffeeScript a fantastic productivity booster.

  13. Very well written. Programming was always interesting, but writing about it in an interesting way is a challenge and this post has done it. Thanks for sharing.

  14. JavaScript is a fantastic language, but I never again want to write any significant amount of it without a better syntax — that is, without CoffeeScript. The Java-like syntax simply isn’t well suited to the very un-Java-like language. I was with you right up until the point where you decided to “decaffeinate”. With the classic syntax, you can either write readable code or you can write good code — generally not both at once. With CoffeeScript, good idioms become readable because the language encapsulates them.

    Anyone writing straight JavaScript in 2011 is either working a lot harder than he needs to, or he is producing less good code than he should be producing.

    1. I disagree – while I like CoffeeScript’s intent, its execution doesn’t suit me. Particularly, I dislike the de-emphasis of prototypical inheritance in favour of classical inheritance. The prototypical nature of JavaScript, together with support for closures, is what makes it bearable.

      1. I kind of dislike the deemphasis of prototype inheritance in CoffeeScript too, but that’s only one feature in an otherwise excellent language, and you’re free not to use the classical features if you don’t want to. What makes JS “bearable” is closures and functional programming, not its prototype inheritance.

        (Personally, I agree with JavaScript: The Good Parts in saying that JavaScript’s prototype inheritance is poorly implemented. It needs syntactic sugar on top of it, whether that’s a better prototype layer or classical inheritance.)
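        For readers following along, a minimal sketch of the prototype mechanism under discussion (the object names here are invented for illustration):

```javascript
// A plain object acts as the "parent"; no class declaration is needed.
var animal = {
  init: function (name) { this.name = name; return this; },
  describe: function () { return this.name + " the " + this.kind; }
};

// Object.create returns a new object whose prototype is `animal`.
var dog = Object.create(animal);
dog.kind = "dog";                          // extends the parent
dog.bark = function () { return "woof"; };

var rex = Object.create(dog).init("Rex");

console.log(rex.describe()); // "Rex the dog" -- resolved via the prototype chain
console.log(rex.bark());     // "woof"
```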

    2. “With CoffeeScript, good idioms become readable because the language encapsulates them.”

      I am sure CoffeeScript is a boon for many people. I personally find it intolerable. Its indentation-based bounding construct creates ambiguity and adds effort in visual parsing for me. On the other hand, braces and literal bounds are clear and easy for me to read. Again – for me. I’m certainly not going to argue that such a popular tool is without value, and helpful to some, but I personally have no problem reading and writing JavaScript in its native form.

      Braces, parentheses and operators make perfect sense for me to communicate clearly with a computer, just as speaking English makes perfect sense for me to communicate clearly with another human being. If you’re more comfortable with some substitutions between you and that native language, who am I to argue? But I can assure you that just because you find raw JavaScript difficult to read, doesn’t mean it’s inherently less readable, or that those who choose to use it are inherently less productive.

      Javascript is a pretty unusual language for people coming from backgrounds of most other languages, to be sure. But its uniqueness is what makes it so fascinating and fun, too.

      1. “Braces, parenthesis and operators make perfect sense for me to communicate clearly with a computer”

        They do for me in certain languages as well. I wouldn’t want a CoffeeScript-like syntax for C, I think. The problem is that functional programming in JavaScript — in other words, one of the things that JS is best at — needs far too many braces, which makes it a pain to type and read. Compare the syntax of functional constructs in JS with similar constructs in Ruby.

        Also, CoffeeScript hides things that have no legitimate usage, like JS’s horrible abortion of an == operator. That’s what I meant about encapsulating good idioms.
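        The == coercion surprises alluded to here are easy to demonstrate (CoffeeScript compiles its own == to JavaScript’s strict ===, which sidesteps them):

```javascript
// Loose equality coerces operands in surprising, non-transitive ways.
console.log(0 == "");           // true  -- "" coerces to the number 0
console.log(0 == "0");          // true  -- "0" coerces to the number 0
console.log("" == "0");         // false -- two strings, compared directly
console.log(null == undefined); // true, yet null == 0 is false

// Strict equality compares type and value with no coercion.
console.log(0 === "");          // false
console.log("" === "0");        // false
```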

        “Javascript is a pretty unusual language for people coming from backgrounds of most other languages, to be sure. But its uniqueness is what makes it so fascinating and fun, too.”

        Absolutely. Its uniqueness is not well supported by the traditional syntax, which makes it much easier to write JS as if it were an inferior Java or C — which isn’t good JavaScript. CoffeeScript is the same language with a better syntax — a syntax that encourages proper use of the language constructs. That’s why I use it. That’s why I believe every serious JavaScript developer should use it.

  15. I’m actually pretty disappointed that things which used to be desktop apps are now being written as browser apps, in JavaScript and other related technologies. The responsiveness of these apps is very poor when compared to a native app. It seems to me that too many developers are forgetting about responsiveness and overall I find a lot of UI experience is becoming worse because of it.

    1. Anonymous coward

      I think you just had bad luck. I was doing both web apps using rich clients written in Javascript and WinForms-based desktop apps using VB.Net or C# over the last few years, and consistently found the web apps to have a shorter startup time and similar latency. Of course, in the case of web apps it also depends on hosting – if your server is slow and your bandwidth limited, your app will be sluggish.

      1. Nope. I’ve used a lot of apps and inevitably they all fail in terms of responsiveness. Even take this simple “Reply” button I pressed here — there is a noticeable delay between clicking and having something happen. The load times are also quite awful everywhere. Try to quickly load up an app? Not a chance – wait for it to load thousands of resources from various places and then watch the control dance as the browser draws everything.

        We also aren’t talking about complex interfaces. The fact that a series of basic controls does not have a 0ms response time is awful. We’ve taken a huge step backwards in this area.

        I use a very recent Firefox version on Linux, so it’s not like I’m just suffering from old technology either.

      2. “Even take this simple “Reply” button”

        So are you saying, if you had a WPF application that connected to a central commenting system over the internet when you clicked a button, that it would be faster?

        This isn’t a question of the technology. It’s a question of the kind of application. You are comparing apples and oranges. I’ve used plenty of god-awful slow client/server applications running on a local network in my time. And I’ve used plenty of snappy web applications, too.

        If this commenting system is slow, it has nothing to do with the nature of web technology:

        http://jsfiddle.net/eEWHu/4/

        Do you find the presentation of the form laggy in this example?

        Connecting to a database has latency. If that database is busy, more so. It doesn’t matter if the client is a WPF application or a web application. You can design web applications to avoid unneeded requests, same as any other client/server application.

      3. What I’m saying would be faster is the controls aspect. There is no reason there should be any delay when I press the “Reply” button: I haven’t written anything yet, thus there is no need to connect to a server. On the page you provide, simply dragging one of the panel sliders shows a significant delay: the border drags noticeably behind the mouse cursor.

        Also on that page is the other thing that bothers me, excessive net access. Every button along the top seems to do a server request. This is overhead that doesn’t exist in client-side apps: they are capable of doing things on the local computer, whereas too many web apps have to request everything from the server.

      4. The page that I provided is also an extremely popular playground whose server is incapable of handling the demand. The example wasn’t of jsfiddle itself, but of a client-only demo I created within it. I was trying to demonstrate that if you write code that doesn’t access the server, then it’s no more or less laggy than a client app that doesn’t try to access the server.

        The fact that all the buttons at the top require net access is a great example of an app that could be designed differently. The “Run” button shouldn’t need net access, and in fact, that’s something frustrating about using jsfiddle because it’s so popular now. On the other hand, obviously the “Save” button should.

        I don’t have any delay on my machine when sliding the size controls. Perhaps you are using an old web browser and/or an old PC.

    2. I’ve had a similar experience to Anonymous Coward. I’m finding browser-based applications to be at least as responsive as rich desktop apps. The WebSocket and EventSource APIs allow a browser to easily handle many hundreds of server-originated events per second, and with a bit of attention to design you can create a really snappy – and even pre-emptive – user experience.

      Having said that I’m primarily using and targeting Google Chrome and recent Firefox editions on Linux and Windows, so I can’t speak for the performance of IE or Mac browsers. My domain is internal business applications so I haven’t felt the performance or development pain of targeting a complex grid of compatibility targets or deploying a publicly-accessible web application. Hopefully as HTML5 takes hold (which may accelerate now Microsoft is pushing out auto-installing browser updates) compatibility matrix testing will become at least a bit less painful.

  16. […] The rise and rise of JavaScript – Dan North discusses the recent rise in popularity of the JavaScript language, discussing many of the places where it is cropping up these days, discussing the improvement in techniques and ways of working with the language and looking to the future of JavaScript. […]

  17. Your comments relating to C# are out-of-date. Since C# 4.0, we’ve had support for closures which improves things.
    C# 5’s async support takes things one stage further, you’ll write code as if it was synchronous, and the compiler will insert continuations that behave as if you’d written it using async techniques. It just ends up looking less convoluted. This will change the example:

    context = "today";  // a local variable
    file.read(          // ask for some data
      function(data) {           // callback is invoked some time later,
        process(data, context);  // still bound to context
      }
    );
    

    To something like:

    context = "today";
    var data = await file.read();  // Note await keyword
    process(data, context);
    

    That seems like a big step in the right direction, especially when a function may involve multiple async calls.

    1. When I said they were “various kinds of useless” I was suggesting they aren’t all the same. C# 5 may support async processing but the vast majority of C# in the wild is in C# 2, 3 and 4. C# may well be the least worst curly bracket language in this regard, but as Michael Palin said when he was told he was the least funny Python: “that’s a bit like being the least violent of the Kray twins.”

      1. It’s still a mischaracterization. C# has had anonymous functions (delegates) with true closures since C# 2. The syntax got nicer in C# 3 and, as Rob G pointed out above, it gets even nicer again (even nicer than JavaScript, I’d argue) for async operations in C# 5.

        I’m a JavaScript fan and plenty of your other points are valid, but it is just incorrect to state that C# is at some degree of uselessness when it comes to closures and working with async operations.

  18. Great article. Just a correction on Silverlight: It isn’t/wasn’t only compatible with Windows. It ran on OS X as well.

  19. Very informative and entertaining. One of the best articles.

  20. Javascript has evolved indeed, but the combination of HTML, JavaScript and CSS for writing apps, not websites, just feels wrong. The whole mix still seems like spaghetti code. I’ll always choose XAML+C# over that outdated crap…

    1. Love the outdated comment, how wrong can you be :P

    2. Anonymous coward

      Why don’t you choose a framework which abstracts the mess? I’m thinking of sproutcore, smartclient, ext, qooxdoo (my personal favourite) and many more. Why do I like qooxdoo most? It’s the only framework I know which completely shields you from HTML and CSS.

      1. Why is shielding you from HTML and CSS a good thing? Sounds pretty bad to me…

  21. Anonymous coward

    IMO, you give too much credit to jQuery. It comes with a call syntax which I’d say is awkward at best, stemming from something which isn’t JavaScript – HTML’s CSS selectors. It is not really useful for web apps, because it has no well-integrated library of UI components, but just for highly dynamic web sites, and it does not abstract HTML or CSS in any way. It just helped in using what was already there, instead of being a complete game changer.

    The real revolution was IMO something much more basic: IIRC in 2006 or so XHTML got standardized. Even if IE6 didn’t get it right until much later, it provided an alternative way, compatible in every way except XHR creation – which is what made jQuery.ajax() possible.

    1. “jQuery is not really useful for web apps because …”
      Wow – hard to believe I’m hearing that! I agree 100% with the author – jQuery was and still is a game changer.
      My only point of difference would be that Prototype was in many ways the trailblazer that jQuery based itself on.
      CSS/DOM selectors can be used so elegantly it’s amazing.

    2. jQuery is fantastic for Web apps. You don’t need a library of canned UI components; rather, you need the tools to build them. jQuery makes those tools usable.

  22. “Google has even gone to the trouble of coming up with an entire new language for the browser to try to appeal to the Java- and C#-loving, class-conscious (ahem) programmer.”

    Not to nitpick, but I really need to clear up this common misconception surrounding Google’s efforts to build tools for browser development. You could be referring to one of two different things here, so I’ll address one at a time.

    If you’re referring to tools like Closure and GWT, the primary goals of both of these systems are most emphatically *not* to “protect” developers from Javascript per se (though a secondary goal of both is to help catch common mistakes). Rather, both tools (and their associated languages) were created in order to make it possible to statically analyze client code so that it can be heavily optimized before being sent to the client. Google was one of the first companies to bump up against the performance problems (both due to network latency and parse time on the client) of large Javascript code bases, and these tools became necessary in order to ship efficient apps. Static analysis of raw Javascript is impossible in many cases, and in the limited cases where it *is* possible, it’s extremely expensive for large code bases.

    If, on the other hand, you’re referring to Dart, the answer’s slightly different. While at its core it’s a dynamic language, it has a number of properties that make it much easier to analyze statically than Javascript, which helps serve the same goals described above. It was also designed with the explicit goal of enabling a much more efficient VM, by the people who bear the scars of trying to optimize Javascript in V8 over the years. And while Dart does support a more traditional class structure (very similar to Smalltalk), it is actually quite similar to Javascript in that it allows you to stick with a more “structs, functions, and closures” approach if you so desire.

    It’s tempting to assume that, because these tools and languages differ from Javascript in their semantics, their semantic differences are their raisons d’être. But you have to consider the entire set of problems they have to solve before jumping to that conclusion.

    1. Hi Joel, thanks for your comments. The phrase “entire new language” is a link that points to http://www.dartlang.org/, so yes I was referring to Dart specifically. (The WordPress theme I’m using doesn’t seem to show links very well.)

      I hope it came across in the article but I have huge respect for what Google has done for web development. In particular pushing for web standards and building Chrome and V8. I saw a talk by Eric Corry, one of the V8 core team, talking about some of the crazy optimisations and runtime heuristics they have built into it (like figuring out whether you’re using an object as an object instance or a map, and switching out its underlying implementation accordingly!).

      However I would much rather see all the design energy that’s gone into Dart applied to evolving JavaScript (deprecating some of the more obscure edge cases, rolling out proper collections and functional idioms so we no longer need underscore.js or sugar.js, making some of the more common asynchronous idioms easier). I Am Not A Language Designer, so I don’t know how viable this approach would be, but my first look at Dart made me a sad panda.
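      To be fair, some of that evolution has already landed: the ES5 array methods cover many of the collection idioms that underscore.js popularised. A small sketch (the data here is made up):

```javascript
var scores = [12, 7, 19, 3, 15];

// map / filter / reduce are native in ES5 -- no underscore.js required.
var doubled = scores.map(function (n) { return n * 2; });
var high    = scores.filter(function (n) { return n > 10; });
var total   = scores.reduce(function (sum, n) { return sum + n; }, 0);

console.log(doubled); // [24, 14, 38, 6, 30]
console.log(high);    // [12, 19, 15]
console.log(total);   // 56
```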

      1. Thanks for following up. I won’t make claims of being a language or VM designer either, but I play one on TV (ok, I just hang out with them occasionally).

        The way I’d characterize the relationship between Javascript and Dart is this — in the interest of doing the “best thing” for the web, we have to try multiple avenues. There are plenty of people at Google working to improve Javascript, both from the language and VM standpoint. And there are others (including the original creators of V8) who fear that we’ve reached the point of diminishing returns on improving the VM, and that the design of Javascript makes it impossible to do much better.

        The reality is that no one can predict the outcome of something this complex, and either approach could turn out to be the best, given all the constraints on the problem. The only way to know if you’re getting closer to a global maximum is to strike out into new design space. Given sufficient resources, I believe the greatest mistake would be to presume that we *can’t* try anything new to fix the very real problems we find ourselves faced with.

  23. […] I’ve been using JavaScript for a while now, but only really programming in anger with it during the last year. I’ve found it in turns frustrating and enlightening, ridiculous and brilliant. I have never felt so empowered by a language and its ecosystem, so I thought I’d take some time to write about why that is. I’m starting with a ramble through the history of JavaScript, or rather my undoubtedly inaccurate understanding of it, to provide some context for where and how I’ve been using it.    Javascript Read the original post on DZone… […]

  24. As a red-headed, step-child, I am mortally offended by this post.

    Other than that. Very good.

    1. Thanks Jamie. Of course I meant no offence – some of my best friends know red-headed step-children :)

  25. Thoroughly enjoyed this article, thanks :)

    I’ve been through the same journey with CoffeeScript; tried it out, liked it, started writing a project with it and then realised it’s far more prudent (and more comforting) to just write good JavaScript. I promptly rewrote the project back into JS and swore never to be unfaithful again ;)

  26. Can you guys please take a look at the history of ActionScript without starting to twitch? Like how Adobe added classes to AS1 and made AS2, how AS3 was a revolution into something better, and how that completely invigorated the rich media / online games world. I know it’s not done to mention Flash in a JS crowd, but it’s a bit shocking how years and years of fast evolution are more or less ignored. (It’s kinda cute how you people are discovering easing animations, the virtue of timelines, nested content and typed languages. :)

  27. […] The rise and rise of JavaScript « DanNorth.net […]

  28. Hi Dan,

    I have a question that is not really related to this article :)

    I remember you recommended three books in one of your presentations.
    The first one was “Pragmatic Thinking and Learning”, the second one was “Working Effectively with Legacy Code”.

    Could you please share with me the title of the third book?

    btw, enjoyed this article,

    Thanks

    1. Hi Mohammad,

      I often recommend books in my talks, and the recommendations vary with the talks. In particular I often recommend those two, because they are both great books! Can you remember which talk it was?

      1. Hi Dan,

        your talk was : Patterns of Effective Delivery

        here is a link for it : http://vimeo.com/24681032

        the book name was mentioned around minutes 38–39 of the talk. The book is about the “Theory of Constraints”, as you mentioned.

      2. In that case it was probably The Goal by Eli Goldratt. It’s one of the best management and organisational books I’ve ever read.

  29. “After about 100 years of IE6 they released IE7, IE8 and IE9”…

    That made my day, I died laughing. Thanks :)

  30. If only JavaScript/ECMAScript supported require/import statements…

    1. Of all the shortcomings of JS, the lack of require/import as a native construct is pretty much a non-issue. It can be coded in a few lines to something as simple as

      require('script.js');

      The details of any implementation will depend on your environment (web? server? namespaced?) and possibly some framework, anyway.
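      A sketch of such a helper, with the environment-specific loading step left pluggable (the function names and the script-tag loader are illustrative assumptions, not any particular library):

```javascript
// makeRequire builds a require-style helper around an environment-specific
// loader; the cache ensures each script is requested at most once.
function makeRequire(load) {
  var loaded = {}; // urls we have already requested
  return function (url) {
    if (!loaded[url]) {
      loaded[url] = true;
      load(url);
    }
  };
}

// One possible browser-flavoured loader: inject a <script> tag.
// (A server-side loader would read from disk instead.)
function browserLoad(url) {
  var script = document.createElement("script");
  script.src = url;
  document.head.appendChild(script);
}

// Usage: var require = makeRequire(browserLoad); require("script.js");
```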

      1. Sorry, posted in the wrong place. Please take a look at my next comment… :(

  31. Exactly, Jamie. Something so simple to implement in the language itself, without any shortcoming, instead requires some framework for dealing with it (LabJS, RequireJS, etc), which also means more bandwidth usage.

    Minimizer and concatenation tools would also be able to work out what to do just by looking at the require statements. And there would be no need for frameworks like LabJS or RequireJS, as browsers would be able to properly optimize script downloads or make the options available in the browser’s options interface. The defaults could differ depending on the environment the browser is running in.

    I find this lack of a specific statement really the worst shortcoming of JS.

  32. My point is, Javascript was basically thrown together as a feature for Netscape Navigator 15 years ago. Many of the ways people use includes (and many other features, e.g. XHR) weren’t even possible or imagined when it was designed. The web and web browsers were wholly different beasts. Javascript at its core is essentially unchanged since then.

    The amount of code needed for a basic “require” function is minimal, and far less even than that needed to implement basic cross-browser functionality. IE6, which is newer than Javascript, doesn’t support “String.prototype.trim”.

    You seem to wish that JS didn’t need frameworks to do everything. Sadly, that is not the case. Maybe in a decade things will be better. In the meantime, the need for clients to download frameworks is not that big a deal. A typical JS framework is probably smaller than a single image appearing on a typical web site, and if it’s something like jQuery, it’s already cached by the client. The fact that this isn’t a big deal should be evident from the many awesome web sites out there.

    1. You didn’t get my point. require/import is common to all other programming languages I’ve worked with, even if it is implemented as a macro, as in C/C++. This should be implemented at the language level, not with some framework, since this is a common requirement for code organization.

      Dealing with DOM, AJAX, etc is browser-specific and it is ok to use external libraries/frameworks for dealing with JS in browsers. But code organization is an environment-agnostic common requirement.

  33. I find it amazing that anyone could suggest that the lack of a formal module and dependency system is a non-issue in Javascript, especially when you take the browser into account. There are several serious shortcomings with existing systems that are next to impossible to fix without changing the runtime system:

    – There’s no good way to avoid order-dependency in initialization performed in imports.
    – If you get the order wrong, things just break in obscure ways.
    – Nothing like the require('foo.js') mechanism used by Node and other systems can be implemented sanely in the browser.
    – To implement precisely that approach, you need to use synchronous HTTP requests, which block the UI thread *and* may well be removed from future browsers (because they’re such an incredibly bad idea on the main thread).
    – You could switch to a callback-based approach, which is both extraordinarily awkward, and different from the mechanism used on the server, making shared client/server code difficult.
    – There’s an inherent tension between what you want at development-time (fine-grained modules), and what you want to deploy (the minimum number of HTTP requests).

    The closest thing to a sane approach I’ve seen, at least for client code, is used by the Closure compiler. goog.require() calls (which are formally required to be top-level statements, making them semantically equivalent to declarations, not function calls) are understood by the compiler for optimizing the deployed artifact. But it’s still awkward, because in an HTML page you have to separate your requires statements from your “entry-point” code (to avoid the asynchrony problem), you still have the dependency-order problem, and there are limits to what you can do in top-level statements (probably a good idea in general, but it can be tough to know what’s legal). To make matters worse, with a large enough code base you can still end up with really long startup times when developing directly in the browser (i.e., without running the compiler) because of the massive number of HTTP requests (even “requests” to the local filesystem eventually become slow because of all the infrastructure involved).

    I did some digging a few weeks back, trying to find some “acceptably standard” way to deal with Javascript dependencies across server and client code using Node, and was quite surprised at how little there was to work with. There seemed to be a few half-finished projects that tried to unify the client-side and server-side mechanisms (mostly by preprocessing the code served to clients to eliminate the synchronous require() calls), but nothing that seemed to work in practice. If anyone’s aware of a system that actually works, and is in common use, please chime in!

  34. Wow. I had no idea that people struggled so much with dependencies in Javascript. I have only been developing in JS actively for about 18 months or so and this was one of the first things I dealt with as part of getting myself set up with a useful toolbox.

    Here’s how I do it. Bear in mind that I am sure there are established, full-fledged tools which do exactly the same thing, but it was so simple to set up a framework that worked just fine, I never bothered to research it extensively.

    1) Use namespaces and modules.

    2) Create a function “requires” which accepts a CSV or array or whatever you want that identifies the dependencies for a module.

    3) Name your scripts the same as your namespaces, e.g. “jamie.app.module1” would be “jamie.app.module1.js”.

    4) In the constructor for each module, simply call:

    requires("jquery, jamie.app.toolbox", function() {
      … normal module init
    });

    My “requires” implementation knows about external scripts (like jquery) from a config section, and anything else is assumed to be named the same as the namespace. It simply checks for the existence of the namespace, and if not present, loads the script. Order does not matter. Scripts will never be loaded more than once because if the namespace already exists, it just returns.

    It’s really pretty darn easy.
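    The core of this scheme, the namespace-existence check, can be sketched as follows. (Names are invented; the script-loading step is left as a parameter since it depends on the environment, and a real version would wait for loads to finish before calling the init callback.)

```javascript
// Walk a dotted name like "jamie.app.toolbox" from a root object and
// report whether every segment is already defined.
function namespaceExists(name, root) {
  var node = root;
  var parts = name.split(".");
  for (var i = 0; i < parts.length; i++) {
    if (node == null || typeof node !== "object" || !(parts[i] in node)) {
      return false;
    }
    node = node[parts[i]];
  }
  return true;
}

// Load only the modules whose namespace is missing, then run the
// module's init callback.
function requires(names, root, loadScript, init) {
  names.split(/\s*,\s*/).forEach(function (name) {
    if (!namespaceExists(name, root)) {
      loadScript(name + ".js"); // e.g. "jamie.app.toolbox.js"
    }
  });
  init();
}
```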

    1. Oh – the one downside to loading scripts this way is debugging is a little trickier (e.g. in chrome, they don’t appear in the list of loaded scripts so you can’t directly set breakpoints).

      To deal with this, just conditionally include everything that could appear on a page or site area in your site in development code or debug mode (however your server-side programming environment deals with this). So when developing, everything will be loaded. The “requires” does nothing in this case, since the scripts are already loaded (and the namespace defined). In your production environment, don’t include anything on each page except stuff that every page uses (e.g. jquery) and let “requires” load what it needs.

      1. Yeah, Jamie, pretty simple. I don’t have any idea why I ever wanted a require statement in the JS language. Completely unneeded…

      2. Rodrigo, all I said was that among javascript’s shortcomings, this one is not a big deal. I would love it if javascript had all the functionality of a server-side, continuously evolving framework like asp.net or something. It doesn’t. It never will. It is what it is. It’s a language that has defined a standard as it evolves and needs to work in a whole bunch of environments developed by completely different vendors.

        I’ve just showed you an easy way to deal with this, take it or leave it. Writing a function that does nothing more than “if (!(namespace exists)) { load a script of the same name }” is easy. There are much bigger issues with javascript that cannot be worked around with a few lines of code.

  35. Jamie, here is why I consider it the most important one. Most of JS’s shortcomings (in my opinion, of course) are handled by CoffeeScript. So while using CS I can write less bloated code, but CS can’t work around the lack of a require statement, since it needs to compile to JS in the end.

    About the require/namespace issue, I would rather use LabJS, RequireJS, YepNope or any other framework like that than develop my own solution. There are lots of edge cases when dealing with require in JS. Your dependencies may depend on other JS files as well, and there could even be circular dependencies. I won’t list all the situations that must be handled by those libraries, but this certainly is not as trivial as you might expect. It is far from trivial, actually, and you’ll find lots of documentation on the web from the implementors of such libraries on how they achieve the require feature in JS.

    1. So which is it, a CoffeeScript shortcoming or a JavaScript shortcoming? Your initial comment had nothing to do with CoffeeScript.

      Why do you object to using existing solutions like RequireJS if you have no interest in rolling your own? I just find it difficult to understand this as such a showstopper of a problem. It’s not hard to code your own solution (if you are so inclined). Alternatively, as you have pointed out, there are plenty of existing solutions, too. JS has lots of other problems, but you seem to be OK with using CoffeeScript to deal with those… is there something wrong with RequireJS? Do you use jQuery, or any other framework? Or are you trying to write code that has no external dependencies? I just don’t get it.

      1. I’m not against using any existing solution for this. I would rather prefer not to have to use such solution if JS supported this in the first place.

        About the lack of a require statement, it is JS’s fault, not CS’s. How could CS possibly implement such a statement if it has to compile to JS?

        I’ve only mentioned CS to state that most of the other shortcomings of JS can be handled somehow by another language targeting JS, but such languages are not able to compile to pure JS and support require statements without making you include other JS files in your pages. That was my argument for why I find this specific shortcoming the biggest one in JS.

        I do use jQuery, Knockout and many other dependencies, and I do program in both JS and CS regularly. I just find that dealing with dependencies in JS shouldn’t require an additional library just for that. I was commenting on the main article, where JS is presented as a great solution, while I find it is actually widely used due to a lack of alternatives rather than because it is a good option… I would certainly write my client-side code in Ruby if all browsers supported Ruby on the client side…

        But I wouldn’t hate JS so much if it just supported require/import statements.

  36. OK, well, I guess we each have our own thorns in our sides. Having one more function/module on top of the dozens I already use in any given app seems like, well, part of life with Javascript to me…

  37. Nice to see more people embracing JavaScript – being honest about its flaws but not claiming the sky is falling because of them. That said, I’m not sure jQuery was the game changer for the client side… it’s certainly a major factor in making it useful, but it grew out of a legacy of other JS libraries like scriptaculous and moo.

    I’m starting to explore node.js but – even with a decade of JS under my belt – do sometimes find the way it works a little strange, but I think it’s worth persevering. Really interesting to see where it goes now it’s available on Windows Azure… wonder if it’ll appear on AppEngine as well?

    Debugging JS has proven to be an interesting challenge, especially out in the wild, which is why for a couple of my projects I built a framework to help manage it. That’s now available more widely (open source and free) at http://jsErrLog.AppSpot.com (always happy to have new users or contributors to the github project)

    It’s going to be interesting to see how JS continues to grow over time – with it being a first-class citizen in Windows 8, and with good browser implementations on all the mobile platforms, it offers a good alternative to restrictive app store models, or, coupled with wrappers like PhoneGap, a great way to target multiple devices with a common code base…

    I would take a small exception to the Silverlight comment – it’s very much cross-platform, working in all major browsers on both Windows and OSX (and with the Moonlight fork going strong on Linux as well), and while it seems to be coming to the end of its current form, it’s still being supported for the next decade, so where it makes sense to use it it’s still a very viable choice.

    1. > Debugging js has proven to be an interesting challenge, especially out in the wild,
      There is also:

      http://muscula.com (commercial product in private beta – I’m part of it, Logs errors on iPad, iPhone etc.)

      http://proxino.com (commercial, expensive and runs through a proxy server)

      http://www.erroralerts.com/ (free)

      http://damnit.jupiterit.com/ (free)

  38. […] The rise and rise of JavaScript « DanNorth.net. […]

  39. JavaScript should implement proper OOP, otherwise it is garbage.
    Try working with different frameworks when everybody has their own ideas about OOP in JavaScript and you will see how painful it is. Try to create a big project without using OOP, just prototyping in JavaScript, and you will see how hard it is to maintain and for other developers to extend.

    1. I also find OO programming more maintainable, and this is one more reason for me to adopt CoffeeScript. It provides a common syntax for OOP, so you aren’t fighting with other developers over the best way of implementing OOP in JS…
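
       Concretely, this is the sort of boilerplate CoffeeScript’s class syntax generates for you – written out by hand below as a sketch (the names are illustrative), and exactly the part teams end up arguing about:

```javascript
// One common hand-rolled inheritance pattern in plain JavaScript --
// roughly what CoffeeScript's `class` compiles down to.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a noise';
};

function Dog(name) {
  Animal.call(this, name); // invoke the "super" constructor
}
Dog.prototype = Object.create(Animal.prototype); // inherit
Dog.prototype.constructor = Dog;
Dog.prototype.speak = function () { // override
  return this.name + ' barks';
};

var rex = new Dog('Rex');
console.log(rex.speak());           // Rex barks
console.log(rex instanceof Animal); // true
```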

  40. Nice article indeed. Since compiling to JS is so popular these days, I thought I should mention Amber:

    http://www.amber-lang.net

    It not only compiles to JS and is a full Smalltalk, it also has an IDE running directly in the browser, which I think is what makes it unique. Interacting with JS and JS libraries is very transparent. It also has a command-line compiler, so you can easily use it to build node apps etc. – examples are in the git clone. If you find it interesting, do show up on #amber-lang at freenode.

    cheers, Göran

  41. Sergio Kas:

    And don’t forget underscore.string (http://epeli.github.com/underscore.string/) which addresses one of the missing JavaScript features that I miss the most: proper string manipulation.

  42. Dan,

    What about more conventional web sites with larger page sizes using templating? From my own tests it seems that, even if an API call to get the data is asynchronous, parsing the data and rendering the template is still enough work to dramatically affect throughput.

    Yes, I understand that work can be handed off to another thread, but then how much are you gaining for having to programmatically manage threads? From my tests, in this scenario, it’s dramatically faster to use even 10-year-old JSPs with a reasonably sized thread pool and let the OS and processor balance the workload. Throw into the mix some non-blocking IO with Netty and an Actor pattern to feed the worker thread pool and it leaves Node.js in the dust.

    Can’t argue against people just wanting to use JavaScript only.

    Sorry to bust your chops but seriously…
    When would you not use Node.js?

    Tim

    1. Hi Tim,

      Funnily enough, I think the use case you’re describing – a site with lots of large templates – is exactly in the sweet spot of node.js, especially with web frameworks like Express that have built-in templating.

      One of the problems with the servlet model is that you are constrained to one thread per request, which leaves you susceptible to the C10k problem. You simply can’t scale linearly to that many concurrent requests without your thread management complexity getting out of hand.

      If it takes more than a few milliseconds to populate data into a template (and remember, retrieving the data and retrieving the template are happening off the main event loop) you’re already in a whole different kind of trouble. Also if you’re looking at these kinds of loads on a large template web site, you should be looking to standard web caching strategies (expiry headers and etags) to make page fragments less expensive to co-ordinate. In this day and age I would think twice before designing a high load, high throughput site using a thread-per-request model.
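
      To make that concrete, here’s a toy sketch (the function names are mine) of what “off the main event loop” means: the thread registers a callback for the slow work and is immediately free for the next request.

```javascript
// Toy version of the evented model: slow work goes to a callback,
// so the single thread never blocks waiting for it.
var log = [];

function handleRequest() {
  log.push('request received');
  // simulate an asynchronous data fetch (a database call, say)
  setTimeout(function () {
    log.push('data ready, template rendered');
  }, 0);
  // we get here without waiting for the data
  log.push('event loop free for the next request');
}

handleRequest();
console.log(log.join(' -> '));
// request received -> event loop free for the next request
```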

      To your second question, I try to consider the ecosystem I’m designing for. Who will be maintaining this codebase after me (or as well as me)? Also, what are the integration points for this application, and what are the best technologies for integrating with those? For instance, robust support for protocols like IMAP, SMTP and LDAP are relatively recent in node.js, so before then I would probably have considered Python (either CherryPy for a regular web app or Tornado for an evented stack where most of the work is I/O bound) or even good old Java. But in the latter case I would take a modern evented web server like Webbit rather than suffer the bloat and thread-per-request model of a traditional servlet container.

      1. OK, but how do I reconcile that with my test results? And I’m not arguing against the patterns, just against the Node.js implementation having a monopoly on the goodness. I’m getting 3 times the throughput with Netty vs Node. Who cares if the runtime is the JVM?

        Tim

    2. I’m not criticising the performance of the JVM. Far from it: these days the JVM is bonkers fast. Also, Netty is another evented web server, so it doesn’t have the same limitations as a traditional servlet container. Your coding model for Netty will be similar to node’s: you respond to inbound events (requests) and ask your 10-year-old JSP – which is effectively just a big print function – to produce your output.

      I don’t know your specific app, but I’d be surprised if you couldn’t get the JavaScript and Java versions a lot closer in performance by understanding more about node.js and JavaScript’s performance characteristics. It’s extremely easy to write poorly-performing JavaScript, and not too hard to speed it up by orders of magnitude.

      Erik Corry, one of the core V8 developers, gave a talk at JAOO a couple of years ago describing some of its optimisations and how to avoid writing poorly-performing JavaScript. It might even be online somewhere.
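
      One concrete example of the kind of thing V8 rewards (my own sketch, not from the talk): keep your object shapes stable so property access stays monomorphic.

```javascript
// V8 assigns each object a "hidden class" based on its shape.
// Constructing every instance with the same fields in the same order
// lets hot-loop property access compile to a fast fixed-offset load.
function Point(x, y) {
  this.x = x; // same fields, same order, every time
  this.y = y;
}

function sumX(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    total += points[i].x; // monomorphic access: the fast path
  }
  return total;
}

var pts = [];
for (var i = 0; i < 1000; i++) {
  pts.push(new Point(i, i * 2));
}
console.log(sumX(pts)); // 499500

// By contrast, adding properties ad hoc (say, pts[0].z = 1 on just one
// object) changes that object's hidden class and can de-optimise the loop.
```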

  43. […] Of course I am only repeating what others are preaching about the recent rise of JavaScript. […]

  44. You are SPOT ON. Excellent work! Amazing how persistent some culture-lag can be.

  45. […] Of course I am only repeating what others are preaching about the recent rise of JavaScript. […]

  46. Excellent history! Maybe discuss how moving the battle from the ghetto to github was another game changer.

  47. great summary, dan!
