JavaScript Performance

January 13, 2012 10:09 pm | 20 Comments

Last night I spoke at the San Francisco JavaScript Meetup. I gave a brand new talk called JavaScript Performance that focuses on script loading and async snippets. The snippet example I chose was the Google Analytics async snippet. The script-loading part of that snippet is only six lines, but a lot of thought and testing went into it. It’s a great prototype to use if you’re creating your own async snippet. I’ll tweet if/when the video of my talk comes out, but in the meantime the slides (Slideshare, pptx) do a good job of relaying the information.

There are two new data points from the presentation that I want to call out in this blog post.

Impact of JavaScript

The presentation starts by suggesting that JavaScript is typically the #1 place to look for making a website faster. My anecdotal experience supports this hypothesis, but I wanted to try to do some quantitative verification. As often happens, I turned to WebPagetest.

I wanted to test the Alexa Top 100 URLs with and without JavaScript. To load these sites withOUT JavaScript I used WebPagetest’s “block” feature. I entered “.js”, which tells WebPagetest to block every HTTP request with a URL that contains that string. Each website was loaded three times and the median page load time was recorded. I then found the median of all these median page load times.
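
To make the aggregation concrete, here’s a minimal sketch in JavaScript using made-up load times (three runs per site):

// Hypothetical load times in seconds, three WebPagetest runs per site.
var runsBySite = [
  [3.2, 3.7, 3.5],
  [4.1, 3.9, 4.4],
  [2.8, 3.1, 2.9]
];

function median(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Per-site median of the three runs, then the median across all sites.
var siteMedians = runsBySite.map(median);
var overall = median(siteMedians);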

The median page load with JavaScript is 3.65 seconds. Without JavaScript the page load time drops to 2.487 seconds – a 31% decrease. (Here’s the data in WebPagetest: with JavaScript, without JavaScript.) It’s not a perfect analysis: Some script URLs don’t contain “.js” and inline script blocks are still executed. I think this is a good approximation and I hope to do further experiments to corroborate this finding.

Async Execution Order & Onload

The other new infobyte has to do with the async=true line from the GA async snippet. The purpose of this line is to cause the ga.js script to not block other async scripts from being executed. It turns out that some browsers preserve the execution order of scripts loaded using the insertBefore technique, which is the technique used in the GA snippet:

var ga = document.createElement('script');
ga.type = 'text/javascript';
ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(ga, s);

Preserving execution order of async scripts makes the page slower. If the first async script takes a long time to download, all the other async scripts are blocked from executing, even if they download sooner. Executing async scripts immediately as they’re downloaded results in a faster page load time. I knew old versions of Firefox had this issue, and setting async=true fixed the problem. But I wanted to see if any other browsers also preserved execution order of async scripts loaded this way, and whether setting async=true worked.
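
To make this concrete, here’s a sketch of the same insertBefore pattern generalized into a helper that loads any script (the URLs are hypothetical). Setting async=true on each script asks order-preserving browsers to execute whichever script finishes downloading first:

function loadAsync(src) {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  // Ask the browser to execute this script as soon as it downloads,
  // rather than in insertion order.
  script.async = true;
  script.src = src;
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(script, s);
}

loadAsync('http://www.example.com/slow-but-inserted-first.js');
loadAsync('http://www.example.com/fast-but-inserted-second.js');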

To answer these questions I created a Browserscope user test called Async Script Execution Order. I tweeted the test URL and got 348 results from 60+ different browsers. (Thanks to all the people that ran the test! I still need results from more mobile browsers so please run the test if you have a browser that’s not covered.) Here’s a snapshot of the results:

The second column shows the results of loading two async scripts with the insertBefore pattern AND setting async=true. The third column shows the results if async is NOT set to true. Green means the scripts execute immediately (good) and red indicates that execution order is preserved (bad).

The results show that Firefox 3.6, OmniWeb 622, and all versions of Opera preserve execution order. Setting async=true successfully makes the async scripts execute immediately in Firefox 3.6 and OmniWeb 622, but not in Opera. Although this fix only applies to a few browsers, its small cost makes it worthwhile. Also, if we get results for more mobile browsers I would expect to find a few more places where the fix is necessary.

I also tested whether these insertBefore-style async scripts block the onload event. The results, shown in the fourth column, are mixed if we include older browsers, but newer browsers generally block the onload event when loading these async scripts – this is true in Android, Chrome, Firefox, iOS, Opera, Safari, and IE 10. This is useful to know if you’re wondering why you still see long page load times even after adopting async script loading. It also means that code in your onload handler can’t reliably assume async scripts are loaded, because many browsers out there do not block the onload event, including IE 6-9.
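
In practice this means an onload handler shouldn’t call into an async script directly. Here’s a minimal sketch using GA’s _gaq command queue (which buffers commands until ga.js executes), so it’s safe whether or not the browser blocked onload:

window.onload = function() {
  // Depending on the browser, ga.js may or may not have executed yet,
  // so don't call its API directly. Pushing onto the _gaq queue is safe
  // either way: ga.js processes any queued commands when it runs.
  window._gaq = window._gaq || [];
  window._gaq.push(['_trackEvent', 'Timing', 'onload fired']);
};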

And a final shout out to the awesomeness of the Open Source community that makes tools like WebPagetest and Browserscope available – thanks Pat and Lindsey!

20 Responses to JavaScript Performance

  1. Interesting stats, Steve. I see some problems, though. It looks like there may have been connectivity issues for some of the sites. For example, Imgur had a median load time of 45.134s in the JS test and 4.551s without. I wonder if these hiccups skewed the results.

  2. Setting async=true successfully makes the async scripts execute immediately in Firefox 3.6 and OmniWeb 622, but not in Opera. Although this fix only applies to a few browsers, its small cost makes it worthwhile.

    I disagree it’s worthwhile in all situations — it really depends IMHO. The .async = true part is only worth adding if all of these conditions are met:

    - You load more than one script asynchronously by dynamically inserting it into the document.
    - You still care about getting the best possible performance in Firefox 3.6.
    - You support OmniWeb 622. (I sure don’t.)

    Even in cases where the first bullet point applies, the last two bullet points don’t apply to me anno 2012.

    For what it’s worth, Kyle Simpson told me about this behavior back in 2010 and I documented it in my write-up on the optimized Google Analytics snippet. We didn’t know about OmniWeb, though – luckily, that’s not a browser I have to support.

  3. @Dan: That looks like an issue with WebPagetest determining when the page is “done” because there’s some lazy loaded resources. I used median instead of average so this doesn’t affect the stats.

  4. @Mathias: For the sake of a few more characters I think it’s worthwhile to support these browsers that are known to need the fix, plus the handful of others that we don’t know about (yet). Also, having async=true might bring other benefits as browsers get smarter about supporting the async attribute. But I totally agree with you – Firefox 3.6 and OmniWeb are not major browsers.

  5. @Steve: My point is, if you don’t even test your website in these browsers (I’d think most people don’t) to see if the CSS and JS behave as expected, there’s no point in blindly adding the .async = true.

    Thanks for digging into this!

  6. Steve:

    having async=true might bring other benefits as browsers get smarter about supporting the async attribute.

    Could you elaborate on this, please? If I understand the current HTML spec correctly, all dynamically inserted scripts default to .async = true. (This has been implemented in all major browsers.)
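
    (A quick console check illustrates the default:)

    // Per the spec, dynamically created scripts default to async=true;
    // parser-inserted scripts default to async=false.
    var dynamic = document.createElement('script');
    console.log(dynamic.async); // true in browsers implementing the spec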

  7. About the “Async Execution Order & Onload” tests, I wonder what you consider to be the normal (best) behaviour for the fourth column. For me, a script loaded asynchronously should not block the onload event in any way. Why would you want to load a script asynchronously while having it block the onload event?

    Doesn’t IE 6-9 have the best behaviour on that count (just this once won’t hurt)?

    Thanks for the good info!

  8. @Mathias: The bar is higher for someone creating a 3rd party snippet that will be used on a wider variety of pages and browsers than can feasibly be tested. The point is that following a pattern from a widely popular snippet like GA is a good move and shouldn’t be characterized as “blindly” adding functionality. Having a good understanding of why each line is in the snippet is valuable – that’s the goal of this post. On your second point, it’s true that .async defaults to true for dynamically inserted scripts in newer browsers, but not in those edge case browsers – which is why setting it explicitly solves the problem. Going forward it might be the case that letting the browser set it during its processing of the async snippet is the same as setting it explicitly – we’ll have to see.

    @happypoulp: There are use cases that argue for both behaviors. As you point out, it might be nicer for onload to fire without waiting for async scripts, in order to record a faster onload time. On the other hand, if I’m lazy loading more scripts in the onload handler, I might not want them to contend with the async scripts in the body for execution time, so blocking onload is better. I don’t think there’s a clear winner. Consistent behavior across browsers would be the preferred solution.
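
    (To illustrate the lazy-loading scenario, here’s a sketch with a hypothetical script path:)

    window.onload = function() {
      // Lazy load non-critical code once the page is done. If the browser
      // blocked onload on the async scripts, this script won't contend
      // with them for execution time.
      var s = document.createElement('script');
      s.src = '/js/non-critical.js'; // hypothetical path
      document.getElementsByTagName('head')[0].appendChild(s);
    };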

  9. Great article, Steve, and great slides.

    While my experience also shows JS is likely the biggest performance hog on a page, I think the test above is too simplistic.

    If you look at the average page size with and without JS, you’ll see removing JS reduced the average size from ~550K to ~330K, and the number of requests from 53 to 37.

    This means the time delta can easily be explained by the reduced page size, regardless of whether it was JS or images that were removed. It would be worth doing some statistical analysis of how JS requests correlate to load time compared to other resource types, or at the very least repeating the same test while blocking CSS and images for comparison.

  10. In playing heavily with WebPagetest and getting to about the 90% mark, I found that the next easiest optimization was to load jQuery only on the pages that need it. This may sound super obvious, but many sites include JS libraries site-wide; narrowing each page to just the scripts it needs makes a noticeable perf difference when you consider it site-wide and in the aggregate.

    This gained 5% more, with pages firing window.onload in 200-500 ms.
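
    (A rough sketch of the idea, using a hypothetical per-page flag:)

    // needsJQuery is a hypothetical flag, set only by pages that use jQuery.
    if (window.needsJQuery) {
      var s = document.createElement('script');
      s.src = '/js/jquery.min.js'; // hypothetical path
      s.async = true;
      document.getElementsByTagName('head')[0].appendChild(s);
    }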

  11. @Steve:

    The point is that following a pattern from a widely popular snippet like GA is a good move and shouldn’t be characterized as “blindly” adding functionality

    Google’s GA snippet is definitely a good place to start, but I’d prefer to see more developers analyze third party snippets (in general) thoroughly before including them as-is rather than blindly copy-pasting. Even when a snippet comes from a trusted authority on web performance.

    Having a good understanding of why each line is in the snippet is valuable – that’s the goal of this post.

    Then we’re in agreement :) It’s important to fully understand what each line in the snippet does. However, the current GA snippet does contain some redundant stuff. What does ga.type = 'text/javascript'; do, for example? Are there any edge case browsers that actually need this line? I don’t know of any, but perhaps the Google Analytics JS team does?

  12. Steve – Great post. I read it with interest, not as an engineer, but as someone who works with online publishers who are frequently removing JS so as to speed up their sites. Like many publisher tools providers, my company’s mode of deployment involves publishers adding JS to their sites.

    Publishers are at war to shave milliseconds off load times – an effort driven by Google’s statements that slower sites will get less SEO juice. Script load time is one of the top questions we are asked.

    Given that Google is a producer of JS, I’m troubled by the conflict of Google making the (ambiguous) rules AND benefiting from them. How fast is fast enough? Google doesn’t advocate the removal of Google Analytics’ JS, obviously.

    I’d like to believe that Google’s focus in this area is to penalize “slow” sites that load in 10 seconds or more, and _not_ sites that load in 3.65 seconds instead of 2.487 (I’m using your numbers from above, but don’t mean to imply that you’re saying 2.487 is ideal or that all js should be removed to optimize for SEO).

    Yes, a 31% improvement in load time is huge. But are we being unnecessarily obsessive below 3 seconds?

    Any clarity or references to other posts would greatly be appreciated.

  13. @Jim: There are many case studies that show the negative impact slow web pages have on business metrics and the user experience. Even 400ms can make a significant difference.

  14. @Jim glad to see you joining the performance discussion ;)

    I’d love to chat about business reasoning behind all of the performance stuff – come over to our meetup or let’s just grab lunch some day ;)

  15. @Dan: The results for Imgur are accurate. That long page load time (45 seconds) is from a request to rlcdn.com that redirects to a script that does document.write for a tracking pixel. The request fails but takes 30 seconds to timeout.

  16. Just a thought that regularly comes to mind when I hear people talking about async and defer in scripts: wouldn’t it be easier for CDNs to adopt CORS headers and start loading scripts via XHR and eval? With CORS + eval / script injection you get absolute control over when to request and when to execute.

    It seems it would be far easier and faster to expect JavaScript-serving CDNs/hosts to update their headers than to wait until the populace has updated their browsers, and the approach has quasi-support down to IE8.

    I usually don’t get much involved in these conversations because I always more or less assume I’ll never use defer/async b/c of this (though obviously I may use a snippet that does).
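
    (A rough sketch of what I mean, assuming the host sends an Access-Control-Allow-Origin header:)

    // Download is decoupled from execution: fetch via XHR now,
    // eval whenever you choose.
    function fetchScript(url, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);
      xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
          callback(xhr.responseText);
        }
      };
      xhr.send();
    }

    fetchScript('http://cdn.example.com/lib.js', function(code) {
      eval(code); // execute at the moment of your choosing
    });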

  17. What about the impact of images? Apart from the fact that you’re not really proving the “JavaScript is the #1 place to look for making a website faster” statement, I wanted to put this to the test.

    As a follow-up to our conversation on Twitter (https://twitter.com/#!/souders/status/158070791659986945), here are my numbers:

    With images: 3.3965 sec.
    Without images: 2.275 sec. This is a 33% decrease.

    It must be noted that the blocking rule was just “.png .gif .jpg .jpeg”, which does not eliminate favicons, server-side generated images (often seen with a .php extension), and of course Flash.

    With images: http://www.webpagetest.org/result/120118_K6_4a2273c148243934ef3064a87edc7fd1/
    Without images: http://www.webpagetest.org/result/120118_HD_e41457aff4811b43c72590f8ab42bc10/

    All of our measurements are a bit too flaky IMHO, and JavaScript and images definitely require a whole different approach when optimizing, so I’m not saying that images are now the #1 place to look. But they deserve at least the same kind of attention.

    (Btw, your numbers would be 3.61 and 2.481 seconds respectively, but the 31% decrease would remain the same. I computed these with the Numbers app.)

  18. @Adam: Loading all scripts using CORS as XHR might turn out to be faster, but it’s certainly not easier.

    @Lars: Cool test, thanks. The reason JavaScript is a bigger problem is that it blocks rendering in most cases. Reducing JavaScript impact therefore makes rendering happen sooner, whereas reducing images has less effect on render time. For example, in your tests for CNN.com the start render time with images is 1.841s vs 1.855s without images – not much difference. For CNN.com the start render time with JS is 1.811s vs 0.894s without JS – a significant speedup of rendering. This was the first URL I chose, so I wasn’t hunting for a winning example. If you had time to pull down the csv for all four results and compare median start render times that would be a more conclusive comparison. I’m betting the removal of JS helps rendering much more than images.

  19. Steve, I completely agree. I just think it is important to make clear what exactly you are talking about. I think we can agree that JS has a major impact on *perceived* performance (significantly more than images), while images and JS have roughly the same impact on page load time:

    Render time median for JS (ms):
    With: 1387.5
    Without: 1069 (23% decrease)

    Render time median for images (ms):
    With: 1343.5
    Without: 1334 (<1% decrease)

    We should be careful with these numbers; I think we can do better (e.g. also not executing inline JS; generated images are even more expensive; etc.). But they do give a nice impression (more or less confirming our expectations).

    Thanks for the article and bringing this to our attention.

    Many thanks also to Pat Meenan from http://webpagetest.org for allowing us to do our tests so easily!

  20. Steve, I’m working on a tiny loader for JavaScript sources that uses CORS, to make the web faster!

    The goal is to load multiple JS files in parallel and execute them in order without blocking DOMReady or onload.

    https://github.com/pablomoretti/jcors-loader