Render first. JS second.

September 30, 2010 11:10 am | 28 Comments

Let me start with the takeaway point:

The key to creating a fast user experience in today’s web sites is to render the page as quickly as possible. To achieve this, JavaScript loading and execution have to be deferred.

I’m in the middle of several big projects so my blogging rate is down. But I got an email today about asynchronous JavaScript loading and execution. I started to type up my lengthy response and remembered one of those tips for being more productive: “type shorter emails – no one reads long emails anyway”. That just doesn’t resonate with me. I like typing long emails. I love going into the details. But, I agree that an email response that only a few people might read is not the best investment of time. So I’m writing up my response here.

It took me months to research and write the “Loading Scripts Without Blocking” chapter from Even Faster Web Sites. Months for a single chapter! I wasn’t the first person to do async script loading – I noticed it on MSN way before I started that chapter – but that work paid off. There has been more research on async script loading from folks like Google, Facebook and Meebo. Most JavaScript frameworks have async script loading features – two examples are YUI and LABjs. And 8 of today’s Alexa Top 10 US sites use advanced techniques to load scripts without blocking: Google, Facebook, Yahoo!, YouTube, Amazon, Twitter, Craigslist(!), and Bing. Yay!

The downside is – although web sites are doing a better job of downloading scripts without blocking, once those scripts arrive their execution still blocks the page from rendering. Getting the content in front of the user as quickly as possible is the goal. If asynchronous scripts arrive while the page is loading, the browser has to stop rendering in order to parse and execute those scripts. This is the biggest obstacle to creating a fast user experience. I don’t have scientific results that I can cite to substantiate this claim (that’s part of the big projects I’m working on). But anyone who disables JavaScript in their browser can attest that sites feel twice as fast.

My #1 goal right now is to figure out ways that web sites can defer all JavaScript execution until after the page has rendered. Achieving this goal is going to involve advances from multiple camps – changes to browsers, new web development techniques, and new pieces of infrastructure. I’ve been talking this up for a year or so. When I mention this idea these are the typical arguments I hear for why this won’t work:

JavaScript execution is too complex
People point out: “JavaScript is a powerful language and developers use it in weird, unexpected ways. Eval and document.write create unexpected dependencies. Delaying all JavaScript execution across the board is going to break too many web sites.”

In response to this argument I point to Opera’s Delayed Script Execution feature. I encourage you to turn it on, surf around, and try to find a site that breaks. Even sites like Gmail and Facebook work! I’m sure there are some sites that have problems (perhaps that’s why this feature is off by default). But if some sites do have problems, how many sites are we talking about? And what’s the severity of the problems? We definitely don’t want errors, rendering problems, or loss of ad revenue. Even though Opera has had this feature for over two years (!), I haven’t heard much discussion about it. Imagine what could happen if significant resources focused on this problem.

The page content is actually generated by JS execution
Yes, this happens too much IMO. The typical logic is: “We’re building an Ajax app so the code to create and manipulate DOM elements has to be written in JavaScript. We could also write that logic on the server side (in C++, Java, PHP, Python, etc.), but then we’re maintaining two code bases. Instead, we download everything as JSON and create the DOM in JavaScript on the client – even for the initial page view.”
If this is the architecture for your web app, then it’s true – you’re going to have to download and execute a boatload of JavaScript before the user can see the page. “But it’s only for the first page view!” The first page view is the most important one. “People start our web app and then leave it running all day, so they only incur the initial page load once.” Really? Have you verified this? In my experience, users frequently close their tab or browser, or even reboot. “But then at least they have the scripts in their cache.” Even so, the scripts still have to be executed and they’re probably going to have to refetch the JSON responses that include data that could have changed.
Luckily, there’s a strong movement toward server-side JavaScript – see Doug Crockford’s Loopage talk and node.js. This would allow that JavaScript code to run on the server, render the DOM, and serve it as HTML to the browser so that it renders quickly without needing JavaScript. The scripts needed for subsequent dynamic Ajaxy behavior can be downloaded lazily after the page has rendered. I expect we’ll see more solutions to address this particular problem of serving the entire page as HTML, even parts that are rendered using JavaScript.
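To make the idea concrete, here’s a rough sketch (not any particular framework; the render function and script URL are made up) of the same rendering logic running on the server under node.js, so the first response is plain HTML and the behavior script is only fetched after the page has rendered:

    // Hypothetical sketch: the render logic is JavaScript but runs on the
    // server, so the browser receives plain HTML and can paint immediately.
    var http = require('http');

    function renderItemList(items) {
      // The same function could be reused client-side for later Ajax updates.
      return '<ul>' + items.map(function (item) {
        return '<li>' + item + '</li>';
      }).join('') + '</ul>';
    }

    http.createServer(function (req, res) {
      var html =
        '<!DOCTYPE html><html><head><title>Example</title></head><body>' +
        renderItemList(['one', 'two', 'three']) +
        // Lazily load the Ajaxy behavior after the content has rendered.
        '<script>window.onload = function () {' +
        '  var s = document.createElement("script");' +
        '  s.src = "/static/behavior.js";' +   // made-up URL
        '  document.getElementsByTagName("head")[0].appendChild(s);' +
        '};</script></body></html>';
      res.writeHead(200, {'Content-Type': 'text/html'});
      res.end(html);
    }).listen(8000);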
Doing this actually makes the page slower
The crux of this argument centers around how you define “slower”. The old school way of measuring performance is to look at how long it takes for the window’s load event to fire. (It’s so cool to call onload “old school”, but a whole series of blog posts on better ways to measure performance is called for.)
It is true that delaying script execution could result in larger onload times. Although delaying scripts until after onload is one solution to this problem, it’s more important to talk about performance yardsticks. These days I focus on how quickly the critical page content renders. Many of Amazon’s pages, for example, have a lot of content and consequently can have a large onload time, but they do an amazing job of getting the content above-the-fold to render quickly.
WebPagetest.org’s video comparison capabilities help hammer this home. Take a look at this video of Techcrunch, Mashable, and a few other tech news sites loading. This forces you to look at it from the user’s perspective. It doesn’t really matter when the load event fired; what matters is when you can see the page. Reducing render time is more in tune with creating a faster user experience, and delaying JavaScript is key to reducing render time.
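Until better instrumentation comes along, one crude way to approximate the render time of the critical content is to record a timestamp at the top of the page and another in an inline script placed right after the above-the-fold markup (the variable names and the /beacon URL below are made up):

    <html>
    <head>
      <script>var t_start = new Date().getTime();</script>
    </head>
    <body>
      <div id="critical-content"> ... above-the-fold content ... </div>
      <script>
        // Roughly when the critical content could first be displayed.
        var t_render = new Date().getTime();
        // Report it after onload so the beacon doesn't compete with the page.
        window.onload = function () {
          (new Image()).src = '/beacon?render=' + (t_render - t_start);
        };
      </script>
      ... rest of the page ...
    </body>
    </html>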

What are the next steps?

  • Browsers should look at Opera’s behavior and implement the SCRIPT ASYNC and DEFER attributes (see the markup sketch after this list).
  • Developers should adopt asynchronous script loading techniques and avoid rendering the initial page view with JavaScript on the client.
  • Third party snippet providers, most notably ads, need to move away from document.write.
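For reference, the markup for that first step looks like this (the script names are placeholders, and browser support for the two attributes varied at the time of writing):

    <!-- async: downloaded without blocking, executed as soon as it arrives,
         in no guaranteed order -->
    <script async src="widgets.js"></script>

    <!-- defer: downloaded without blocking, executed after parsing finishes,
         in document order, before DOMContentLoaded -->
    <script defer src="enhancements.js"></script>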
Rendering first and executing JavaScript second is the key to a faster Web.

28 Responses to Render first. JS second.

  1. Great article as always, and very timely.

    But I’m concerned that you’ve glossed over a really important point when you start “deferring” JavaScript by a noticeable amount on pages. I call it FUBC (Flash of Un-Behaviored Content). It’s a different spin on the better-known FOUC, where content displays before the CSS that makes it look pretty has arrived.

    FUBC becomes a much more noticeable problem when you have content (and CSS) that is displaying before the JavaScript arrives. It evidences itself in two main ways:

    1. content that has the web 1.0’ish fallback functionality (like form posts in lieu of ajax), such that if you interact with the page too quickly, you’re getting the degraded experience. This is especially jarring if you’re already typing into a text box and all of a sudden JavaScript arrives and takes over the box and redoes it, etc.

    Facebook has Primer, where they define some default (JavaScript-driven) fallback behaviors for all (or most) of their page widgets. This is a good approach (loading javascript functionality in layers)… but the main key there is that Primer is loaded synchronously/blocking, so it’s guaranteed to be there before a user can start interacting.
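    Something along these lines (a hypothetical sketch, not Facebook’s actual Primer code): a tiny blocking script defines placeholder behaviors up front, and the full library overwrites them when it arrives.

        <!-- Hypothetical sketch of the "layers" idea, not Facebook's code. -->
        <script>
          var UI = {
            // Baseline behavior: if the form is used before the full library
            // arrives, just submit it the old-fashioned way.
            submitComment: function (form) { form.submit(); }
          };
        </script>
        ...
        <form onsubmit="UI.submitComment(this); return false;"> ... </form>
        ...
        <!-- full-library.js (a made-up name) later replaces UI.submitComment
             with the Ajax version, loaded without blocking. -->
        <script async src="full-library.js"></script>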

    2. Worse, you have JavaScript like jQuery-UI tabset plugin which actually drastically reorganizes your page with new script-enabled widget goodness. This is *very* jarring if you visit a page and see three stacked content divs, and then a second later they all disappear and a tabset shows up in place of it.

    The problem with FUBC is much more complex than just “defer the loading of your JavaScript”. It takes very careful UX planning to consider the experience the user will actually have in that small interim before JavaScript arrives.

    In fact, I’d argue that you can actually render a page TOO QUICKLY (I know, crazy coming from a performance nerd like me!) such that you should have kept things silent/hidden just a little longer until they were more ready for a user to see.

    ——–
    I obviously applaud your efforts to call attention to the area of needed work — getting pages to *feel* like they render faster. But I feel it’s going to take a LOT more than just deferring the JS loading.

  2. So simple and direct – thank you!

    But, one problem that I’ve hit multiple times: I have a page UI with Javascript behaviours attached to half the controls. If there’s a significant delay between render completion and JS execution, we have a period in which those controls will be unresponsive; any clicks will be entirely lost rather than just laggy.

    Are there good techniques for dealing with this? Is it worth executing a tiny bit of JS to attach a minimal event buffer to all the controls, or can we make the time to full behavioural attachment so short that it doesn’t matter?

  3. Great article Steve. Kyle raises some good points. I cannot count the number of times I’ve started to fill out a web form only to have some deferred JavaScript replace my information with placeholder text.

    I think the key here is: plan. plan. plan. Performance optimizations are not something that can be sprinkled onto a site like salt without any thinking or planning. Sometimes it makes sense to defer, sometimes it can foobar the functionality of the site, maybe of a widget you don’t even control. If deferring is appropriate, you need to correct your logic to work in a deferred manner and not assume it has first crack at everything.

    Thanks!

  4. I’m agreeing with Getify here. There is an unwritten contract between render and behavior that needs to stay in sync. I can’t display a page with a link that doesn’t work. I’m not sure I would call it a FUBC myself, as it at first seems like an insult to a Canadian Province, but the concept is dead on.

    Discussions here involve making sure you don’t have a FUBC on anything above the fold (Newspaper term for what you see on the average screen without scrolling). Below the fold you can load content then behavior.

    The net of PageSpeed is to reduce user frustration. Not seeing a page quickly, having it reflow before your eyes, and having it not respond as contractually promised by the inherent layout are all equal aspects of user experience, and therefore of user happiness, time spent on site, and revenue.

    Contractually important elements must function, even if enabled by JavaScript, upon render. Niceties can wait.

    So the DIV that “pops up” a preview of a website in a higher z-index when clicked needs to work immediately. The fact that it fades in and dissolves out can wait until later.

    We solve this with a series of queues: a queue to load our scripts asynchronously, with a callback to load their widgets and contents (perhaps after domReady), and a queue to load extra functionality after all of the original queues have emptied.

    This gives clear breaks as to what is contractually important, and what is just coolness.

    I don’t think that this: “My #1 goal right now is to figure out ways that web sites can defer all JavaScript execution until after the page has rendered.” is a good goal as it focuses on speed and display in lieu of user experience and “contracted” behavior.

    @davehamptonusa on twitter

  5. well, having a page displayed with no JavaScript and still being usable is one of the web development best practices anyway!
    remember developing in layers? HTML first, then CSS, then JS, and at each step it should work
    it was good for accessibility, usability, SEO, you name it, and now it fits well with performance recommendations

    that does not mean that everyone does it, but good developers try anyway.
    About web applications however (a minority of sites, but the most interesting), I’m not sure how we are supposed to do without that level of JS. Probably having a loader like Y! Mail used to have and Gmail still shows is the only way if we don’t want to duplicate the coding effort on the backend and on the frontend.

  6. Great feedback.

    @Kyle, @Dave, & @Billy: I’m laying out a big challenge here. It’s going to require a lot of thinking and work to make the leap. But I guarantee you this – pages that require a lot of JS to render their content are going to fall behind those sites that figure out a way around that architecture. There are solutions to many of the problems you describe, and then you can counter with a scenario that falls outside that solution. Let’s move toward delaying JS and, if there’s a scenario where that’s difficult and we can’t figure out a solution (really?!) then so be it. We need to do more thinking around how to make this happen for as much of the JS as possible.

    @Yoz: Yes, the race condition quick-click scenarios need to be addressed. Setting up stub functions in a bit of JS to give the user feedback, and perhaps queuing events for later processing, are good ideas.
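    A rough sketch of that stub-and-queue idea (the “js-only” class and handleClick are made up): a tiny inline script captures clicks on JS-dependent controls, and the deferred script replays them once the real handlers exist.

        // Inline in the page, before any content the user can click on:
        var pendingClicks = [];
        document.onclick = function (e) {
          e = e || window.event;
          var el = e.target || e.srcElement;
          // "js-only" is a made-up class marking controls that need JS.
          if (el.className && el.className.indexOf('js-only') !== -1) {
            pendingClicks.push(el);  // remember what the user wanted
            return false;            // cancel the default action for now
          }
        };

        // Later, in the deferred script, once the real handlers are attached:
        document.onclick = null;
        for (var i = 0; i < pendingClicks.length; i++) {
          handleClick(pendingClicks[i]);  // hypothetical real handler
        }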

    @jpvincent: Yes, what I’m really getting at here is progressive enhancement. It strikes me that we’re going to need better information on the server about what the client can support. That might not exist now, but I could foresee HTTP request headers that advertised JS support, for example.

  7. I think that Dav Glass’s recent work with node.js and YUI3 shows some promise here: by using server side JavaScript you can create the pre-ajax’ed content on the server without maintaining two codebases. Then the client gets the pre-rendered HTML AND the scripts could be loaded asynchronously.

  8. One other thing to think about is advertising… Most, if not all, ad serving services (Google DFP, OpenX) rely on JavaScript to load ads.

    How much ad revenue would be lost by delaying loading and executing javascript? What about readers that bounce when the page is completely rendered but adverts are not yet loaded?

    Which brings the question… Are ad serving companies really “in sync” and realise that faster is better?

    What about a small publisher (like me), who uses Google DFP, which in turn loads the advertisers’ third party code? How much latency is added there?

  9. This was one of the points I made during my Velocity talk about the Yahoo! homepage: we made sure the content got out to the browser fast and started pulling in JavaScript later. The flash of unbehaviored content is a side effect of bad design more than anything else. Progressively-enhanced sites can avoid that with a little careful thinking.

    But I’m curious, if your goal is to defer JavaScript until after the page is rendered, doesn’t do that already? It ensures the script isn’t executed until DOMContentLoaded, by which point the page should be rendered (if not complete, while waiting for images, etc. to download). You could also, optionally, just wait for onload and then dynamically add a script node.
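    For instance, roughly (the file name is just a placeholder):

        window.onload = function () {
          var script = document.createElement('script');
          script.src = 'behaviors.js';  // placeholder name
          document.getElementsByTagName('head')[0].appendChild(script);
        };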

    At least on its surface, this doesn’t seem like a hard problem to solve. The real hard problem is getting people to untangle their page’s JavaScript that is likely to interfere with one of the aforementioned approaches.

    By the way, your CAPTCHA says “70 plus to”. Perhaps that should be “two”? :)

  10. @M Freitas: Here at ad company ValueClick, we completely realize the value of page speed and are beginning to roll out a new asynchronous/non-blocking interactive ad solution (codenamed Platform X) that does just that: loads when appropriate, doesn’t block the page, and plays nicely with others.

    We’re well past thinking that having our ads block the page so people can click em first is a good idea.

    We value long term relationships with our publishers and know the more successful they are, the more successful we are.

    @slicknet RE: CAPTCHA. know, it reeds gr8 as is.

  11. Steve,

    A timely and excellent post. The challenge is a huge one and one that will never be solved until you know DEVCAP. DEVCAP is the real time device capability. You have to be absolutely sure that the browser you’re talking to (sending the data to) can support what you’re asking it to do. Waiting for OEMs to upgrade their browsers to support these new faster techniques is really going to take forever. It will have to be solved using other techniques.

    Two other things that need to be mentioned – compression and Mobile. Data compression is now becoming a MUST have. People think that Gzip is good enough. It’s not – you need more. BZ2 improves the compression rate by upwards of another 30%. When every second counts you need to send fewer bits. Secondly, your blog omits Mobile. What you’ve said applies to Mobile but it’s never mentioned. We’re very fast approaching that old adage “this site best viewed in…”. Rendering first on Mobile is a must-have. You have two seconds to get the customer’s attention.

    We were analyzing a mobile page the other day from Yahoo being sent to an Android browser. It was over 300k! People just don’t understand that it’s a packet-based network, not a copper network.

    Which of course leads me to this final conclusion – how the heck do you measure how long it takes each element to get to the target device? Without that data it’s all guesswork.

    In closing – your blog (IMO) is spot on. The culprit is JavaScript – and we haven’t seen anything yet – HTML5 anyone?

    Peter.

  12. @M Freitas & @Dave: Great to talk about ads! Dave is absolutely right – any advertiser who thinks blocking the page content so the ad can show first has a short-term perspective. Content publishers might have to put up with that now, but advertising solutions (like ValueClick’s) are rolling out that serve ads without hurting the user experience. Those advertising solutions can use speed as a competitive differentiator and publishers will start listening.

    @Nicholas: That’s why I loved your talk at Velocity – you made Yahoo FP the example of what can be done. I think your HTML tag got stripped – you were probably talking about SCRIPT ASYNC. Yes – that is an easy way in future browsers to achieve delayed execution. The problem is it’ll be years before a significant number of sites use it, and even after 10 years a majority of sites are likely to still not be using it. (I have a weird captcha to get rid of bots. My spam has dropped in half.)

  13. One approach I discovered for dealing with the FOUC caused by JavaScript altering the layout is to run a script that adds a “hasJs” class to the html tag. That way CSS can handle the display differences (which should be minimal given a progressive enhancement approach, little drop-down indicators and the like.)

    Here’s one write-up of the approach: http://meanders.nsfdesign.com/fouc-no-enhancing-progressive-enhancement
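    In miniature it amounts to something like this (the selectors are just examples):

        <script>
          // Adds the class so CSS can target the JS-enhanced styling.
          document.documentElement.className += ' hasJs';
        </script>
        <style>
          /* Example: only show the drop-down indicator when JS will power it. */
          .menu .arrow { display: none; }
          .hasJs .menu .arrow { display: inline; }
        </style>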

    Now what I’ve been debating is when to run the script? Run it as soon as possible to avoid page re-flow? Or run it once your JavaScript has executed to avoid indicating a behavior that hasn’t been added yet?

  14. Isn’t it a tradeoff?

    To take an extreme example, would it be worth it to have the page render 1 millisecond earlier if it meant that javascript behavior would be delayed an extra 5 seconds? Conversely, would it be worth it to have javascript behavior enabled the instant the page rendered if it meant that the page waited 5 seconds to display?

    It might not be possible to have everything we want, so perhaps we should look at the frontier of possibilities and pick the best point.

  15. Making the onload event trigger as fast as possible is still important. Here is a case where “old school” measurements may be useful…

    Take for instance a poweruser (agreed, it’s a tiny fraction of the web) searching the web for something. They would likely open multiple results in different tabs, and monitor the “spinning” thing on the tabs; the one which stops spinning first would be the first the user would read – faster onload is fighting for eyeballs in this case.

    This is based on my personal behaviour while browsing; I don’t have any numbers to show how many people do this. Perhaps Google or Bing can shed some light on how many people open multiple tabs simultaneously.

    Thus if Amazon pages take longer to reach onload while I’m looking for products, it may not be the first site to get my attention.

  16. Ads are the worst! Their snippets are evil for performance. I don’t understand how AdSense and DoubleClick are still promoting the document.write statement.

  17. Good points Steve. I’m using a similar method to the one Nils mentioned for one of our web app dashboards. Get the browser showing something, ANYTHING, to the user as quickly as possible followed by the required JS enhancements.

    The dashboard uses jQuery UI tabs, a Flash (shudder) chart, and an iframe widget along with other content. To speed it up I hide most of the un-enhanced content using CSS until JS can finish its job.

    The user sees a page with empty space instead of the main content but with fully functional top navigation quickly, giving the impression of a fast page load. The deferred JS then does its magic and fades in the rest. CSS is also used for sizing to prevent significant reflow.

  18. WebKit just implemented “defer” and “async” attributes on script tags last week!

    http://webkit.org/blog/1395/running-scripts-in-webkit/

    And it looks like it’s been implemented in Firefox as well:

    http://hacks.mozilla.org/2009/06/defer/

    This means IE4+, Opera, Firefox, and soon Safari and probably Chrome will all support at least the “defer” attribute!

  19. There are further next steps beyond script defer/async — postponing the loading of JavaScript that only affects behaviour until the page is visited in a visible browser tab, i.e. the poweruser use case in comment 15 by sajal kayan would very quickly complete loading in a tab and only ever perform the rest of the JS if that tab is shown before the browser session ends.

    Other pay-offs here are reduced sluggishness when a user loads multiple bookmarks in tabs, and at browser startup time in a revived session with many tabs.

    Writing the js that tests for “this document is in a visible tab”, and the callback triggering on state transition from being not so to being visible, is less than trivial, though.
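    A rough sketch of one approximation, using the focus event that was already available (loadBehaviorScripts is a placeholder; focus is not the same as visibility, which is part of why this is tricky):

        var behaviorLoaded = false;
        function maybeLoadBehavior() {
          if (!behaviorLoaded) {
            behaviorLoaded = true;
            loadBehaviorScripts();  // placeholder for the deferred JS work
          }
        }
        // Fire when the tab/window gains focus...
        if (window.addEventListener) {
          window.addEventListener('focus', maybeLoadBehavior, false);
        } else if (window.attachEvent) {
          window.attachEvent('onfocus', maybeLoadBehavior);
        }
        // ...and right away if the page already has focus when this runs.
        if (document.hasFocus && document.hasFocus()) {
          maybeLoadBehavior();
        }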

  20. What are your thoughts on Charles Jolley’s method of deferring JS execution (also posted on Google Code)? I haven’t put it through its paces yet, but if it’s good enough for the Gmail team, I think it’ll be good enough for me. I’m very tempted to try it out in an upcoming mobile project I’m working on.

  21. @David: scripts with the DEFER attribute block the onload event. Scripts with the ASYNC attribute are executed as soon as the response arrives, so they can block rendering. More is needed.

    @Andrew: See my blog post on Charles’ work.

  22. Interesting article. The type of script that immediately popped into my mind was the Google Web Optimizer / Visual Website Optimizer kind of JS for running A/B tests. It’s hugely powerful to do client-side DOM switching for these tests, but they depend on seamlessly displaying different content to different users and I’d be concerned about FUBC as pointed out by Kyle.

  23. Very true. We tried a very similar approach in the new Yahoo Mail beta (http://features.mail.yahoo.com) where we started parallel downloads of the 2 main JS files after initial flushes, but delayed the execution till the page was fully rendered and observed good perf improvements. Would love to see your analysis on it when you find time.

  24. Steve,
    This is an excellent resource for me to point my architects and developers toward. Our challenge is essentially too much .js and way too much .axd. I want to reduce the .axd and convert them to .js so we can accelerate, but now I feel like all that will do is get everything to the client faster (improve network time) and do nothing to quicken either the perceived or the functional render.

    We’ve got some work to do…

  25. @Senthil P: Can you provide some information about your approach? Perhaps a YDN post?

  26. Great article; the comments are dead on too about how delaying JS can interfere with the user experience when the JS does eventually load. We have been playing around with delaying JS load until after the content but have found that it’s not feasible for all scripts for just that reason. The speed improvement benefit is negated by the jarring or broken user experience in the interim. Thank you for calling more attention and stimulating discussion on all of this though.

  27. Hope the following is considered relevant..

    I have been web dev’ing since about ’98, and although it’s all been PC work, my background is 8/16 bit machines where size and speed are a balancing act (to get modern output on old hardware). My main concern was always the speed of what I developed: 1st speed of delivery, then speed to render, then speed of “extras” (usually JS, but sometimes Flash).

    I developed an Atari ST prototype desktop (IE 5.5 only) back in 2000 on a 233MHz Win2K machine serving from IIS5. One of my main targets was to emulate the speed of the real desktop.

    I recently resurrected the prototype for a PHP file manager. What I have now (click my name up top) is 1 PHP page @ 145Kb that serves a Desktop with either a file manager or an editor. An empty Editor weighs in at 132Kb and includes a fully functioning editor (S&R, Find with hilite/select, Goto, select & caret retention, 20(x5) font select, and a calc) with resizable & movable windows (with retention).

    I can edit the PHP file in the editor within 5 seconds of opening it in the file manager, and 3.5 of those is waiting for the initial HTTP connect. Both menus are active and functioning within .5 sec, before the page has completely downloaded, let alone finished rendering.

    TIPS:
    1) inline everything: if it’s on your server or your server can get to it, put it in the HTML; avoid external includes where possible (incl. CSS + JS)
    2) pre-code all HTML elements: including hidden ones (display:none), specifically layout & alignment. Before any JS is run, it should look exactly how you want it to look.
    3) JS: A) don’t calc anything on pre-render, only define. B) inline JS in object attributes (i.e. don’t have them call a function until it is absolutely necessary). C) bind HTML attributes/properties OFF HTML elements, not ON them. D) use variable references, not expression references, especially for comparisons & loops. E) onload – copy any data OFF HTML elements that you are going to change regularly, and place it on OFF objects. F) don’t read from HTML elements, only write back what is needed/changed; pass ‘this.value’ if ‘value’ is needed. G) don’t have AJAX/JS return fat objects or change data that doesn’t need to be changed
    4) server-side: A) don’t ECHO, PRINT or FLUSH anything, DUMP everything in one go. B) use a custom “Accept-Encoding” compression delivery function; don’t use the server default, it’s usually STORE or DEFLATE; use GZ 9 where possible, add BZ as well. C) pre-program as much CSS, JS and HTML as physically possible (i.e. ALL). D) you can use includes and separate CSS + JS, just inline them into the HTML. E) use server-side functions to create HTML objects with pre-calc’d JS & CSS in the element. F) write AJAX/SOAP/etc that return the absolute slimmest data set possible; you can GZ/DEFLATE AJAX for big data sets if need be. G) pre-patch for browsers on the server, not in JS on the browser; PHP/ASP parse the JS files first if need be (especially external includes)
    5) test/develop your HTML output on the slowest machine physically available. If it’s not fast here, others will notice the difference too.
    6) don’t ECHO, PRINT or FLUSH anything, DUMP everything in one go. (yes I am repeating) I have a page that can output a 58Mb HTML table with time to spare, simply because of this statement. (I can provide code + output on request)

    The Atari ST prototype does have 2 external JS files, which save 12K + 18K download per page served. Both Desktops use (different) cookies to store page retention ‘onBeforeUnload’ and allow the server side to pre-calc HTML elements before rendering (including JS). Each HTML “group” (OS=Form) registers itself inline. ‘onLoad’ just sets caret, hilite/select, and scrollbar position + size. Inline CSS + JS in the head means all components are preset before BODY begins rendering. Object names + properties are identical across HTML, CSS, JS, AJAX, and PHP (or ASP, JSP, PL, CGI, OS etc when used).

    The result of all this is a working interface that is about 1000 times faster than anything else that even comes vaguely close to what it does. It serves at about the same speed as an equivalent JS-less HTML page, which is normally cached at 2-3 places between server & browser, and with more functionality.

    It doesn’t hog memory, browser or machine resources, yet it works identically on any browser from the IE 5.5 era up (including KHTML), and it has never caused Chrome to “freeze”, slow down or become unresponsive. It has never required a browser shutdown (unlike many jQuery sites now, and sites using some other libraries as well).

    Probably the worst thing it does is use JS ‘onError’ in IMG tags to load an unfound image (even after server side pre-calc)

    Apologies for any “self promotion”, but at the end of the day, I use this app to web dev on a daily basis, so it has to be at least as responsive and functional as local machine apps. And it is, more so in some respects…

  28. one JS DOM tip I forgot:
    add ” name=’xObj’ id=’xObj’ ” to any HTML objects you want to reference directly; now you can with “xObj.prop” or “xObj.meth()” without doing a DOM lookup (this works on EVERY browser since IE3 + NS2) – but don’t start the name with any form of ‘file..’ (e.g. File, FILE, fIlE, etc) as this breaks on every browser after IE6.0x, including FF3.6 and WebKit from 2008

    Soz for the length of prev post, but there are some good tips in there, and it’s fairly well condensed for 15 years of web dev..