Moving beyond window.onload()

May 13, 2013 9:13 am | 11 Comments

[Originally posted in the 2012 Performance Calendar. Reposting here for folks who missed it.]

There’s an elephant in the room that we’ve been ignoring for years:

window.onload is not the best metric for measuring website speed

We haven’t actually been “ignoring” this issue. We’ve acknowledged it, but we haven’t coordinated our efforts to come up with a better replacement. Let’s do that now.

window.onload is so Web 1.0

What we’re after is a metric that captures the user’s perception of when the page is ready. Unfortunately, perception.ready() isn’t on any browser’s roadmap. So we need to find a metric that is a good proxy.

Ten years ago, window.onload was a good proxy for the user’s perception of when the page was ready. Back then, pages were mostly HTML and images. JavaScript, CSS, DHTML, and Ajax were less common, as were the delays and blocked rendering they introduce. It wasn’t perfect, but window.onload was close enough. Plus it had other desirable attributes:

  • standard across browsers – window.onload means the same thing across all browsers. (The only exception I’m aware of is that IE 6-9 don’t wait for async scripts before firing window.onload, while most other browsers do.)
  • measurable by 3rd parties – window.onload is a page milestone that can be measured by someone other than the website owner, e.g., metrics services like Keynote Systems and tools like Boomerang. It doesn’t require website owners to add custom code to their pages.
  • measurable for real users – Measuring window.onload is a lightweight operation, so it can be performed on real user traffic without harming the user experience. (See the sketch after this list.)
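
Measuring it really is lightweight. Here’s a minimal sketch, assuming a browser with Navigation Timing support and a hypothetical /beacon endpoint on your own server:

```javascript
// Report the window.onload time for real users via the Navigation
// Timing API. The /beacon endpoint is hypothetical.
window.addEventListener("load", function () {
  var t = window.performance && performance.timing;
  if (!t) return; // older browsers lack Navigation Timing
  var onloadTime = t.loadEventStart - t.navigationStart;
  // Fire a beacon image so the measurement reaches the server.
  new Image().src = "/beacon?onload=" + onloadTime;
});
```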

Web 2.0 is more dynamic

Fast forward to today and we see that window.onload doesn’t reflect the user’s perception as well as it once did.

There are some cases where a website renders quickly but window.onload fires much later. In these situations the user perception of the page is fast, but window.onload says the page is slow. A good example of this is Amazon product pages. Amazon has done a great job of getting content that’s above-the-fold to render quickly, but all the below-the-fold reviews and recommendations produce a high window.onload value. Looking at these Amazon WebPagetest results we see that above-the-fold is almost completely rendered at 2.0 seconds, but window.onload doesn’t happen until 5.2 seconds. (The relative sizes of the scrollbar thumbs show that a lot of content was added below-the-fold.)


Amazon – 2.0 seconds (~90% rendered)

Amazon – 5.2 seconds (onload)

But the opposite is also true. Heavily dynamic websites load much of the visible page after window.onload. For these websites, window.onload reports a value that is faster than the user’s perception. A good example of this kind of dynamic web app is Gmail. Looking at the WebPagetest results for Gmail we see that window.onload is 3.3 seconds, but at that point only the progress bar is visible. The above-the-fold content snaps into place at 4.8 seconds. It’s clear that in this example window.onload is not a good approximation for the user’s perception of when the page is ready.


Gmail – 3.3 seconds (onload)

Gmail – 4.8 seconds (~90% rendered)

it’s about rendering, not downloads

The examples above aren’t meant to show that Amazon is fast and Gmail is slow. Nor are they intended to say whether all the content should be loaded before window.onload or after. The point is that today’s websites are too dynamic to have their perceived speed reflected accurately by window.onload.

The reason is that window.onload is based on when the page’s resources are downloaded. In the old days of only text and images, the readiness of the page’s content was closely tied to its resource downloads. But with the growing reliance on JavaScript, CSS, and Ajax, the perceived speed of today’s websites is better reflected by when the page’s content is rendered. As the adoption of these dynamic techniques increases, so does the gap between window.onload and the user’s perception of website speed. In other words, this problem is just going to get worse.

The conclusion is clear: the replacement for window.onload must focus on rendering.

what “it” feels like

This new performance metric should take rendering into consideration. It should be more than “first paint”. Instead, it should capture when the above-the-fold content is (mostly) rendered.

I’m aware of two performance metrics that exist today that are focused on rendering. Both are available in WebPagetest. Above-the-fold render time (PDF) was developed at Google. It finds the point at which the page’s content reaches its final rendering, with intelligence to adapt for animated GIFs, streaming video, rotating ads, etc. The other technique, called Speed Index and developed by Pat Meenan, gives the “average time at which visible parts of the page are displayed”. Both of these techniques use a series of screenshots to do their analysis and have the computational complexity that comes with image analysis.
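
For intuition, here is a minimal sketch of the Speed Index calculation itself, assuming the visual-completeness samples have already been extracted from the screenshots (that extraction is the computationally expensive part):

```javascript
// Speed Index: the integral of (1 - visual completeness) over time,
// from navigation start until the page is visually complete.
// samples: [{time: ms, complete: 0..1}, ...] sorted by time, ending
// with a sample where complete === 1.
function speedIndex(samples) {
  var index = 0, lastTime = 0, lastComplete = 0;
  samples.forEach(function (s) {
    index += (s.time - lastTime) * (1 - lastComplete);
    lastTime = s.time;
    lastComplete = s.complete;
  });
  return index; // milliseconds; lower is better
}

// E.g., a page ~90% rendered at 2.0s and visually complete at 5.2s:
// speedIndex([{time: 2000, complete: 0.9}, {time: 5200, complete: 1}])
// returns 2320.
```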

In other words, it’s not feasible to perform these rendering metrics on real user traffic in their current form. That’s important because, in addition to incorporating rendering, this new metric must maintain the attributes mentioned previously that make window.onload so appealing: standard across browsers, measurable by 3rd parties, and measurable for real users.

Another major drawback to window.onload is that it doesn’t work for single page web apps (like Gmail). These web apps only have one window.onload, but typically have several other Ajax-based “page loads” during the user session where some or most of the page content is rewritten. It’s important that this new metric works for Ajax apps.
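
As an illustration of what that could look like, an Ajax app could bracket each soft “page load” with the User Timing API so a measurement tool can pick it up later. The function and mark names below are invented for this sketch:

```javascript
// Bracket an Ajax "page load" with User Timing marks. fetchThread()
// and the mark names are invented for this example.
function openThread(threadId) {
  if (window.performance && performance.mark) {
    performance.mark("thread-load-start");
  }
  fetchThread(threadId, function (html) {  // app-specific Ajax helper
    document.getElementById("main").innerHTML = html;
    if (window.performance && performance.mark) {
      performance.mark("thread-load-end");
      performance.measure("thread-load",
        "thread-load-start", "thread-load-end");
    }
  });
}
```

Note that this still captures when content is inserted, not when it is rendered, so it inherits the same blind spot discussed above.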

ball rolling

I completely understand if you’re frustrated by my lack of implementation specifics. Measuring rendering is complex. The point at which the page is (mostly) rendered is so obvious when flipping through the screenshots in WebPagetest, but writing code that measures that in a consistent, low-impact way is really hard. My officemate pointed me to this thread from the W3C Web Performance Working Group about measuring first paint, which highlights some of the challenges.

To make matters worse, the new metric that I’m discussing is likely much more complex than measuring first paint. I believe we need to measure when the above-the-fold content is (mostly) rendered. What exactly is “above-the-fold”? What is “mostly”?

Another challenge is moving the community away from window.onload. The primary performance metric in popular tools such as WebPagetest, Google Analytics Site Speed, Torbit Insight, SOASTA (LogNormal) mPulse, and my own HTTP Archive is window.onload. I’ve heard that some IT folks even have their bonuses based on the window.onload metrics reported by services like Keynote Systems and Gomez.

It’s going to take time to define, implement, and transition to a better performance metric. But we have to get the ball rolling. Relying on window.onload as the primary performance metric doesn’t necessarily produce a faster user experience. And yet making our websites faster for users is what we’re really after. We need a metric that more accurately tracks our progress toward this ultimate goal.

11 Responses to Moving beyond window.onload()

  1. Three cheers! Completely agree with the spirit of this post. I think this transition is long overdue and super critical to taking the next steps in web perf optimization.

    I’m not sure I agree that rendering is really the right target for this next step, though. I could have a background texture image that fills all the pixels above the fold with “content” but it’s not the content a user cares about.

    When I talk about the shortcomings of `window.onload`, I usually say, “what we really need for an event is ‘meaningfully-interactive'”. And I have tried and tried to conceive of a way for a browser (or 3rd party lib) to automatically determine that reliably for a site.

    I now think what we need to do is have a way (or multiple ways) for a site to signal to the browser (and other 3rd party libs), “hey, I’m ‘meaningfully interactive’ now.”

    This event would ideally be fired by the page’s code (or markup) shortly after DOMContentLoaded fires, but a complex app could delay it longer if there was more stuff it felt it needed to load before a user could have a meaningful experience with the site.

    By contrast, a simple text blog post sort of site could fire it immediately after the text is painted, because for that site, “meaningful interaction” is passive visual reading of the text.

    I kind of envision an optional HTTP header and/or meta-tag that could specify one of a set of preset triggers, like “first-paint”, “DOMContentLoaded”, “onload”, or several other possible choices. If not present, it might default to “onload” for legacy’s sake.

    Many sites would be able to make good use of those preset values to slightly customize how “ready” is defined for them. If such a tag/header was found, the browser could use it and then fire an “InteractionReady” event on the page so that a third-party lib could detect it.

    Or, a more complex app could set the header/value to “none” and then manually fire the “InteractionReady” event itself at whatever point it felt was appropriate.

    The polyfill for this for older browsers is straightforward: feature-test for the event name on the window object, and if it’s not present (older browser), add a window.onload handler that artificially fires a custom DOM event called “InteractionReady”.
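
    A rough sketch of that polyfill, with the caveat that “InteractionReady” and “oninteractionready” are hypothetical names from this comment, not any real standard:

    ```javascript
    // Feature-test for hypothetical native support; otherwise fall back
    // to onload as the "meaningfully interactive" signal.
    (function () {
      if ("oninteractionready" in window) return; // native support exists
      window.addEventListener("load", function () {
        var evt = document.createEvent("Event");
        evt.initEvent("InteractionReady", true, false); // bubbles, not cancelable
        window.dispatchEvent(evt);
      });
    })();
    ```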

    Anyway, this is just an idea of how I’d like to see this kind of thing proceed. I’m sure there’s plenty of details to work out. Hopefully your post gets the ball rolling. :)

  2. Steve – I think everyone agrees that the “onload” metric is not sufficient. It is useful in some cases and completely useless in others. However, it is the one metric you can accurately measure across browsers, and you know what it means for a given page (assuming you know the page’s design and implementation well).
    While understanding rendering and measuring it properly is key, it might not be sufficient for a lot of cases.
    We are all looking for a metric that measures when the page is ready for the end user to perform the tasks at hand. Obviously reading/viewing is the primary task in all cases on the web, but in a lot of cases there are other tasks. In your Gmail inbox example the user needs to view the emails, and then click and view a specific thread. In Amazon the end user might want to add to the cart or search again. The challenge with browsers is that things can be visible before the user can interact with them, and therefore before the user can perform these other “non-viewing” actions. Nothing is more frustrating than waiting for items to become clickable, or for a button to appear.

    Onload sometimes fails at this, and so do the rendering metrics. Therefore, if there is an initiative to better measure “rendering time”, we also need to take a look at “time to interact”.

  3. I have had to address this problem for a purely practical reason: test automation is very hard if you can’t figure out when the page is done.

    To test a page, Selenium has to click on elements. If those elements don’t exist yet, you get a test failure.

    If you have enough tests, you’re guaranteed that at least one of them will click ‘too soon’ and then every test run results in failures that have to be evaluated by hand. I have found this to be exhausting for everyone involved so we try to avoid this.

    What we’ve done is put a DOM element in the page that gets modified when the page has ‘settled’ (see the sketch below). We still occasionally forget to do that and run into test failures, but mostly we can fix them.
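
    Roughly like this, where appReady() stands in for whatever app-specific “settled” hook exists, and the test side shows one possible selenium-webdriver setup:

    ```javascript
    // Page side: flip a sentinel attribute once the app has settled.
    // appReady() is a stand-in for an app-specific hook.
    appReady(function () {
      document.body.setAttribute("data-settled", "true");
    });

    // Test side (selenium-webdriver for Node): poll for the sentinel
    // before clicking on anything. Assumes `driver` is an existing
    // WebDriver instance.
    var webdriver = require("selenium-webdriver");
    driver.wait(function () {
      return driver.findElements(webdriver.By.css("body[data-settled]"))
        .then(function (els) { return els.length > 0; });
    }, 10000);
    ```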

  4. Well said!

    This is the problem with virtually every metric. We want “making our websites faster for users”. But it’s hard to define, and hard to measure. So we settle for something that is easier to measure, but isn’t quite what we really want. And then we cross our fingers and hope there is some correlation between what we measure and what we actually want. Sadly, many metrics degrade into well-intentioned folklore, best practices, and a cargo cult mentality.

    I applaud your efforts! I look forward to some progress toward anything which can accurately indicate “faster for users”, yet works across browsers and is external to the site under test.

  5. It’s worth noting that vendors are avid followers of your work. Keynote has added additional metrics to their product to give approximate timings for user experience:
    http://www.keynote.com/mykeynote/help/components.asp#user_ex

    I haven’t looked at it in any detail, but I believe it is no longer exclusive to Internet Explorer.

  6. Steve,

    Nothing is perfect (therefore everything comes with some type of imperfection). Asking which metric to use is like asking which statistical calculation to use (arithmetic mean versus geometric mean, etc.). If the answer is contextual, then I suggest readyState or onload are still two of the least imperfect metrics available.

    Leo

  7. Steve,

    Kudos for taking the initiative on this. Something worth mentioning for the masses is that you already have page load screenshots on HTTP Archive for those who want to start going down the image analysis route. I’m not sure how easy it is to consume those images in bulk, but they are there nonetheless and give us the ability to identify some of the differences you mentioned (e.g., Gmail vs. Amazon).

    All the best,
    Jared

  8. Hi

    The way I look at this, the best way to overcome such a complex task as determining when a site is ready to use, as far as real humans are concerned, is to use those users and their brains to do all the heavy lifting.

    A metric that determines when users begin interacting with content, relative to the start of the browser’s work (the HTTP GET, AJAX firing, etc.), would be the most useful, and probably fairly easy to implement: JS at the top of the page (yes, I know that is a sin) that watches the mouse/keyboard for user behaviour in the region of the browser display, or in more specific areas of the DOM for AJAX events. (Clearly you’d want to ignore users reading their emails whilst waiting for a page to load.)
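
    Something along these lines, as a sketch; the /rum beacon URL is made up:

    ```javascript
    // Record the first real interaction relative to navigation start.
    // Placed near the top of the page so it doesn't miss early input.
    (function () {
      var start = Date.now(); // or performance.timing.navigationStart
      function onFirst() {
        document.removeEventListener("click", onFirst, true);
        document.removeEventListener("keydown", onFirst, true);
        new Image().src = "/rum?firstInteraction=" + (Date.now() - start);
      }
      document.addEventListener("click", onFirst, true);
      document.addEventListener("keydown", onFirst, true);
    })();
    ```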

    Clearly this is only of use in systems where RUM is possible (not dev, test, CI etc).

    Paul

  9. So do all those third party scripts really slow a site down?

    After looking at my site recently I was horrified to discover that third-party services like Outbrain, Disqus and ShareThis were adding something like 1.5MB to my pages.

    I mean my content only takes up 50k, which would give me a code-to-text ratio of over 99%.

    PS Your spam question is very confusing.

    That can’t be good, or does it not matter anymore?

  10. Hmm, no way to edit my comment. above. The PS should be at the very end for the whole comment to read properly.

  11. Surely there are a whole variety of internal browser events, never hitherto exposed to page content, that we can get browser vendors to allow us to use. It shouldn’t be too hard to keep the finite certainty that comes with quantitative metrics. We need events such as CSS rendering completed (pre-animations) and images rendered (as opposed to just loaded, thus factoring in progressive loading). Browsers already have scroll methods, so they know when an element is outside the viewport. That information just needs to be refined so that we can start looking at above-the-fold timing.
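
    For example, the building blocks for an above-the-fold check already exist; a sketch:

    ```javascript
    // Is any part of the element within the initial viewport?
    function isAboveTheFold(el) {
      var rect = el.getBoundingClientRect();
      var viewportHeight = window.innerHeight ||
        document.documentElement.clientHeight;
      return rect.top < viewportHeight && rect.bottom > 0;
    }
    ```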

    As usual, the technicalities may not be that hard; it’s the human factor of getting all the browser vendors, standards committees and developer communities to agree! That process continues to take too long, even if it is getting faster.