State of Performance 2008

December 17, 2008 8:08 pm | 3 Comments

My Stanford class, CS193H High Performance Web Sites, ended last week. The last lecture was called “State of Performance”. This was my review of what happened in 2008 with regard to web performance, and my predictions and hopes for what we’ll see in 2009. You can see the slides (ppt, GDoc), but I wanted to capture the content here with more text. Let’s start with a look back at 2008.


Year of the Browser
This was the year of the browser. Browser vendors put users first, competing to make the web experience faster with each release. JavaScript got the most attention. WebKit released a new interpreter called SquirrelFish. Mozilla came out with TraceMonkey – SpiderMonkey with Trace Trees added for Just-In-Time (JIT) compilation. Google released Chrome with the new V8 JavaScript engine. And new benchmarks have emerged to put these JavaScript engines through their paces: SunSpider, V8 Benchmark, and Dromaeo.

In addition to JavaScript improvements, browser performance was boosted in terms of how web pages are loaded. IE8 Beta came out with parallel script loading, where the browser continues parsing the HTML document while downloading external scripts, rather than blocking all progress like most other browsers. WebKit, starting with version 526, has a similar feature, as does Chrome 0.5 and the most recent Firefox nightlies (Minefield 3.1). On a similar note, IE8 and Firefox 3 both increased the number of connections opened per server from 2 to 6. (Safari and Opera were already ahead with 4 connections per server.) (See my original blog posts for more information: IE8 speeds things up, Roundup on Parallel Connections, UA Profiler and Google Chrome, and Firefox 3.1: raising the bar.)

Velocity 2008, the first conference focused on web performance and operations, launched June 23-24. Jesse Robbins and I served as co-chairs. This conference, organized by O’Reilly, was densely packed – both in terms of content and people (it sold out!). Speakers included John Allspaw (Flickr), Luiz Barroso (Google), Artur Bergman (Wikia), Paul Colton (Aptana), Stoyan Stefanov (Yahoo!), Mandi Walls (AOL), and representatives from the IE and Firefox teams. Velocity 2009 is slated for June 22-24 in San Jose and we’re currently accepting speaker proposals.
Improving web performance starts with measuring performance. Measurements can come from a test lab using tools like Selenium and Watir. To get measurements from geographically dispersed locations, scripted tests are possible through services like Keynote, Gomez, Webmetrics, and Pingdom. But the best data comes from measuring real user traffic. The basic idea is to measure page load times using JavaScript in the page and beacon back the results. Many web companies have rolled their own versions of this instrumentation. It isn’t that complex to build from scratch, but there are a few gotchas, and it’s inefficient for everyone to reinvent the wheel. That’s where Jiffy comes in. Scott Ruthfield and his colleagues released Jiffy at Velocity 2008. It’s Open Source and easy to use. If you don’t currently have real user load time instrumentation, take a look at Jiffy.
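The basic pattern is simple enough to sketch. Here’s a minimal, illustrative version of this kind of instrumentation – the beacon URL and parameter names are made up for the example, and Jiffy’s actual API differs: record a start time as early in the page as possible, then beacon the elapsed time back via an image request once the page loads.

```javascript
// Record the start time as early in the page as possible.
var t_start = new Date().getTime();

// Build the beacon URL. The endpoint (/jiffy/beacon) and parameters
// are hypothetical -- substitute your own server-side collector.
function buildBeaconUrl(base, page, elapsed) {
  return base + "?page=" + encodeURIComponent(page) + "&t=" + elapsed;
}

function sendBeacon() {
  var elapsed = new Date().getTime() - t_start;
  // A 1x1 image request is the classic way to beacon data back
  // without blocking the page.
  var img = new Image();
  img.src = buildBeaconUrl("/jiffy/beacon", location.pathname, elapsed);
}

// Fire the beacon once the page has finished loading.
if (typeof window !== "undefined" && window.addEventListener) {
  window.addEventListener("load", sendBeacon, false);
} else if (typeof window !== "undefined" && window.attachEvent) {
  window.attachEvent("onload", sendBeacon); // IE6/7 event model
}
```

Real implementations add more nuance (unload handling, multiple marks per page), which is exactly why picking up a shared library like Jiffy beats rolling your own.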
JavaScript: The Good Parts
Moving forward, the key to fast web pages is going to be the quality and performance of JavaScript. JavaScript luminary Doug Crockford helps light the way with his book JavaScript: The Good Parts, published by O’Reilly. More is needed! We need books and guidelines for performance best practices and design patterns focused on JavaScript. But Doug’s book is a foundation on which to build.
My former colleagues from the Yahoo! Exceptional Performance team, Stoyan Stefanov and Nicole Sullivan, launched Smush.it. In addition to a great name and a beautiful web site, Smush.it packs some powerful functionality. It analyzes the images on a web page and calculates the potential savings from various optimizations. Not only that, it creates the optimized versions for download. Try it now!
Google Ajax Libraries API
JavaScript frameworks are powerful and widely used. Dion Almaer and the folks at Google saw an opportunity to help the development community by launching the Ajax Libraries API. This service hosts popular frameworks including jQuery, Prototype, script.aculo.us, MooTools, Dojo, and YUI. Web sites using any of these frameworks can reference the copy hosted on Google’s worldwide server network and gain the benefit of faster downloads and cross-site caching. (Original blog post: Google AJAX Libraries API.)
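Using the service is a one-line change: instead of serving your own copy of a framework, reference Google’s hosted copy. The jQuery version shown here is just an example:

```html
<!-- Load jQuery from Google's worldwide server network instead of your own.
     Any visitor whose browser already cached this URL from another site
     skips the download entirely. -->
<script type="text/javascript"
        src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
```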
UA Profiler
Okay, I’ll get a plug for my own work in here. UA Profiler looks at browser characteristics that make pages load faster, such as downloading scripts without blocking, max number of open connections, and support for “data:” URLs. The tests run automatically – all you have to do is navigate to the test page from any browser with JavaScript support. The results are available to everyone, regardless of whether you’ve run the tests. I’ve been pleased with the interest in UA Profiler. In some situations it has identified browser regressions that developers have caught and fixed.


Here’s what I think and hope we’ll see in 2009 for web performance.

Visibility into the Browser
Packet sniffers and HTTP proxies (like HttpWatch, Fiddler, and Wireshark) and tools like YSlow allow developers to investigate many of the “old school” performance issues: compression, Expires headers, redirects, etc. In order to optimize Web 2.0 apps, developers need to see the impact of JavaScript and CSS as the page loads, and gather stats on CPU load and memory consumption. Eric Lawrence and Christian Stockwell’s slides from Velocity 2008 give a hint of what’s possible. Now we need developer tools that show this information.
Think “Web 2.0”
Web 2.0 pages are often developed with a Web 1.0 mentality. In Web 1.0, the amount of CSS, JavaScript, and DOM elements on your page was more tolerable because it would be cleared away with the user’s next action. That’s not the case in Web 2.0. Web 2.0 apps can persist for minutes or even hours. If there are a lot of CSS selectors that have to be parsed with each repaint, that pain is felt again and again. If we include the JavaScript for all possible user actions, the payload bloats, increasing memory consumption and garbage collection. Dynamically adding elements to the DOM slows down our CSS (more selector matching) and our JavaScript (think getElementsByTagName). As developers, we need a new way of thinking about the shape and behavior of our web apps, one that addresses the long page persistence that comes with Web 2.0.
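One concrete habit that helps in long-lived pages is event delegation: attach a single handler to a container rather than one handler per element, so handler count and memory stay flat as rows are added dynamically. A sketch (the function and callback names are mine, purely illustrative):

```javascript
// Attach one click handler to a container instead of one per row.
// New rows added to the DOM later are handled automatically, with
// no additional handlers or memory.
function delegateClicks(container, onRowClick) {
  container.onclick = function (e) {
    e = e || window.event;                 // IE event model fallback
    var target = e.target || e.srcElement;
    // Walk up from the click target looking for a row we care about.
    while (target && target !== container) {
      if (target.tagName === "TR") {
        onRowClick(target);
        return;
      }
      target = target.parentNode;
    }
  };
}
```

In a page that lives for an hour and accumulates thousands of rows, the difference between one handler and thousands is exactly the kind of Web 2.0 cost that a Web 1.0 mentality never had to account for.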
Speed as a Feature
In my second month at Google I was ecstatic to see the announcement that landing page load time was being incorporated into the quality score used by AdWords. I think we’ll see the speed of web pages become more and more important to users, more important to aggregators and vendors, and subsequently more important to web publishers.
Performance Standards
As the industry becomes more focused on web performance, a need for industry standards is going to arise. Many companies, tools, and services measure “response time”, but it’s unclear that they’re all measuring the same thing. Benchmarks exist for the browser JavaScript engines, but benchmarks are needed for other aspects of browser performance, like CSS and DOM. And current benchmarks are fairly theoretical and extreme. In addition, test suites are needed that gather measurements under more real world conditions. Standard libraries for measuring performance are needed, a la Jiffy, as well as standard formats for saving and exchanging performance data.
JavaScript Help
With the emergence of Web 2.0, JavaScript is the future. The Catch-22 is that JavaScript is one of the biggest performance problems in a web page. Help is needed so JavaScript-heavy web pages can still be fast. One specific tool that’s needed is something that takes a monolithic JavaScript payload and splits it into smaller modules, with the necessary logic to know what is needed when. Doloto is a project from Microsoft Research that tackles this problem, but it’s not available publicly. Razor Optimizer attacks this problem and is a good first step, but incorporating this functionality needs to be less intrusive.

Browsers also need to make it easier for developers to load JavaScript with less of a performance impact. I’d like to see two new attributes for the SCRIPT tag: DEFER and POSTONLOAD. DEFER isn’t really “new” – IE has had the DEFER attribute since IE 5.5. DEFER is part of the HTML 4.0 specification, and it has been added to Firefox 3.1. One problem is you can’t use DEFER with scripts that utilize document.write, and yet this is critical for mitigating the performance impact of ads. Opera has shown that it’s possible to have deferred scripts still support document.write. This is the model that all browsers should follow for implementing DEFER. The POSTONLOAD attribute would tell the browser to load this script after all other resources have finished downloading, allowing the user to see other critical content more quickly. Developers can work around these issues with more code, but we’ll see wider adoption and more fast pages if browsers can help do the heavy lifting.
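Until browsers support something like POSTONLOAD natively, the behavior can be approximated in script by appending a script element after the onload event fires. A minimal sketch – the function name and URL are illustrative, and this is the “more code” workaround I’d like browsers to make unnecessary:

```javascript
// Emulate the proposed POSTONLOAD behavior: download a script only
// after the page's other resources have finished loading.
function loadScriptPostOnload(url) {
  function appendScript() {
    var script = document.createElement("script");
    script.src = url;
    document.getElementsByTagName("head")[0].appendChild(script);
  }
  if (window.addEventListener) {
    window.addEventListener("load", appendScript, false);
  } else if (window.attachEvent) {
    window.attachEvent("onload", appendScript);  // IE event model
  }
}

// Usage: defer a non-critical script until after onload.
// loadScriptPostOnload("/js/extras.js");
```

A native POSTONLOAD attribute would collapse all of this into a single declarative hint in the page.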

Focus on Other Platforms
Most of my work has focused on the desktop browser. Certainly, more best practices and tools are needed for the mobile space. But to stay ahead of where the web is going, we need to analyze the user experience in other settings, including automobiles, airplanes, mass transit, and the developing world. Otherwise, we’ll be playing catch-up after those environments take off.
Fast by Default
I enjoy Tom Hanks’ line in A League of Their Own when Geena Davis (“Dottie”) says playing ball got too hard: “It’s supposed to be hard. If it wasn’t hard, everyone would do it. The hard… is what makes it great.” I enjoy a challenge and tackling a hard problem. But doggone it, it’s just too hard to build a high performance web site. The bar shouldn’t be this high. Apache needs to turn off ETags by default. WordPress needs to cache comments better. Browsers need to cache SSL responses when the HTTP headers say to. Squid needs to support HTTP/1.1. The world’s web developers shouldn’t have to write code that anticipates and fixes all of these issues.
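To be fair, some of these defaults are easy to fix once you know about them. For example, disabling ETags in Apache is a one-line configuration change, and far-future Expires headers take only a couple more lines (the directives below assume mod_expires is enabled; the file types shown are just examples):

```apache
# Turn off ETags -- inode-based ETags differ across load-balanced
# servers and defeat conditional GET caching.
FileETag None

# Far-future Expires headers for static assets (requires mod_expires).
ExpiresActive On
ExpiresByType image/gif "access plus 10 years"
ExpiresByType text/css  "access plus 10 years"
```

The point stands, though: these should be the defaults, not tribal knowledge every web developer has to rediscover.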

Good examples of where we can go are Runtime Page Optimizer and the Strangeloop appliances. RPO is an IIS module (and soon an Apache module) that automatically fixes web pages as they leave the server: it minifies and combines JavaScript and stylesheets, enables CSS spriting, inlines CSS images, and loads scripts asynchronously. (Original blog post: Runtime Page Optimizer.) The web appliance from Strangeloop does similar realtime fixes to improve caching and reduce payload size. Imagine combining these tools with Smush.it and Doloto to automatically improve the performance of web pages. Companies like Yahoo! and Google might need more customized solutions, but for the other 99% of developers out there, it needs to be easier to make pages fast by default.

This is a long post, but I still had to leave out a lot of performance highlights from 2008 and predictions for what lies ahead. I look forward to hearing your comments.

3 Responses to State of Performance 2008

  1. Hi,

    Great summary of all the good stuff happening.
    Have you tried to work directly with the browser vendors to integrate your tools into their development process? Or asked for new developer tools regarding performance?

    Have you also submitted bugs to the different projects (like Apache, WordPress, Squid, etc.)? If yes, what is the status of these bugs?

    I’d like to help with these things so that performance could be improved everywhere.

  2. Steve,

    Do you feel that the recent dramatic improvements in JavaScript runtimes push out the need to focus on some of the things you mention as becoming more important next year? Ideally we’d be tackling performance bottlenecks from both ends of the spectrum, but I’m betting that if JavaScript execution gets to be magnitudes faster out of the box with the newer browsers, and the complexity of applications grows linearly (which is maybe the key question), developers will end up spending less time focusing on optimizing low-level CSS and JavaScript performance issues, and have more time to focus on functionality (which feels like a good thing). Not to say that it won’t be necessary, but it feels like the need may be pushed out at least another year for most businesses?

  3. One additional thought…an area that you briefly mentioned that should see a huge increase in attention is the mobile performance space. Up to now the mobile web world hasn’t been a big source of revenue for businesses. With the iPhone and its knockoffs, there’s no doubt some serious business is going to be done using mobile apps in the coming months, and that’s when companies are going to start getting really concerned about performance and reliability.

    I don’t know of a single tool to help you optimize performance in a mobile environment. It gets even trickier if you’re using native apps on the phone…how the heck can you measure and track the performance of those across the various networks and platforms?

    So many problems…but so many opportunities to build tools to solve these problems. Where there are problems, there is money to be made.