Coupling asynchronous scripts

December 27, 2008 11:32 pm | 14 Comments

This post is based on a chapter from Even Faster Web Sites, the follow-up to High Performance Web Sites. Posts in this series include: chapters and contributing authors, Splitting the Initial Payload, Loading Scripts Without Blocking, Coupling Asynchronous Scripts, Positioning Inline Scripts, Sharding Dominant Domains, Flushing the Document Early, Using Iframes Sparingly, and Simplifying CSS Selectors.


Much of my recent work has been around loading external scripts asynchronously. When scripts are loaded the normal way (<script src="...">), they block all other downloads in the page, and any elements below the script are blocked from rendering. This can be seen in the Put Scripts at the Bottom example. Loading scripts asynchronously avoids this blocking behavior, resulting in a faster-loading page.

One issue with async script loading is dealing with inline scripts that use symbols defined in the external script. If the external script is loaded asynchronously without regard to the inline code, race conditions may result in undefined symbol errors. It’s necessary to ensure that the async external script and the inline script are coupled in such a way that the inlined code isn’t executed until after the async script finishes loading.

There are a few ways to couple async scripts with inline scripts.

  • window’s onload – The inlined code can be tied to the window’s onload event. This is pretty simple to implement, but the inlined code won’t execute as early as it could.
  • script’s onreadystatechange – The inlined code can be tied to the script’s onreadystatechange and onload events.  (You need to implement both to cover all popular browsers.) This code is lengthier and more complex, but ensures that the inlined code is called as soon as the script finishes loading.
  • hardcoded callback – The external script can be modified to explicitly kickoff the inlined script through a callback function. This is fine if the external script and inline script are being developed by the same team, but doesn’t provide the flexibility needed to couple 3rd party scripts with inlined code.
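As a sketch of the second technique, the inline code can be wrapped in a callback that fires on the script’s onload and onreadystatechange events. The loadScript helper below is my own illustration, not code from sorttable or UA Profiler:

```javascript
// Couple inline code to an async script via onload/onreadystatechange.
// IE fires onreadystatechange; most other browsers fire onload.
function loadScript(url, callback) {
    var script = document.createElement('script');
    script.src = url;
    var done = false;
    // The done flag keeps the callback from running twice in
    // browsers that fire both events for the same script.
    script.onload = script.onreadystatechange = function() {
        if (!done && (!this.readyState ||
                this.readyState === 'loaded' ||
                this.readyState === 'complete')) {
            done = true;
            callback();
        }
    };
    document.getElementsByTagName('head')[0].appendChild(script);
}

// Usage: the inline code runs only after the external script arrives.
// loadScript('sorttable-async.js', function() { sorttable.init(); });
```

This is the lengthier, more complex code mentioned above, but the callback fires as soon as the script finishes loading rather than waiting for window’s onload.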

In this blog post I talk about two parallel (no pun intended) issues: how async scripts make the page load faster, and how async scripts and inline scripts can be coupled using a variation of John Resig’s Degrading Script Tags pattern. I illustrate these through a recent project I completed to make the UA Profiler results sortable. I did this using Stuart Langridge’s sorttable script. It took me ~5 minutes to add his script to my page and make the results table sortable. With a little more work I was able to speed up the page by more than 30% by using the techniques of async script loading and coupling async scripts.

Normal Script Tags

I initially added Stuart Langridge’s sorttable script to UA Profiler in the typical way (<script src="...">), as seen in the Normal Script Tag implementation. The HTTP waterfall chart is shown in Figure 1.

Figure 1: Normal Script Tags HTTP waterfall chart

The table sorting worked, but I wasn’t happy with how it made the page slower. In Figure 1 we see how my version of the script (called “sorttable-async.js”) blocks the only other HTTP request in the page (“arrow-right-20x9.gif”), which makes the page load more slowly. These waterfall charts were captured using Firebug 1.3 beta. This new version of Firebug draws a vertical red line where the onload event occurs. (The vertical blue line is the domcontentloaded event.) For this Normal Script Tag version, onload fires at 487 ms.

Asynchronous Script Loading

The “sorttable-async.js” script isn’t necessary for the initial rendering of the page – sorting columns is only possible after the table has been rendered. This situation (external scripts that aren’t used for initial rendering) is a prime candidate for asynchronous script loading. The Asynchronous Script Loading implementation loads the script asynchronously using the Script DOM Element approach:

var script = document.createElement('script');
script.src = "sorttable-async.js";
script.text = "sorttable.init()"; // this is explained in the next section
document.getElementsByTagName('head')[0].appendChild(script);

The HTTP waterfall chart for this Asynchronous Script Loading implementation is shown in Figure 2. Notice how using an asynchronous loading technique avoids the blocking behavior – “sorttable-async.js” and “arrow-right-20x9.gif” are loaded in parallel. This pulls in the onload time to 429 ms.

Figure 2: Asynchronous Script Loading HTTP waterfall chart

John Resig’s Degrading Script Tags Pattern

The Asynchronous Script Loading implementation makes the page load faster, but there is still one area for improvement. The default sorttable implementation bootstraps itself by attaching “sorttable.init()” to the onload handler. A performance improvement would be to call “sorttable.init()” in an inline script to bootstrap the code as soon as the external script was done loading. In this case, the “API” I’m using is just one function, but I wanted to try a pattern that would be flexible enough to support a more complex situation where the module couldn’t assume what API was going to be used.

I previously listed various ways that an inline script can be coupled with an asynchronously loaded external script: window’s onload, script’s onreadystatechange, and hardcoded callback. Instead, I used a technique derived from John Resig’s Degrading Script Tags pattern. John describes how to couple an inline script with an external script, like this:

<script src="jquery.js">
jQuery("p").addClass("pretty");
</script>

The way his implementation works is that the inlined code is only executed after the external script is done loading. There are several benefits to coupling inline and external scripts this way:

  • simpler – one script tag instead of two
  • clearer – the inlined code’s dependency on the external script is more obvious
  • safer – if the external script fails to load, the inlined code is not executed, avoiding undefined symbol errors

It’s also a great pattern to use when the external script is loaded asynchronously. To use this technique, I had to change both the inlined code and the external script. For the inlined code, I added the third line shown above that sets the script.text property. To complete the coupling, I added this code to the end of “sorttable-async.js”:

var scripts = document.getElementsByTagName("script");
var cntr = scripts.length;
while ( cntr ) {
    var curScript = scripts[cntr-1];
    if ( -1 != curScript.src.indexOf('sorttable-async.js') ) {
        eval( curScript.innerHTML );
        break;
    }
    cntr--;
}

This code iterates over all scripts in the page until it finds the script block that loaded itself (in this case, the script with src containing “sorttable-async.js”). It then evals the code that was added to the script (in this case, “sorttable.init()”) and thus bootstraps itself. (A side note: although the line of code was added using the script’s text property, here it’s referenced using the innerHTML property. This is necessary to make it work across browsers.) With this optimization, the external script loads without blocking other resources, and the inlined code is executed as soon as possible.

Lazy Loading

The load time of the page can be improved even more by lazyloading this script (loading it dynamically as part of the onload handler). The code behind this Lazyload version just wraps the previous code within the onload handler:

window.onload = function() {
    var script = document.createElement('script');
    script.src = "sorttable-async.js";
    script.text = "sorttable.init()";
    document.getElementsByTagName('head')[0].appendChild(script);
};

This situation absolutely requires this script coupling technique. The previous bootstrapping code that called “sorttable.init()” in the onload handler won’t be called here because the onload event has already passed. The benefit of lazyloading the code is that the onload time occurs even sooner, as shown in Figure 3. The onload event, indicated by the vertical red line, occurs at ~320 ms.

Figure 3: Lazyloading HTTP waterfall chart


Loading scripts asynchronously and lazyloading scripts improve page load times by avoiding the blocking behavior that scripts typically cause. This is shown in the different versions of adding sorttable to UA Profiler:

  • Normal Script Tags – 487 ms
  • Asynchronous Script Loading – 429 ms
  • Lazyloading – ~320 ms

The times above indicate when the onload event occurred. For other web apps, improving when the asynchronously loaded functionality is attached might be a higher priority. In that case, the Asynchronous Script Loading version is slightly better (~400 ms versus 417 ms). In both cases, being able to couple inline scripts with the external script is a necessity. The technique shown here is a way to do that while also improving page load times.


(sharing) OLPC XO

December 26, 2008 1:05 pm | 4 Comments

For Christmas, I bought an XO laptop through the One Laptop Per Child program for two of my girls to share. (The irony of getting a One Laptop Per Child XO for the two of them to share just hit me ;-) My oldest daughter has her own MacBook, so I wanted a low priced alternative that my two younger girls could call their own. And the charitable aspect of OLPC was a motivator, especially at this time of year. The way it works is you pay for two laptops; one comes to you and the other goes to a child in a developing country. Since my employer, Google, does charitable matching, my purchase resulted in two laptops being donated.

It was incredibly easy. I purchased the XO on Amazon for $399. Our XO arrived in plenty of time for Christmas. I received a receipt from OLPC for my $200 tax deduction. And I submitted the matching gift form at work. Google is a big supporter of OLPC, so that organization was already in the matching system making it even easier to complete the process.

We’ve been using it for a day, and so far the girls and I love it. They really like having their own laptop. They’re familiar with web browsers, so they were able to get immediate benefit. Today we’ll start exploring the other applications. I like that the XO is built for the tough treatment kids can deliver. It was easy to get set up. And my girls immediately commented that it was much easier to use than our other computers.

The first thing I did was run it through the UA Profiler test suite to see how performant the browser is. OLPC’s browser scored 7/11, the same as IE8 and Firefox 2. Not bad. The OLPC browser is based on XULRunner, and therefore uses the Gecko rendering engine, as does Firefox. I was surprised that mine was the first OLPC test in the UA Profiler results. OLPC’s browser is missing parallel script loading, and only opens 2 connections per server.

I’m looking forward to learning more about the XO. I’m pleased with their Support Wiki. It had the exact info I needed to install the latest Flash player. I’m downloading various manuals now, and can’t wait to explore the other applications. I have to keep reminding myself: this was a gift for them. Maybe I can use it after they’re asleep.


UA Profiler improvements

December 19, 2008 11:28 pm | 1 Comment

UA Profiler is the tool I released 3 months ago that tracks the performance traits of various browsers. It’s a community-driven project – as more people use it, the data has more coverage and accuracy. So far, 7000 people have run 10,000 tests across 150 different browser versions (2500 unique User Agents). Over the past week (since my Stanford class ended), I’ve been adding some requested improvements.


Previously, I had one label for a browser. For example, Firefox 3.0 and 3.1 results were all lumped under “Firefox 3”. This week I added the ability to drill down to see more detailed data. The results can be viewed in five ways:

  1. Top Browsers – The most popular browsers as well as major new versions on the horizon.
  2. Browser Families – The full list of unique browser names: Android, Avant, Camino, Chrome, etc.
  3. Major Versions – Grouped by first version number: Firefox 2, Firefox 3, IE 6, IE7, etc.
  4. Minor Versions – Grouped by first and second version numbers: Firefox 3.0, Firefox 3.1, Chrome 0.2, Chrome 0.3, etc.
  5. All Versions – At most I save three levels of version numbers. Here you can see Firefox 3.0.1, Firefox 3.0.2, Firefox 3.0.3, etc.
hiding sparse data
The result tables grew lengthy due to unusual User Agent strings with atypical version numbers. These might be the result of nightly builds or manual tweaking of the User Agent. Now, I only show browsers tested by at least two different people a total of four or more times. If you want to see all browsers, regardless of the amount of testing, check “show sparse results” at the top.
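The sparse-data rule is easy to express as a filter. This sketch uses invented field names (uniqueTesters, totalTests), not UA Profiler’s actual code:

```javascript
// Hypothetical sketch of the sparse-data rule described above: keep a
// browser only if at least two different people tested it, for a total
// of four or more tests, unless "show sparse results" is checked.
function filterSparse(browsers, showSparse) {
    if (showSparse) {
        return browsers;
    }
    return browsers.filter(function(b) {
        return b.uniqueTesters >= 2 && b.totalTests >= 4;
    });
}
```

Something along these lines hides the nightly builds and hand-tweaked User Agents that only one person has ever run, while keeping the full list one checkbox away.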
individual tests
Several people asked to see the individual test results, that is, each test that was run for a certain browser. There were several motivations: Was there much variation for test X? What were the exact User Agent strings that were matched to this browser? When were the tests done (because that problem was fixed on such-and-such a date)? When looking at a results table, clicking on the Browser name will open a new table that shows the results for each test under that browser.
Once I sat down to do it, it took me ~5 minutes to make the results table sortable using Stuart Langridge’s sorttable. Now you can sort to your heart’s content. (This weekend I’ll write a post about how I made his code work when loaded asynchronously using a variation of John Resig’s Degrading Script Tags pattern.)

UA Profiler has been quite successful, gathering a consistent amount of testing each day. I especially enjoy seeing people running nightly builds against it. It’s fun to look at the individual tests and get some visibility into how a browser’s code base evolves. For example, looking at the details for Chrome 1.0, we see that Chrome 1.0.154 failed the “Parallel Scripts” test, but Chrome 1.0.155 passed. Looking at the User Agent strings we see that Chrome 1.0.154 was built using WebKit 525, whereas Chrome 1.0.155 upgraded to WebKit 528. The upgrade in WebKit version was the key to attaining this important performance trait.

I also know of at least one case of a browser regression that the development team fixed because it was flagged by UA Profiler. This is an amazing side effect of tracking these performance traits – actually helping browser teams improve the performance of their browsers, the browsers that you and I use every day. My next task is to improve the tests in UA Profiler. I’ll work on that. Your job is to keep running your favorite browser (and mobile web device!) through the UA Profiler test suite to highlight what’s being done right, and what more is needed to make our web experience fast by default.


State of Performance 2008

December 17, 2008 8:08 pm | 3 Comments

My Stanford class, CS193H High Performance Web Sites, ended last week. The last lecture was called “State of Performance”. This was my review of what happened in 2008 with regard to web performance, and my predictions and hopes for what we’ll see in 2009. You can see the slides (ppt, GDoc), but I wanted to capture the content here with more text. Let’s start with a look back at 2008.


Year of the Browser
This was the year of the browser. Browser vendors have put users first, competing to make the web experience faster with each release. JavaScript got the most attention. WebKit released a new interpreter called SquirrelFish. Mozilla came out with TraceMonkey – SpiderMonkey with Trace Trees added for Just-In-Time (JIT) compilation. Google released Chrome with the new V8 JavaScript engine. And new benchmarks have emerged to put these new JavaScript engines through their paces: SunSpider, V8 Benchmark, and Dromaeo.

In addition to JavaScript improvements, browser performance was boosted in terms of how web pages are loaded. IE8 Beta came out with parallel script loading, where the browser continues parsing the HTML document while downloading external scripts, rather than blocking all progress like most other browsers. WebKit, starting with version 526, has a similar feature, as does Chrome 0.5 and the most recent Firefox nightlies (Minefield 3.1). On a similar note, IE8 and Firefox 3 both increased the number of connections opened per server from 2 to 6. (Safari and Opera were already ahead with 4 connections per server.) (See my original blog posts for more information: IE8 speeds things up, Roundup on Parallel Connections, UA Profiler and Google Chrome, and Firefox 3.1: raising the bar.)

Velocity
Velocity 2008, the first conference focused on web performance and operations, launched June 23-24. Jesse Robbins and I served as co-chairs. This conference, organized by O’Reilly, was densely packed – both in terms of content and people (it sold out!). Speakers included John Allspaw (Flickr), Luiz Barroso (Google), Artur Bergman (Wikia), Paul Colton (Aptana), Stoyan Stefanov (Yahoo!), Mandi Walls (AOL), and representatives from the IE and Firefox teams. Velocity 2009 is slated for June 22-24 in San Jose and we’re currently accepting speaker proposals.
Jiffy
Improving web performance starts with measuring performance. Measurements can come from a test lab using tools like Selenium and Watir. To get measurements from geographically dispersed locations, scripted tests are possible through services like Keynote, Gomez, Webmetrics, and Pingdom. But the best data comes from measuring real user traffic. The basic idea is to measure page load times using JavaScript in the page and beacon back the results. Many web companies have rolled their own versions of this instrumentation. It isn’t that complex to build from scratch, but there are a few gotchas and it’s inefficient for everyone to reinvent the wheel. That’s where Jiffy comes in. Scott Ruthfield and folks from released Jiffy at Velocity 2008. It’s Open Source and easy to use. If you don’t currently have real user load time instrumentation, take a look at Jiffy.
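The roll-your-own instrumentation described above can be as simple as a timestamp plus an image beacon. Here is a minimal sketch with an invented /beacon endpoint; it is not Jiffy’s actual API:

```javascript
// Record a start time as early in the page as possible, then beacon
// the elapsed load time back to the server with an image request.
var startTime = new Date().getTime(); // place this at the top of the HTML

function sendLoadTimeBeacon(beaconUrl) {
    var loadTime = new Date().getTime() - startTime;
    // A 1x1 image request carries the measurement; no response needed.
    var img = new Image();
    img.src = beaconUrl + '?t=' + loadTime;
    return loadTime;
}

// Usage: window.onload = function() { sendLoadTimeBeacon('/beacon'); };
```

The gotchas mentioned above are real, though – caching of the beacon, when exactly to start the clock, and not skewing the very measurement you’re taking – which is a good argument for using Jiffy rather than rolling your own.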
JavaScript: The Good Parts
Moving forward, the key to fast web pages is going to be the quality and performance of JavaScript. JavaScript luminary Doug Crockford helps light the way with his book JavaScript: The Good Parts, published by O’Reilly. More is needed! We need books and guidelines for performance best practices and design patterns focused on JavaScript. But Doug’s book is a foundation on which to build.
My former colleagues from the Yahoo! Exceptional Performance team, Stoyan Stefanov and Nicole Sullivan, launched In addition to a great name and beautiful web site, packs some powerful functionality. It analyzes the images on a web page and calculates potential savings from various optimizations. Not only that, it creates the optimized versions for download. Try it now!
Google Ajax Libraries API
JavaScript frameworks are powerful and widely used. Dion Almaer and the folks at Google saw an opportunity to help the development community by launching the Ajax Libraries API. This service hosts popular frameworks including jQuery, Prototype,, MooTools, Dojo, and YUI. Web sites using any of these frameworks can reference the copy hosted on Google’s worldwide server network and gain the benefit of faster downloads and cross-site caching. (Original blog post: Google AJAX Libraries API.)
UA Profiler
Okay, I’ll get a plug for my own work in here. UA Profiler looks at browser characteristics that make pages load faster, such as downloading scripts without blocking, max number of open connections, and support for “data:” URLs. The tests run automatically – all you have to do is navigate to the test page from any browser with JavaScript support. The results are available to everyone, regardless of whether you’ve run the tests. I’ve been pleased with the interest in UA Profiler. In some situations it has identified browser regressions that developers have caught and fixed.


Here’s what I think and hope we’ll see in 2009 for web performance.

Visibility into the Browser
Packet sniffers (like HTTPWatch, Fiddler, and Wireshark) and tools like YSlow allow developers to investigate many of the “old school” performance issues: compression, Expires headers, redirects, etc. In order to optimize Web 2.0 apps, developers need to see the impact of JavaScript and CSS as the page loads, and gather stats on CPU load and memory consumption. Eric Lawrence and Christian Stockwell’s slides from Velocity 2008 give a hint of what’s possible. Now we need developer tools that show this information.
Think “Web 2.0”
Web 2.0 pages are often developed with a Web 1.0 mentality. In Web 1.0, the amount of CSS, JavaScript, and DOM elements on your page was more tolerable because it would be cleared away with the user’s next action. That’s not the case in Web 2.0. Web 2.0 apps can persist for minutes or even hours. If there are a lot of CSS selectors that have to be matched with each repaint, that pain is felt again and again. If we include the JavaScript for all possible user actions, the JavaScript payload bloats and increases memory consumption and garbage collection. Dynamically adding elements to the DOM slows down our CSS (more selector matching) and our JavaScript (think getElementsByTagName). As developers, we need a new way of thinking about the shape and behavior of our web apps, one that addresses the long page persistence that comes with Web 2.0.
Speed as a Feature
In my second month at Google I was ecstatic to see the announcement that landing page load time was being incorporated into the quality score used by AdWords. I think we’ll see more and more that the speed of web pages will become more important to users, more important to aggregators and vendors, and subsequently more important to web publishers.
Performance Standards
As the industry becomes more focused on web performance, a need for industry standards is going to arise. Many companies, tools, and services measure “response time”, but it’s unclear that they’re all measuring the same thing. Benchmarks exist for the browser JavaScript engines, but benchmarks are needed for other aspects of browser performance, like CSS and DOM. And current benchmarks are fairly theoretical and extreme. In addition, test suites are needed that gather measurements under more real world conditions. Standard libraries for measuring performance are needed, a la Jiffy, as well as standard formats for saving and exchanging performance data.
JavaScript Help
With the emergence of Web 2.0, JavaScript is the future. The Catch-22 is that JavaScript is one of the biggest performance problems in a web page. Help is needed so JavaScript-heavy web pages can still be fast. One specific tool that’s needed is something that takes a monolithic JavaScript payload and splits it into smaller modules, with the necessary logic to know what is needed when. Doloto is a project from Microsoft Research that tackles this problem, but it’s not available publicly. Razor Optimizer attacks this problem and is a good first step, but it needs to be less intrusive to incorporate this functionality.

Browsers also need to make it easier for developers to load JavaScript with less of a performance impact. I’d like to see two new attributes for the SCRIPT tag: DEFER and POSTONLOAD. DEFER isn’t really “new” – IE has had the DEFER attribute since IE 5.5. DEFER is part of the HTML 4.0 specification, and it has been added to Firefox 3.1. One problem is you can’t use DEFER with scripts that utilize document.write, and yet this is critical for mitigating the performance impact of ads. Opera has shown that it’s possible to have deferred scripts still support document.write. This is the model that all browsers should follow for implementing DEFER. The POSTONLOAD attribute would tell the browser to load this script after all other resources have finished downloading, allowing the user to see other critical content more quickly. Developers can work around these issues with more code, but we’ll see wider adoption and more fast pages if browsers can help do the heavy lifting.

Focus on Other Platforms
Most of my work has focused on the desktop browser. Certainly, more best practices and tools are needed for the mobile space. But to stay ahead of where the web is going we need to analyze the user experience in other settings including automobile, airplane, mass transit, and 3rd world. Otherwise, we’ll be playing catchup after those environments take off.
Fast by Default
I enjoy Tom Hanks’ line in A League of Their Own when Geena Davis (“Dottie”) says playing ball got too hard: “It’s supposed to be hard. If it wasn’t hard, everyone would do it. The hard… is what makes it great.” I enjoy a challenge and tackling a hard problem. But doggone it, it’s just too hard to build a high performance web site. The bar shouldn’t be this high. Apache needs to turn off ETags by default. WordPress needs to cache comments better. Browsers need to cache SSL responses when the HTTP headers say to. Squid needs to support HTTP/1.1. The world’s web developers shouldn’t have to write code that anticipates and fixes all of these issues.

Good examples of where we can go are Runtime Page Optimizer and Strangeloop appliances. RPO is an IIS module (and soon Apache) that automatically fixes web pages as they leave the server to minify and combine JavaScript and stylesheets, enable CSS spriting, inline CSS images, and load scripts asynchronously. (Original blog post: Runtime Page Optimizer.) The web appliance from Strangeloop does similar realtime fixes to improve caching and reduce payload size. Imagine combining these tools with and Doloto to automatically improve the performance of web pages. Companies like Yahoo! and Google might need more customized solutions, but for the other 99% of developers out there, it needs to be easier to make pages fast by default.

This is a long post, but I still had to leave out a lot of performance highlights from 2008 and predictions for what lies ahead. I look forward to hearing your comments.


CS193H: final exam

December 16, 2008 10:35 pm | 7 Comments

This past quarter I’ve been teaching CS193H: High Performance Web Sites at Stanford. Last week was the final exam and tonight I finished submitting the grades. (The average grade was 88.) This was a great experience. Stanford is an inspirational place to be. The students are very smart – one of the undergraduates in my class is already working with a VC up on Sandhill Road. As I’ve found before, I learn a lot when I teach. This was especially true given the questions from these students. I’ve never taught an entire quarter before. Teaching three classes per week while developing a new curriculum took a lot of time. I’m thankful to Google for giving me time to do this.

The material from the class is posted on the class web site. Slides from all of my lectures, including material from my next book, can be found there. There are also slides from my amazing guest lecturers:

I’ve posted the midterm and final exams, along with the answers. The slides and tests provide a thorough coverage of web performance. The average grade on the final was 94. Give it a whirl and let me know how you do.


CACM article: High Performance Web Sites

December 15, 2008 11:28 pm | 2 Comments

Last summer I attended the ACM Awards Banquet. (I talked about this in my blog post about how Women are Geeks (too!).) Out of that came a request for me to write an article on web performance. The article is called “High Performance Web Sites”. [gasp!] It’s a review of the rules from my first book, plus a preview of the first three rules from my next book. The article came out last week in two of the ACM’s magazines: Communications of the ACM and Queue.

Communications of the ACM (CACM) is their flagship publication. It’s been around for more than 50 years with an audience of 85,000 readers. It’s a hardcopy magazine, so the link above is a preview copy of my article. There are other versions including HTML and PDF.

Queue is ACM’s online magazine focusing on software development. The issue of Queue containing my article is about Scalable Web Services. Other articles include:

(I didn’t realize I was going to be in with such heavy company.) I’m happy with this article – it’s short, but provides a good overview of my performance best practices and has a bit of evangelism at the close. Give it a read and recommend it to any colleagues who are entering the performance arena.


Velocity 2009: Call for Participation

November 11, 2008 5:05 pm | 2 Comments

Velocity 2009 is officially open! This is the conference that Jesse Robbins and I co-chair. Velocity 2009 is scheduled for June 22-24 at the Fairmont in San Jose. Check out the new site. Most importantly, submit your speaking proposals on the Call for Participation page. We’ve provided some suggested topics. My favorites:

  • How to tie web performance and operations to the bottom line
  • Profiling JavaScript, CSS, and the network
  • Managing web services – flaming disasters you survived and lessons learned
  • The intersection between performance and design
  • Ads, ads, ads – the performance killer?

Last year’s event sold out with popular sessions that included:

  • members from the IE and Firefox teams talking about browser performance tradeoffs
  • demos of web development tools: Firebug, Fiddler, AOL PageTest, and HTTPWatch
  • case studies from top web apps including Wikipedia, Hotmail, Netflix, Live Search
  • performance optimizations for Ajax and images
  • open source product launches including Jiffy and EUCALYPTUS

Velocity 2009 is going to be even bigger and better. This year’s conference includes an extra day focused on workshops where experts will get into the details of their best practices for performance and operations. More time, more talks, more experts, more people, more takeaways to apply when you get back home. I can’t wait. See you there!


The Art of Capacity Planning

October 17, 2008 11:36 am | Comments Off on The Art of Capacity Planning

This week I opened a beautiful package from O’Reilly. It contained John Allspaw’s new book, The Art of Capacity Planning. As you can see, the cover is a delight to look at. But you shouldn’t judge a book by its cover! Luckily, what’s inside the book is also a delight.

Now, let me make a disclaimer. I know John. We first crossed paths at Yahoo!, and have worked together on some side projects, most notably the Velocity conference. I know he’s an expert in the area of capacity planning. I know he’s highly regarded by other leaders in the operations space. I’ve heard him speak at conferences, brainstorm in small group discussions, and share his experiences with others. Given this background, I expected a book full of lessons learned, practical advice, and real world takeaways. Thankfully, for all of us readers, The Art of Capacity Planning delivers all of this and more.

Right out of the gate, John covers a topic near and dear to my heart: metrics. His advice? “Measure, measure, measure.” John reinforces this by including an incredible number of charts throughout the book. He goes on to say that our measurement tools need to provide an easy way to:

  • Record and store data over time
  • Build custom metrics
  • Compare metrics from various sources
  • Import and export metrics

As I read the book, I found myself nodding and thinking, “yes, yes, this is exactly what I learned!” Although it’s been more than five years since I was buildmaster for My Yahoo!, I really resonated with the advice John provides, like this one: “Homogenize hardware to halt headaches”. (You have to love the alliteration, too.)

In a thin book that’s easy to read, John covers a large number of topics. He talks about load testing, with pointers to tools like Httperf and Siege. There are several sections that talk about caching architectures and the use of Squid. He provides guidelines when it comes to deployment, such as making all changes happen in one place, the importance of defining roles and services, and ensuring new servers start working automatically. At the end he even manages to cover virtualization and cloud computing, and how they come into play during capacity planning.

The Art of Capacity Planning is full of sage advice from a seasoned veteran, like this one: “The moral of this little story? When faced with the question of capacity, try to ignore those urges to make existing gear faster, and focus instead on the topic at hand: finding out what you need, and when.” When I read a technical book, I’m really looking for takeaways. That’s why I loved The Art of Capacity Planning, and I think you will, too.

Comments Off on The Art of Capacity Planning

Runtime Page Optimizer

October 12, 2008 6:17 pm | 33 Comments

The Runtime Page Optimizer (RPO) is an exciting product from Aptimize. RPO runs on a web server applying performance optimizations to pages at runtime, just before the page is sent to the browser. RPO automatically implements many of the best practices from my book and YSlow, so the guys from Aptimize contacted me and showed me an early version. Here are the performance improvements RPO delivers:

  • minifies, combines and compresses JavaScript files
  • minifies, combines and compresses stylesheets
  • combines images into CSS sprites
  • inlines images inside the stylesheet
  • turns on gzip compression
  • sets far future Expires headers
  • loads scripts asynchronously

RPO reduces the number of HTTP requests as well as the amount of data that is transmitted, resulting in a page that loads faster. The big question is: how much overhead does this add at runtime? RPO caches the resources it generates (combined scripts, combined stylesheets, sprites). The primary realtime cost is changing the HTML markup. Static pages, after they are massaged, are also cached. Dynamic HTML can be optimized without a significant slowdown, one far smaller than the time gained from these performance benefits.

RPO is available for SharePoint, ASP.NET and DotNetNuke sites, and they’re working on a version for Apache. This is an exciting step in the world of web performance, shifting the burden of optimization off the web developer and making pages fast by default. They’ve just released RPO, and they’re deploying to at least one global top 1000 web site, so stats should be available soon. I can’t wait to see!


Say “no” to IE6

October 11, 2008 10:06 pm | 22 Comments

IE6 is a pain. It’s slow. It doesn’t behave well. Things that work in other browsers break in IE6. Hours and hours of web developer time are spent just making things work in IE6. Why do web developers spend so much time just to make IE6 work? Because a large percentage of users (22%, 25%, 28%) are still on IE6.

Why are so many people still using IE6?! IE7 has been out for two years now. IE8 is a great improvement over IE7, but I don’t think people delayed installing IE7 because they were waiting for a better browser. There’s some other dynamic at play here. Most people say it’s employees at companies where IT mandates IE6. That’s true. But there are also other opportunities to upgrade from IE6, outside of these IT-controlled environments.

I just came back from The Ajax Experience where developers were talking about the trials and tribulations of getting their web apps to work on IE6, but there were no discussions about how to put an end to this. I remembered someone had mentioned a site running a “say no to IE6” campaign. I checked it out and added it to my front page and this blog post. If you’re not on IE6, click here to see how it works.

Let’s all start promoting a program like this. Not only will it encourage individuals to upgrade, but it will also apply pressure to those reluctant IT groups. If that code isn’t right for your site, by all means, code up a different message. We need to start encouraging users to upgrade to newer browsers so they can enjoy a better browsing experience. And sure, maybe we can get a few more hours of sleep, too.