Firefox 3.5 at the top

June 30, 2009 8:23 am | 21 Comments

The web world is abuzz today with the release of Firefox 3.5. On the launch page, Mozilla touts the results of running SunSpider. Over on UA Profiler, I’ve developed a different set of tests that count the critical performance features browsers do, or don’t, have. Currently, 11 traits are measured. Firefox 3.5 scores higher than any other browser, with 10 of the 11 performance features browsers need to create a fast user experience.

Firefox 3.5 is a significant improvement over Firefox 3.0, climbing from 7/11 to 10/11 of these performance traits. Among the major browsers, Firefox 3.5 is followed by Chrome 2 (9/11), Safari 4 (8/11), IE 8 (7/11), and Opera 10 (6/11). Unfortunately, IE 6 and 7 have only 4 out of these 11 performance features, a sad state of affairs for today’s web developers and users.

The performance traits measured by UA Profiler include the number of connections per hostname, the maximum number of connections, parallel loading of scripts and stylesheets, proper caching of resources including redirects, the LINK PREFETCH attribute, and support for data: URLs. When I started UA Profiler, none of the browsers were scoring very high, but there has been great progress over the last year. It’s time to raise the bar! I plan on adding more tests to UA Profiler this summer, and I hope the browser development teams will continue to rise to the challenge in an effort to make the Web a faster place for all of us.
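
Two of these traits are easy to see in markup. Here’s a minimal, illustrative sketch (the resource names are placeholders, not UA Profiler’s actual test files) showing link prefetching and a data: URL:

<!-- hint that a resource is likely needed by the next page -->
<link rel='prefetch' href='next-page.css'>
<!-- embed a tiny image directly in the page with a data: URL (the base64 payload is the well-known 1x1 transparent GIF) -->
<img src='data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7' alt=''>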


don’t use @import

April 9, 2009 12:32 am | 90 Comments

In Chapter 5 of High Performance Web Sites, I briefly mention that @import has a negative impact on web page performance. I dug into this deeper for my talk at Web 2.0 Expo, creating several test pages and HTTP waterfall charts, all shown below. The bottom line is: use LINK instead of @import if you want stylesheets to download in parallel, resulting in a faster page.

LINK vs. @import

There are two ways to include a stylesheet in your web page. You can use the LINK tag:

<link rel='stylesheet' href='a.css'>

Or you can use the @import rule:

<style>
@import url('a.css');
</style>

I prefer using LINK for the sake of simplicity; with @import you have to remember to put it at the top of the style block or else it won’t work. It turns out that avoiding @import is better for performance, too.
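
To see why the ordering rule matters, here’s a quick sketch of the mistake (the stylesheet name is a placeholder): an @import that appears after another rule is invalid, so the browser silently ignores it and the stylesheet is never downloaded.

<style>
p { color: #333; }
@import url('a.css'); /* invalid here: @import must come before all other rules, so a.css is never loaded */
</style>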

@import @import

I’m going to walk through the different ways LINK and @import can be used. In these examples, there are two stylesheets: a.css and b.css. Each stylesheet is configured to take two seconds to download to make it easier to see the performance impact. The first example uses @import to pull in these two stylesheets. In this example, called @import @import, the HTML document contains the following style block:

<style>
@import url('a.css');
@import url('b.css');
</style>

If you always use @import in this way, there are no performance problems, although we’ll see below it could result in JavaScript errors due to race conditions. The two stylesheets are downloaded in parallel, as shown in Figure 1. (The first tiny request is the HTML document.) The problems arise when @import is embedded in other stylesheets or is used in combination with LINK.

Figure 1. always using @import is okay

LINK @import

The LINK @import example uses LINK for a.css, and @import for b.css:

<link rel='stylesheet' type='text/css' href='a.css'>
<style>
@import url('b.css');
</style>

In IE (tested on 6, 7, and 8), this causes the stylesheets to be downloaded sequentially, as shown in Figure 2. Downloading resources in parallel is key to a faster page. As shown here, this behavior in IE causes the page to take a longer time to finish.

Figure 2. link mixed with @import breaks parallel downloads in IE

LINK with @import

In the LINK with @import example, a.css is inserted using LINK, and a.css has an @import rule to pull in b.css:

in the HTML document:
<link rel='stylesheet' type='text/css' href='a.css'>
in a.css:
@import url('b.css');

This pattern also prevents the stylesheets from loading in parallel, but this time it happens on all browsers. When we stop and think about it, we shouldn’t be too surprised. The browser has to download a.css and parse it. At that point, the browser sees the @import rule and starts to fetch b.css.

Figure 3. using @import from within a LINKed stylesheet breaks parallel downloads in all browsers

LINK blocks @import

A slight variation on the previous example with surprising results in IE: LINK is used for a.css and for a new stylesheet called proxy.css. proxy.css is configured to return immediately; it contains an @import rule for b.css.

in the HTML document:
<link rel='stylesheet' type='text/css' href='a.css'>
<link rel='stylesheet' type='text/css' href='proxy.css'>
in proxy.css:
@import url('b.css');

The results of this example in IE, LINK blocks @import, are shown in Figure 4. The first request is the HTML document. The second request is a.css (two seconds). The third (tiny) request is proxy.css. The fourth request is b.css (two seconds). Surprisingly, IE won’t start downloading b.css until a.css finishes. In all other browsers, this blocking issue doesn’t occur, resulting in a faster page as shown in Figure 5.

Figure 4. LINK blocks @import embedded in other stylesheets in IE

Figure 5. LINK doesn't block @import embedded stylesheets in browsers other than IE

many @imports

The many @imports example shows that using @import in IE causes resources to be downloaded in a different order than specified. This example has six stylesheets (each takes two seconds to download) followed by a script (a four second download).

<style>
@import url('a.css');
@import url('b.css');
@import url('c.css');
@import url('d.css');
@import url('e.css');
@import url('f.css');
</style>
<script src='one.js' type='text/javascript'></script>

Looking at Figure 6, the longest bar is the four second script. Even though it was listed last, it gets downloaded first in IE. If the script contains code that depends on the styles applied from the stylesheets (a la getElementsByClassName, etc.), then unexpected results may occur because the script is loaded before the stylesheets, despite the developer listing it last.

Figure 6. @import causes resources to be downloaded out-of-order in IE
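
To make that risk concrete, here’s a hedged sketch (the element IDs and numbers are made up, not taken from the test page) of a script that depends on styles from one of the imported stylesheets:

<div id='sidebar' class='wide'></div>
<div id='content'></div>
<script type='text/javascript'>
// if this runs before the stylesheet that sizes .wide has been applied,
// offsetWidth reports the unstyled width and the layout math below is wrong
var sidebar = document.getElementById('sidebar');
document.getElementById('content').style.width = (960 - sidebar.offsetWidth) + 'px';
</script>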

LINK LINK

It’s simpler and safer to use LINK to pull in stylesheets:

<link rel='stylesheet' type='text/css' href='a.css'>
<link rel='stylesheet' type='text/css' href='b.css'>

Using LINK ensures that stylesheets will be downloaded in parallel across all browsers. The LINK LINK example demonstrates this, as shown in Figure 7. Using LINK also guarantees resources are downloaded in the order specified by the developer.

Figure 7. using link ensures parallel downloads across all browsers

These issues need to be addressed in IE. It’s especially bad that resources can end up getting downloaded in a different order. All browsers should implement a small lookahead when downloading stylesheets to extract any @import rules and start those downloads immediately. Until browsers make these changes, I recommend avoiding @import and instead using LINK for inserting stylesheets.

Update: April 10, 2009 1:07 PM

Based on questions from the comments, I added two more tests: LINK with @imports and Many LINKs. Each of these inserts four stylesheets into the HTML document. LINK with @imports uses LINK to load proxy.css; proxy.css then uses @import to load the four stylesheets. Many LINKs has four LINK tags in the HTML document to pull in the four stylesheets (my recommended approach). The HTTP waterfall charts are shown in Figure 8 and Figure 9.

Figure 8. LINK with @imports

Figure 9. Many LINKs

Looking at LINK with @imports, the first problem is that the four stylesheets don’t start downloading until after proxy.css returns. This happens in all browsers. On the other hand, Many LINKs starts downloading the stylesheets immediately.

The second problem is that IE changes the download order. I added a 10 second script (the really long bar) at the very bottom of the page. In all other browsers, the @import stylesheets (from proxy.css) get downloaded first, and the script is last, exactly the order specified. In IE, however, the script gets inserted before the @import stylesheets, as shown by LINK with @imports in Figure 8. This causes the stylesheets to take longer to download since the long script is using up one of only two connections available in IE 6&7. Since IE won’t render anything in the page until all stylesheets are downloaded, using @import in this way causes the page to be blank for 12 seconds. Using LINK instead of @import preserves the load order, as shown by Many LINKs in Figure 9. Thus, the page renders in 4 seconds.

The load times of these resources are exaggerated to make it easy to see what’s happening. But for people with slow connections, especially those in some of the world’s emerging markets, these response times may not be that far from reality. The takeaways are:

  • Using @import within a stylesheet adds one more roundtrip to the overall download time of the page.
  • Using @import in IE causes the download order to be altered. This may cause stylesheets to take longer to download, which hinders progressive rendering and makes the page feel slower.


Performance Impact of CSS Selectors

March 10, 2009 11:28 pm | 53 Comments

A few months back there were some posts about the performance impact of inefficient CSS selectors. I was intrigued – this is the kind of browser idiosyncratic behavior that I live for. On further investigation, I’m not so sure that it’s worth the time to make CSS selectors more efficient. I’ll go even farther and say I don’t think anyone would notice if we woke up tomorrow and every web page’s CSS selectors were magically optimized.

The first post that caught my eye was about CSS Qualified Selectors by Shaun Inman. This post wasn’t actually about CSS performance, but in one of the comments David Hyatt (architect for Safari and WebKit, also worked on Mozilla, Camino, and Firefox) dropped this bomb:

The sad truth about CSS3 selectors is that they really shouldn’t be used at all if you care about page performance. Decorating your markup with classes and ids and matching purely on those while avoiding all uses of sibling, descendant and child selectors will actually make a page perform significantly better in all browsers.

Wow. Let me say that again. Wow.

The next posts were amazing. It was a series on Testing CSS Performance from Jon Sykes in three parts: part 1, part 2, and part 3. It’s fun to see how his tests evolve, so part 3 is really the one to read. This had me convinced that optimizing CSS selectors was a key step to fast pages.

But there were two things about the tests that troubled me. First, the large number of DOM elements and rules worried me. The pages contain 60,000 DOM elements and 20,000 CSS rules. This is an order of magnitude more than most pages. Pages this large make browsers behave in unusual ways (we’ll get back to that later). The table below has some stats from the top ten U.S. web sites for comparison.

Web Site       # CSS Rules   # DOM Elements
AOL            2289          1628
eBay           305           588
Facebook       2882          1966
Google         92            552
Live Search    376           449
MSN            1038          886
MySpace        932           444
Wikipedia      795           1333
Yahoo!         800           564
YouTube        821           817
average        1033          923

The second thing that concerned me was how small the baseline test page was, compared to the more complex pages. The main question I want to answer is “do inefficient CSS selectors slow down pages?” All five test pages contain 20,000 anchor elements (nested inside P, DIV, DIV, DIV). What changes is their CSS: baseline (no CSS), tag selector (one rule for the A tag), 20,000 class selectors, 20,000 child selectors, and finally 20,000 descendant selectors. The last three pages top out at over 3 megabytes in size. But the baseline page and tag selector page, with little or no CSS, are only 1.8 megabytes. These pages answer the question “how much faster would my page be if I eliminated all CSS?” But not many of us are going to eliminate all CSS from our pages.
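
To make the selector categories concrete, here is roughly what a rule from each test page looks like (the class name is a placeholder; these are my approximations of the test rules, not copies of them):

<style>
/* tag selector */
a { color: #333; }
/* class selector */
.class0001 { color: #333; }
/* child selector */
div > a { color: #333; }
/* descendant selector */
div a { color: #333; }
</style>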

I revised the test as follows:

  • 2000 anchors and 2000 rules (instead of 20,000) – this actually results in ~6000 DOM elements because of all the nesting in P, DIV, DIV, DIV
  • the baseline page and tag selector page have 2000 rules just like all the other pages, but these are simple class rules that don’t match any classes in the page

I ran these tests on 12 browsers. Page render time was measured with a script block at the top and bottom of the page. (I loaded the page from local disk to avoid possible impact from chunked encoding.) The results are shown in the chart below. (I don’t show Opera 9.63 – it was way too slow – but you can download all the data as csv. You can also see the test pages.)
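
The measurement technique is simple enough to sketch. This is just the general shape (the variable names are mine, not the actual test harness): a script block at the top records a start time, and a script block at the bottom reports the elapsed time once everything in between has been parsed and rendered.

<html>
<head>
<script type='text/javascript'>
var t_start = new Date().getTime(); // as early as possible, before the CSS and content
</script>
<style>
/* the 2000 test rules go here */
</style>
</head>
<body>
<!-- the ~6000 DOM elements go here -->
<script type='text/javascript'>
// elapsed time from the top of the page to the end of the body
document.title = (new Date().getTime() - t_start) + ' ms';
</script>
</body>
</html>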

Performance varies across browsers; strangely, two of the newest browsers, IE 8 and Firefox 3.1, are the slowest. But comparisons should not be made from one browser to another: although all the tests for a given browser were conducted on a single PC, different browsers might have been tested on different PCs with different performance characteristics. The goal of this experiment is not to compare browser performance – it’s to see how browsers handle progressively more complex CSS selectors.

[Revision: On further inspection comparing Firefox 3.0 and 3.1, I discovered that the test PC I used for testing Firefox 3.1 and IE 8 was slower than the other test PCs used in this experiment. I subsequently re-ran those tests as well as Firefox 3.0 and IE 7 on PCs that were more consistent and updated the chart above. Even with this re-run, because of possible differences in test hardware, do not use this data to compare one browser to another.]

Not surprisingly, the more complex pages (child selectors and descendant selectors) usually perform the worst. The biggest surprise is how small the delta is from the baseline to the most complex, worst performing test page. The average slowdown across all browsers is 50 ms, and if we look at the big ones (IE 6&7, FF3), the average delta is just 20 ms. For 70% or more of today’s users, improving these CSS selectors would only make a 20 ms improvement.

Keep in mind – these test pages are close to worst case. The 2000 anchors wrapped in P, DIV, DIV, DIV result in 6000 DOM elements – twice the maximum seen in the top ten sites. And the complex pages have 2000 extremely inefficient rules, whereas a typical site has only around one third of its rules that are complex child or descendant selectors. Facebook, for example, has the most rules of the top ten (2882), yet only has 750 that are these extremely inefficient rules.

Why do the results from my tests suggest something different from what’s been said lately? One difference comes from looking at things at such a large scale. It’s okay to exaggerate test cases if the results are proportional to common use cases. But in this case, browsers behave differently when confronted with a 3 megabyte page with 60,000 elements and 20,000 rules. I especially noticed that my results were much different for IE 6&7. I wondered if there was a hockey stick in how IE handled CSS selectors. To investigate this I loaded the child selector and descendant selector pages with increasing number of anchors and rules, from 1000 to 20,000. The results, shown in the chart below, reveal that IE hits a cliff around 18,000 rules. But when IE 6&7 work on a page that is closer to reality, as in my tests, they’re actually the fastest performers.

Based on these tests I have the following hypothesis: For most web sites, the possible performance gains from optimizing CSS selectors will be small, and are not worth the costs. There are some types of CSS rules and interactions with JavaScript that can make a page noticeably slower. This is where the focus should be. So I’m starting to collect real world examples of small CSS style-related issues (offsetWidth, :hover) that put the hurt on performance. If you have some, send them my way. I’m speaking at SXSW this weekend. If you’re there, and want to discuss CSS selectors, please find me. It’s important that we’re all focusing on the performance improvements that our users will really notice.
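
Here is one sketch of the kind of style-related issue I mean (the markup assumptions are mine: a page with a list of LI elements): reading offsetWidth in a loop that also writes styles forces the browser to recalculate layout on every iteration.

<script type='text/javascript'>
// each offsetWidth read that follows a style write forces a synchronous reflow,
// so this loop triggers a layout calculation once per list item
var items = document.getElementsByTagName('li');
for (var i = 0; i < items.length; i++) {
  items[i].style.width = (items[i].parentNode.offsetWidth - 10) + 'px';
}
</script>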


State of Performance 2008

December 17, 2008 8:08 pm | 3 Comments

My Stanford class, CS193H High Performance Web Sites, ended last week. The last lecture was called “State of Performance”. This was my review of what happened in 2008 with regard to web performance, and my predictions and hopes for what we’ll see in 2009. You can see the slides (ppt, GDoc), but I wanted to capture the content here with more text. Let’s start with a look back at 2008.

2008

Year of the Browser
This was the year of the browser. Browser vendors have put users first, competing to make the web experience faster with each release. JavaScript got the most attention. WebKit released a new interpreter called SquirrelFish. Mozilla came out with TraceMonkey – SpiderMonkey with Trace Trees added for Just-In-Time (JIT) compilation. Google released Chrome with the new V8 JavaScript engine. And new benchmarks have emerged to put these new JavaScript engines through their paces: SunSpider, V8 Benchmark, and Dromaeo.

In addition to JavaScript improvements, browser performance was boosted in terms of how web pages are loaded. IE8 Beta came out with parallel script loading, where the browser continues parsing the HTML document while downloading external scripts, rather than blocking all progress like most other browsers. WebKit, starting with version 526, has a similar feature, as does Chrome 0.5 and the most recent Firefox nightlies (Minefield 3.1). On a similar note, IE8 and Firefox 3 both increased the number of connections opened per server from 2 to 6. (Safari and Opera were already ahead with 4 connections per server.) (See my original blog posts for more information: IE8 speeds things up, Roundup on Parallel Connections, UA Profiler and Google Chrome, and Firefox 3.1: raising the bar.)

Velocity
Velocity 2008, the first conference focused on web performance and operations, launched June 23-24. Jesse Robbins and I served as co-chairs. This conference, organized by O’Reilly, was densely packed – both in terms of content and people (it sold out!). Speakers included John Allspaw (Flickr), Luiz Barroso (Google), Artur Bergman (Wikia), Paul Colton (Aptana), Stoyan Stefanov (Yahoo!), Mandi Walls (AOL), and representatives from the IE and Firefox teams. Velocity 2009 is slated for June 22-24 in San Jose and we’re currently accepting speaker proposals.
Jiffy
Improving web performance starts with measuring performance. Measurements can come from a test lab using tools like Selenium and Watir. To get measurements from geographically dispersed locations, scripted tests are possible through services like Keynote, Gomez, Webmetrics, and Pingdom. But the best data comes from measuring real user traffic. The basic idea is to measure page load times using JavaScript in the page and beacon back the results. Many web companies have rolled their own versions of this instrumentation. It isn’t that complex to build from scratch, but there are a few gotchas and it’s inefficient for everyone to reinvent the wheel. That’s where Jiffy comes in. Scott Ruthfield and folks from Whitepages.com released Jiffy at Velocity 2008. It’s Open Source and easy to use. If you don’t currently have real user load time instrumentation, take a look at Jiffy.
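
If you do roll your own, the general shape is small. This sketch is not Jiffy’s actual API, and the beacon URL is made up:

<script type='text/javascript'>
// record a start time as early in the page as possible
var t_pageStart = new Date().getTime();
window.onload = function() {
  var loadTime = new Date().getTime() - t_pageStart;
  // beacon the measurement back to the server as an image request
  new Image().src = '/beacon?t=' + loadTime + '&page=' + encodeURIComponent(location.pathname);
};
</script>
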
JavaScript: The Good Parts
Moving forward, the key to fast web pages is going to be the quality and performance of JavaScript. JavaScript luminary Doug Crockford helps light the way with his book JavaScript: The Good Parts, published by O’Reilly. More is needed! We need books and guidelines for performance best practices and design patterns focused on JavaScript. But Doug’s book is a foundation on which to build.
smush.it
My former colleagues from the Yahoo! Exceptional Performance team, Stoyan Stefanov and Nicole Sullivan, launched smush.it. In addition to a great name and beautiful web site, smush.it packs some powerful functionality. It analyzes the images on a web page and calculates potential savings from various optimizations. Not only that, it creates the optimized versions for download. Try it now!
Google Ajax Libraries API
JavaScript frameworks are powerful and widely used. Dion Almaer and the folks at Google saw an opportunity to help the development community by launching the Ajax Libraries API. This service hosts popular frameworks including jQuery, Prototype, script.aculo.us, MooTools, Dojo, and YUI. Web sites using any of these frameworks can reference the copy hosted on Google’s worldwide server network and gain the benefit of faster downloads and cross-site caching. (Original blog post: Google AJAX Libraries API.)
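
Switching to the hosted copy is a one-line change. The URL below follows the ajax.googleapis.com path pattern; check the API documentation for the library versions currently offered:

<script src='http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js' type='text/javascript'></script>
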
UA Profiler
Okay, I’ll get a plug for my own work in here. UA Profiler looks at browser characteristics that make pages load faster, such as downloading scripts without blocking, max number of open connections, and support for “data:” URLs. The tests run automatically – all you have to do is navigate to the test page from any browser with JavaScript support. The results are available to everyone, regardless of whether you’ve run the tests. I’ve been pleased with the interest in UA Profiler. In some situations it has identified browser regressions that developers have caught and fixed.

2009

Here’s what I think and hope we’ll see in 2009 for web performance.

Visibility into the Browser
Packet sniffers (like HTTPWatch, Fiddler, and WireShark) and tools like YSlow allow developers to investigate many of the “old school” performance issues: compression, Expires headers, redirects, etc. In order to optimize Web 2.0 apps, developers need to see the impact of JavaScript and CSS as the page loads, and gather stats on CPU load and memory consumption. Eric Lawrence and Christian Stockwell’s slides from Velocity 2008 give a hint of what’s possible. Now we need developer tools that show this information.
Think “Web 2.0”
Web 2.0 pages are often developed with a Web 1.0 mentality. In Web 1.0, the amount of CSS, JavaScript, and DOM elements on your page was more tolerable because it would be cleared away with the user’s next action. That’s not the case in Web 2.0. Web 2.0 apps can persist for minutes or even hours. If there are a lot of CSS selectors that have to be matched with each repaint, that pain is felt again and again. If we include the JavaScript for all possible user actions, the JavaScript payload bloats, increasing memory consumption and garbage collection. Dynamically adding elements to the DOM slows down our CSS (more selector matching) and JavaScript (think getElementsByTagName). As developers, we need a new way of thinking about the shape and behavior of our web apps, one that addresses the long page persistence that comes with Web 2.0.
Speed as a Feature
In my second month at Google I was ecstatic to see the announcement that landing page load time was being incorporated into the quality score used by AdWords. I think we’ll see more and more that the speed of web pages will become more important to users, more important to aggregators and vendors, and subsequently more important to web publishers.
Performance Standards
As the industry becomes more focused on web performance, a need for industry standards is going to arise. Many companies, tools, and services measure “response time”, but it’s unclear that they’re all measuring the same thing. Benchmarks exist for the browser JavaScript engines, but benchmarks are needed for other aspects of browser performance, like CSS and DOM. And current benchmarks are fairly theoretical and extreme. In addition, test suites are needed that gather measurements under more real world conditions. Standard libraries for measuring performance are needed, a la Jiffy, as well as standard formats for saving and exchanging performance data.
JavaScript Help
With the emergence of Web 2.0, JavaScript is the future. The Catch-22 is that JavaScript is one of the biggest performance problems in a web page. Help is needed so JavaScript-heavy web pages can still be fast. One specific tool that’s needed is something that takes a monolithic JavaScript payload and splits it into smaller modules, with the necessary logic to know what is needed when. Doloto is a project from Microsoft Research that tackles this problem, but it’s not available publicly. Razor Optimizer attacks this problem and is a good first step, but it needs to be less intrusive in how it incorporates this functionality.
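
The manual version of what such a tool automates looks roughly like this (the file, element, and function names are made up): ship a small bootstrap up front and fetch the heavier module only when the user first needs it.

<script type='text/javascript'>
function loadScript(url, callback) {
  // append a script element so the download doesn't block the page
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = url;
  script.onload = callback; // older IE needs onreadystatechange instead
  document.getElementsByTagName('head')[0].appendChild(script);
}
// download the editor code only when the user clicks Edit
document.getElementById('editButton').onclick = function() {
  loadScript('editor-module.js', function() { startEditor(); });
};
</script>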

Browsers also need to make it easier for developers to load JavaScript with less of a performance impact. I’d like to see two new attributes for the SCRIPT tag: DEFER and POSTONLOAD. DEFER isn’t really “new” – IE has had the DEFER attribute since IE 5.5. DEFER is part of the HTML 4.0 specification, and it has been added to Firefox 3.1. One problem is you can’t use DEFER with scripts that utilize document.write, and yet this is critical for mitigating the performance impact of ads. Opera has shown that it’s possible to have deferred scripts still support document.write. This is the model that all browsers should follow for implementing DEFER. The POSTONLOAD attribute would tell the browser to load this script after all other resources have finished downloading, allowing the user to see other critical content more quickly. Developers can work around these issues with more code, but we’ll see wider adoption and more fast pages if browsers can help do the heavy lifting.
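
In markup, the two attributes would look something like this. DEFER works today in IE (and Firefox 3.1), while POSTONLOAD is only the proposal described above, so no browser recognizes it yet; the script names are placeholders.

<!-- DEFER: download without blocking and execute after the document is parsed -->
<script defer src='menu.js' type='text/javascript'></script>
<!-- POSTONLOAD as proposed: don't even start the download until everything else has finished -->
<script postonload src='tracking.js' type='text/javascript'></script>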

Focus on Other Platforms
Most of my work has focused on the desktop browser. Certainly, more best practices and tools are needed for the mobile space. But to stay ahead of where the web is going, we need to analyze the user experience in other settings including automobile, airplane, mass transit, and the 3rd world. Otherwise, we’ll be playing catch-up after those environments take off.
Fast by Default
I enjoy Tom Hanks’ line in A League of Their Own when Geena Davis (“Dottie”) says playing ball got too hard: “It’s supposed to be hard. If it wasn’t hard, everyone would do it. The hard… is what makes it great.” I enjoy a challenge and tackling a hard problem. But doggone it, it’s just too hard to build a high performance web site. The bar shouldn’t be this high. Apache needs to turn off ETags by default. WordPress needs to cache comments better. Browsers need to cache SSL responses when the HTTP headers say to. Squid needs to support HTTP/1.1. The world’s web developers shouldn’t have to write code that anticipates and fixes all of these issues.

Good examples of where we can go are Runtime Page Optimizer and Strangeloop appliances. RPO is an IIS module (and soon Apache) that automatically fixes web pages as they leave the server to minify and combine JavaScript and stylesheets, enable CSS spriting, inline CSS images, and load scripts asynchronously. (Original blog post: Runtime Page Optimizer.) The web appliance from Strangeloop does similar realtime fixes to improve caching and reduce payload size. Imagine combining these tools with smush.it and Doloto to automatically improve the performance of web pages. Companies like Yahoo! and Google might need more customized solutions, but for the other 99% of developers out there, it needs to be easier to make pages fast by default.

This is a long post, but I still had to leave out a lot of performance highlights from 2008 and predictions for what lies ahead. I look forward to hearing your comments.


Say “no” to IE6

October 11, 2008 10:06 pm | 22 Comments

IE6 is a pain. It’s slow. It doesn’t behave well. Things that work in other browsers break in IE6. Hours and hours of web developer time are spent just making things work in IE6. Why do web developers spend so much time just to make IE6 work? Because a large percentage of users (22%, 25%, 28%) are still on IE6.

Why are so many people still using IE6?! IE7 has been out for two years now. IE8 is a great improvement over IE7, but I don’t think people delayed installing IE7 because they were waiting for a better browser. There’s some other dynamic at play here. Most people say it’s employees at companies where IT mandates IE6. That’s true. But there are also other opportunities to upgrade from IE6, outside of these IT-controlled environments.

I just came back from The Ajax Experience where developers were talking about the trials and tribulations of getting their web apps to work on IE6, but there were no discussions about how to put an end to this. I remembered someone had mentioned SaveTheDevelopers.org and their “say no to IE6” campaign. I checked it out and added it to my front page and this blog post. If you’re not on IE6, click here to see how it works.

Let’s all start promoting a program like this. Not only will it encourage individuals to upgrade, but it will also apply pressure to those reluctant IT groups. If the code from SaveTheDevelopers.org isn’t right for your site, by all means, code up a different message. We need to start encouraging users to upgrade to newer browsers so they can enjoy a better browsing experience. And sure, maybe we can get a few more hours of sleep, too.
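
If you do code up your own message, one lightweight approach is an IE conditional comment that only IE6 users will ever see. This is just a sketch, not the SaveTheDevelopers.org code:

<!--[if lte IE 6]>
<div style='background: #ffc; border: 1px solid #999; padding: 8px;'>
  You are browsing with Internet Explorer 6. Newer browsers are faster and safer.
  Please consider <a href='http://savethedevelopers.org/'>upgrading</a>.
</div>
<![endif]-->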


Velocity Wrap-up

June 26, 2008 10:51 am | 3 Comments

This week I co-chaired Velocity, the web performance and operations conference from O’Reilly. It was great! Jesse and I told the story about how the conference came about. When we proposed the conference we believed there was a community of performance and operations engineers that needed a forum to share and learn, and the attendance at Velocity confirmed this. Velocity sold out with over 600 attendees!

The lineup of speakers was great. There was a lot of material packed in a 2-day conference. I stayed in the Performance track, but wanted to attend every session in the Operations track, too. Many speakers shared their slides, and there are videos and photos from some of the talks.

(more…)


Roundup on Parallel Connections

March 20, 2008 1:40 am | 36 Comments

The announcement that IE8 supports six connections per host generated a lot of blogging and follow-up discussion. The blogs I saw:

It’s likely that Firefox 3 will support 6 connections per server in an upcoming beta release, which means more discussion is expected. I wanted to pull all the facts into one place and make several points that I think are important and interesting. Specifically I talk about:

  • the HTTP/1.1 RFC
  • settings for current browsers
  • upper bound of open connections (cool!)
  • effect of proxies
  • will this break the Internet?

(more…)


IE8 speeds things up

March 10, 2008 9:30 am | 24 Comments

IE8 Beta 1 has several performance improvements listed in the release notes. Many of these improvements center around the DOM and JavaScript execution. And their announcement about stricter standards compliance is a great move forward. Three big changes relate to my performance best practices: 6 downloads per hostname, loading scripts in parallel (the biggest improvement!), and support for data: URIs.

(more…)
