Who’s not getting gzip?
The article “Use compression to make the web faster” from the Google Code Blog contains some interesting information on why modern browsers that support compression don’t get compressed responses in daily usage. The culprits?
anti-virus software, browser bugs, web proxies, and misconfigured web servers. The first three modify the web request so that the web server does not know that the browser can uncompress content. Specifically, they remove or mangle the Accept-Encoding header that is normally sent with every request.
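To make that concrete, here’s a minimal Python sketch (my own illustration, not code from the article) of the decision a typical web server makes. When the Accept-Encoding header is stripped or mangled in transit, the check fails and the server falls back to sending the response uncompressed:

```python
import gzip

def negotiate_encoding(body: bytes, request_headers: dict) -> tuple:
    """Gzip the response only if the client advertised support.

    A proxy or virus scanner that strips Accept-Encoding, or mangles it
    into something unrecognizable, makes a gzip-capable browser look
    incapable, so this check fails and the body goes out uncompressed.
    """
    accept = request_headers.get("Accept-Encoding", "")
    if "gzip" in accept.lower():
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}

# A capable browser normally sends: Accept-Encoding: gzip, deflate
compressed, extra_headers = negotiate_encoding(
    b"<html>...</html>", {"Accept-Encoding": "gzip, deflate"})
```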
This is hard to believe, but it’s true. Tony Gentilcore covers the full story in the chapter he wrote called “Going Beyond Gzipping” in my most recent book, including some strategies for correcting and working around the problem. (Check out Tony’s slides from Velocity 2009.) According to Tony:
a large web site in the United States should expect that roughly 15% of visitors don’t indicate gzip compression support.
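One of the workarounds Tony describes amounts to testing whether the browser can actually decompress gzip even when the header never arrives. Here’s a hypothetical sketch of the server side of such a scheme; the gz_ok cookie name and the helper function are my own invention, assuming a client-side test has already set the cookie after successfully running a forcibly gzipped script:

```python
from http.cookies import SimpleCookie

def choose_encoding(request_headers: dict) -> str:
    """Decide how to encode the response for this request.

    "normal": the client advertised gzip, so compress as usual.
    "forced": the header was stripped in transit, but a prior client-side
              test proved the browser can decompress (recorded in a
              hypothetical gz_ok cookie), so compress anyway.
    "none":   no evidence of gzip support; play it safe.
    """
    accept = request_headers.get("Accept-Encoding", "").lower()
    if "gzip" in accept:
        return "normal"
    cookies = SimpleCookie(request_headers.get("Cookie", ""))
    morsel = cookies.get("gz_ok")
    if morsel is not None and morsel.value == "1":
        return "forced"
    return "none"
```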
This blog post from Arvind Jain and Jason Glasgow contains additional information, including:
- Users suffering from this problem experience a Google Search page that is 25% slower – 1600ms for compressed content versus 2000ms for uncompressed.
- By forcing the content to be compressed (even though the browser didn’t request it), Google Search improved page load times by 300ms.
- Internet Explorer 6 downgrades to HTTP/1.0 and drops the Accept-Encoding request header when behind a proxy. For Google Search, 36% of the search results sent without compression were for IE6.
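That IE6 behavior is at least detectable. As an illustrative heuristic (my own, assembled from the facts above, not code from the post), a server can flag requests that look like IE6 behind a header-stripping proxy:

```python
def likely_proxy_stripped(request_headers: dict, http_version: str) -> bool:
    """Heuristic: does this request look like IE6 behind a proxy?

    IE6 downgrades to HTTP/1.0 and stops sending Accept-Encoding when it
    detects a proxy, so an HTTP/1.0 request from an MSIE 6 User-Agent
    with no gzip in Accept-Encoding is a strong hint.
    """
    ua = request_headers.get("User-Agent", "")
    missing = "gzip" not in request_headers.get("Accept-Encoding", "").lower()
    return missing and http_version == "HTTP/1.0" and "MSIE 6" in ua
```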
Is there something about your browser, proxy, or anti-virus software that’s preventing you from getting compressed content and slowing you down 25%? Test it out by visiting the browser compression test page.
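For the curious, here’s a toy stand-in for such a test page (a sketch, not the actual page): a stdlib Python server that reports whatever Accept-Encoding value survived the trip from your browser.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CompressionTestHandler(BaseHTTPRequestHandler):
    """Echo back what Accept-Encoding header (if any) reached the server."""

    def do_GET(self):
        accept = self.headers.get("Accept-Encoding")
        if accept and "gzip" in accept.lower():
            msg = f"OK: your request arrived with 'Accept-Encoding: {accept}'."
        else:
            msg = (f"Warning: no gzip support reached this server "
                   f"(Accept-Encoding: {accept!r}). Something between your "
                   "browser and the network may be stripping the header.")
        body = msg.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), CompressionTestHandler).serve_forever()
```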