Roundup on Parallel Connections

March 20, 2008 1:40 am | 36 Comments

A lot of blogging and follow-up discussion ensued after the announcement that IE8 supports six connections per host.

It’s likely that Firefox 3 will support 6 connections per server in an upcoming beta release, which means more discussion is expected. I wanted to pull all the facts into one place and make several points that I think are important and interesting. Specifically, I talk about:

  • the HTTP/1.1 RFC
  • settings for current browsers
  • upper bound of open connections (cool!)
  • effect of proxies
  • will this break the Internet?

The HTTP/1.1 RFC

Section 8.1.4 of the HTTP/1.1 RFC says a “single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.” The key here is the word “SHOULD”: web clients don’t have to follow this guideline, and IE8 isn’t the first to exceed it. Opera and Safari hold that honor, supporting 4 connections per server.

It’s important to understand that this is on a per-server basis. Using multiple domain names, such as 1.mydomain.com, 2.mydomain.com, 3.mydomain.com, etc., allows a web developer to achieve a multiple of the per-server connection limit. This works even if all the domain names are CNAMEs to the same IP address.
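
As a rough illustration, here is a minimal sketch of this trick in JavaScript; the shard count, hostnames, and image paths are hypothetical stand-ins, not taken from any real site:

    // Sketch: spread image requests across N sharded hostnames so the
    // browser opens N times its per-server connection limit. Assumes
    // 1.mydomain.com, 2.mydomain.com, ... are CNAMEs to the same server.
    var NUM_SHARDS = 3;

    function shardUrl(path, i) {
      // Deterministic mapping keeps each path on one hostname,
      // so a shifting URL doesn't defeat the browser cache.
      return "http://" + ((i % NUM_SHARDS) + 1) + ".mydomain.com" + path;
    }

    var paths = ["/img/a.gif", "/img/b.gif", "/img/c.gif", "/img/d.gif"];
    for (var i = 0; i < paths.length; i++) {
      var img = new Image();
      img.src = shardUrl(paths[i], i);
      document.body.appendChild(img);
    }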


Settings for Current Browsers

The table below shows the number of connections per server supported by current browsers for HTTP/1.1 as well as HTTP/1.0.

Browser                 HTTP/1.1   HTTP/1.0
IE 6,7                  2          4
IE 8                    6          6
Firefox 2               2          8
Firefox 3               6          6
Safari 3,4              4          4
Chrome 1,2              6          ?
Chrome 3                4          4
Chrome 4+               6          ?
iPhone 2                4          ?
iPhone 3                6          ?
iPhone 4                4          ?
Opera 9.63,10.00alpha   4          4
Opera 10.51+            8          ?

I provide (some of) the settings for HTTP/1.0 in the table above because some of the blog discussions have confused the connections-per-server settings for HTTP/1.0 with those for HTTP/1.1. HTTP/1.0 does not support persistent connections by default, so browsers open a higher number of connections per server to achieve faster performance. For example, IE7 supports 4 connections per server in HTTP/1.0. In fact, AOL intentionally downgrades their responses to HTTP/1.0 to benefit from this increase in parallelization, although they do it at the cost of losing the benefits of persistent connections. They must have data that supports this decision, but I don’t recommend it.
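
For reference, the difference is visible in the protocol itself. A simplified sketch of the two request styles (www.example.com is a placeholder):

    GET /logo.gif HTTP/1.0
    Host: www.example.com
    (the connection closes after the response unless the client
    sent "Connection: keep-alive")

    GET /logo.gif HTTP/1.1
    Host: www.example.com
    (the connection is persistent by default; either side sends
    "Connection: close" to opt out)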

It’s possible to reconfigure your browser to use different limits. The Microsoft support article “How to configure Internet Explorer to have more than two download sessions” describes how the MaxConnectionsPerServer and MaxConnectionsPer1_0Server settings in the Windows Registry control the number of connections per hostname for HTTP/1.1 and HTTP/1.0, respectively. In Firefox these values are controlled by the network.http.max-persistent-connections-per-server and network.http.max-connections-per-server settings in about:config.
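
For example, a .reg file along these lines raises both IE limits to 6 (the key path is from the Microsoft article; the values here are just an illustration):

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "MaxConnectionsPerServer"=dword:00000006
    "MaxConnectionsPer1_0Server"=dword:00000006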

Note that IE8 automatically drops back to 2 connections per server for users on dialup connections. Also, web developers can detect the number of connections per server currently in effect by accessing window.maxConnectionsPerServer and window.maxConnectionsPer1_0Server in JavaScript. These are read-only values.
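
A quick sketch of reading those values; they are IE-only properties, so feature-test before touching them:

    // IE exposes its connection limits as read-only window properties.
    // Other browsers don't, so check before reading.
    if (typeof window.maxConnectionsPerServer != "undefined") {
      alert("HTTP/1.1 connections per server: " + window.maxConnectionsPerServer +
            "\nHTTP/1.0 connections per server: " + window.maxConnectionsPer1_0Server);
    } else {
      alert("This browser doesn't expose its connection limits.");
    }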


Upper Bound of Open Connections

What’s the maximum number of connections a browser will open? This is relevant as server administrators prepare for spikes from browsers with increased parallelization.

This Max Connections test page contains 180 images split across 30 hostnames. That works out to 6 images per hostname. To determine the upper bound of open connections a browser supports, I loaded this page and counted the number of simultaneous requests in a packet sniffer. Firefox 1.5 and 2.0 open a maximum of 24 connections (2 connections per hostname across 12 hostnames). This limit is imposed by Firefox’s network.http.max-connections setting, which defaults to a value of 24.
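
For the curious, a page like that takes only a few lines of JavaScript to generate. This is just a sketch; hostN.example.com stands in for the real hostnames:

    // Sketch of a max-connections test: request 6 images from each of
    // 30 hostnames (180 total) and count the open connections in a
    // packet sniffer. The query string makes each URL unique so every
    // image is a real network request rather than a cache hit.
    var HOSTNAMES = 30, IMAGES_PER_HOST = 6;
    for (var h = 1; h <= HOSTNAMES; h++) {
      for (var i = 0; i < IMAGES_PER_HOST; i++) {
        var img = new Image();
        img.src = "http://host" + h + ".example.com/image.gif?copy=" + i;
        document.body.appendChild(img);
      }
    }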

In IE 6, 7 & 8 I wasn’t able to determine the upper bound. At 2 connections per server, IE 6&7 opened 60 connections in parallel. At 6 connections per server, IE8 opened 180 connections in parallel. I’d have to create more domain names than the 30 I already have to find where IE maxes out. (If you load this in other browsers, please post your results in a comment below and I’ll update this text.)


Effect of Proxies

Note that if you’re behind a proxy (at work, etc.), your download characteristics change. If web clients behind a proxy issued too many simultaneous requests, an intelligent web server might interpret that as a DoS attack and block that IP address. Browser developers are aware of this issue and throttle back the number of open connections when a proxy is in use.

In Firefox the network.http.max-persistent-connections-per-proxy setting has a default value of 4. If you try the Max Connections test page while behind a proxy, it loads painfully slowly, opening no more than 4 connections at a time to download 180 images. IE8 drops back to 2 connections per server when it’s behind a proxy, so loading the Max Connections test page shows an upper bound of 60 open connections. Keep this in mind if you’re comparing notes with others: if you’re at home and they’re at work, you might be seeing different behavior because of a proxy in the middle.
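
For those who want to experiment, the Firefox prefs can also be set in a user.js file. A sketch using the values mentioned in this post (not recommendations):

    // Firefox prefs governing parallel connections, set via user.js or
    // about:config. 6 per server is Firefox 3's value, 4 per proxy and
    // 24 total are the defaults mentioned in this post.
    user_pref("network.http.max-persistent-connections-per-server", 6);
    user_pref("network.http.max-persistent-connections-per-proxy", 4);
    user_pref("network.http.max-connections", 24);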


Will This Break the Internet?

Much of the debate in the blog comments has been about how IE8’s increase in the number of connections per server might bring those web servers to their knees. I found the most insightful comments in Mozilla’s Bugzilla discussion about increasing Firefox’s number of connections. In comment #22 Boris Zbarsky lays out a good argument for why this increase will have no effect on most servers. But in comment #23 Mike Hommey points out that persistent connections are kept open for longer than the life of the page request. This last point scares me. As someone who has spent many hours configuring Apache to find the right number of child processes across banks of servers, I’m not sure what impact this will have.

Having said that, I’m pleased that IE8 has taken this step, and I’d be even happier if Firefox followed suit. This change in the client will improve page load times from the user’s perspective. It does put the onus on backend developers to watch closely as IE8 adoption grows to see if it affects their capacity planning. But I’ve always believed that part of the responsibility and joy of being a developer is doing extra work on my side that can improve the experience for thousands or millions of users. This is another opportunity to do just that.

36 Responses to Roundup on Parallel Connections

  1. FWIW, HTTP 1.0 CAN support persistent connections and the major browsers fully support it; it’s just that most servers didn’t. Using HTTP 1.0 with persistent connections allows AOL to get 4 parallel persistent connections out of browsers that would normally only establish 2.

    You lose the ability to use ETags or other 1.1-specific headers, but it works great for static objects from a CDN.

  2. Try lighttpd. It doesn’t seem to have problems even with crazy numbers of persistent connections.

  3. Anyone have data measuring load times for _real users_ downloading pages with several (8+) static resources on HTTP/1.0 vs. HTTP/1.1?

  4. […] do subdomain hacks, css sprites, etc. to deal with this. I’m psyched. Steve Souders also has a good wrap-up trying to answer the question of whether this will break the internet. Posted by Wayne on Friday, […]

  5. Server architectures (e.g. Apache) which are thread-per-connection have a harder time scaling with large numbers of keepalives. Servers which use select/poll loops (e.g. Zeus) usually do not have this problem.

    Zeus wrote a nice paper about the effect of keepalives on Apache: http://www.zeus.com/documents/en/ac/acceleration_apache.pdf

    The paper is really about their proxy product, but ignore that; the analysis of how keepalives affect server performance is sound.

  6. Pseudo-parallel feeds such as multiple ‘threads’ for XMLHttpRequest don’t help much unless they are fed from different domains. Even then they depend on there being excess serial capacity in the physical network link towards the client end.

    There is but one serial physical network path into the client’s browser, even if you choose to multiplex that into multiple packet feeds, by whatever method.

    The web, in spite of broadband, is still, and will always be, a serial network, subject to the speed limit of the slowest link, often grinding down below dial-up speeds.

    Add a heavy application, whether it’s an overweight AJAX library, Flash or Java, and you’re lost for a very long time between where you’ve been and where you might get. This gets very boring after about 10 seconds.

    Users simply bail out altogether before about 20 seconds.

    The only real solution is to make your web apps small. Then test them on dial-up bandwidth!

    This isn’t going to change anytime soon. Serial links are not scalable, except by bandwidth improvements or compression improvements of orders of magnitude. Although that’s happening, demand for bandwidth is growing even quicker.

    Many designers get lulled into a false sense of satisfaction with loading times because they work in a situation where they have excess serial capacity connecting them to the net.

    Go down to your local library or campus where several hundred PCs share a link to the net and see if you’re still satisfied, and whether 2 or 4 or 6 or even 20 virtual connections make any difference!

    When you’ve got past the great long boring wait, remember that most people see (or don’t see) your latest and greatest wonderful but overweight app that way.

  7. You list max connections per server for Opera 9 as 4 but their site says 8:
    http://www.opera.com/support/usingopera/operaini/index.dml#performance
    Is that 8 for HTTP 1.0?

  8. tad – I created a Cuzillion page to test this:

    https://stevesouders.com/cuzillion/?c0=bi2hfff2_0&c1=bi2hfff2_0&c2=bi2hfff2_0&c3=bi2hfff2_0&c4=bi2hfff2_0&c5=bi2hfff2_0&c6=bi2hfff2_0&c7=bi2hfff2_0&c8=bi2hfff2_0

    On Opera 9.27, the default values (“Max Connections Server” is 8) download 4 resources in parallel for a single hostname. If you uncheck “Reduce Max Persistent HTTP Connections”, it drops to 2 in parallel. If I recheck that and raise “Max Connections Server” to 16, it downloads 8 in parallel. Perhaps this is a terminology gap, but the default behavior is 4 in parallel.

  9. I’ve been testing a site and have seen 4 concurrent connections in IE6 to the same domain (there’s no sharding or CDN used; everything comes from the app server).

    To clarify, in the waterfall chart (I’ve checked w. Charles and Fiddler) I see 4 requests going out for images that overlap. The same site in IE7 uses just 2 connections.

    The requests are HTTP 1.1. I’ve checked my registry and afaict it’s factory default – no tweaks to MaxConnectionsPerServer REG_DWORD (or MaxConnectionsPer1_0Server REG_DWORD).

    I’m not sure if this is a test environment anomaly, or a clever trick I need to know about!

  10. @Sam: Can you provide a URL?

  11. @Steve,

    I’m looking for a forum/group/mailing-list to discuss this stuff, do you have/know of one?

    /Sam

  12. @Sam: I tested that URL in IE and only see two parallel connections. Keep in mind that Charles and Fiddler are proxies, and change the download behavior of browsers. (See the section above on “Effect of Proxies”.) You should subscribe to Exceptional Performance in Yahoo! Groups. It’s a great mailing list for these types of questions.

  13. @Steve: thank you and thank you. I’m not clear how/why a proxy would have this effect on IE6, but the fact that uncertainty exists there at all is enough. I’m signed up to the Exceptional Perf. group, see you there.

  14. I found that using Fiddler actually defeats adding more concurrent connections via subdomains.

    I ran into this in my debugging. I had a web app that sends requests to two subdomains for the purpose of increasing concurrent connections to 4 for IE7. But in Fiddler, I found the max concurrent connections were only 2, meaning two requests to a.foo.com would block a request to b.foo.com.

    My interpretation of this phenomenon is that Fiddler serves as a proxy to IE7. When Fiddler is used, all HTTP requests, no matter which domains they are sent to, are forced to the same Fiddler proxy destination; therefore, they are all counted towards the 2 max concurrent connections limit.

  15. Thanks Steve for all these wonderful posts and the HPWS book.

    In the post above, when you say “parallel connections”, it’s actually “parallel persistent connections” – right?

    Firefox 3.5 allows max 15 connections per server (the “Connection: close” type), and max 6 “persistent” connections per server (“Connection: keep-alive”). Since the max number of non-keep-alive connections is higher, is there a strong reason to go for persistent connections?

    If I serve 15 resources from a domain for my webpage, getting them all in parallel with “Connection: close” should be faster than fetching them 6 at a time with keep-alive TCP connections. Isn’t it?

  16. @Manjusha: Yes, Firefox 3+ allows 6 _persistent_ connections per hostname. You generally want persistent connections. It’s likely slower to establish 15 connections than to re-use 6 connections. Exact results depend on the user’s network conditions. At some point, no matter how many connections you open in parallel, there’s a limit to the available bandwidth capacity. Also, latency and packet loss come into play.

  17. Hey Steve.

    I read this part of your above post ….

    “Using multiple domain names, such as 1.mydomain.com, 2.mydomain.com, 3.mydomain.com, etc., allows a web developer to achieve a multiple of the per server connection limit. This works even if all the domain names are CNAMEs to the same IP address.”

    …. and thought – what a simple but wonderful approach. I’ve been using subdomains to serve images & assets, but the above approach – particularly using multiple sub-domains w/ CNAME – sounds much more efficient. However, I’m having some trouble configuring it on my server, so I just wanted to clarify my understanding of it.

    1) Can I avoid having to physically add sub-domains to my primary domain by using CNAMEs to the domain IP?

    2) So if 1.mydomain.com is a CNAME to mydomain.com, can I link a file as 1.mydomain.com/filename when it actually physically resides at mydomain.com/images?

    3) Does the CNAME have to be an IP, or can it be the domain name?

  18. Or if anyone else can pass some advice, please do!

  19. @Cracks: The answers depend on your domain registrar and hosting service.

  20. The max-connections page only works for me in FF. All other browsers are mostly showing broken images (“Service Unavailable” according to Safari’s Activity Window). Anyone know why? Is FF perhaps retrying?

  21. @Christian: It appears my web hosting company has put a connection limit on my server. I’ll have to find another server to host resources. Stay tuned.

  22. By any chance does anyone know the limit of connections when using iPhones? Max connections per host, but also the upper limit on max connections?

    This webpage is great. I am glad I found this resource cited in research papers :-D

  23. You can see that info for iPhone and hundreds of other browsers in Browserscope.

  24. @Steve,
    Thank you very much for your reference to Browserscope. Surprisingly, most of these OS- and browser-specific details have a very big impact on wireless browsing; wireless networks in general are optimized neither to provide very large instantaneous throughput nor to handle so many TCP connections per client for each web browsing session.

    Regards
    -Andreas

  25. In the post above, when you say “parallel connections”, it’s actually “parallel persistent connections” – right?

    Firefox allows max 15 connections per server (the “Connection: close” type), and max 6 “persistent” connections per server (“Connection: keep-alive”). Since the max number of non-keep-alive connections is higher, is there a strong reason to go for persistent connections?

    If I serve 15 resources from a domain for my webpage, getting them all in parallel with “Connection: close” should be faster than fetching them 6 at a time with keep-alive TCP connections. Isn’t it?

  26. @John Rossen: There’s overhead in establishing a new connection. The studies I did at Yahoo! found that too many simultaneous connections worsen performance. In most cases, it would be better to use fewer persistent connections.

  27. Hey Steve,

    Almost two years ago, HTTPbis removed the connection limit, after discussion with browser vendors:
    http://trac.tools.ietf.org/wg/httpbis/trac/ticket/131

    The problem with using too many connections isn’t really server load; dealing with large numbers of connections server-side is fairly well understood these days. The real problem is the effect on congestion control. E.g., if your browser uses 8 connections each to 10 sharded hosts, and the initial congestion window is a conservative 3, about a third of a megabyte (80 connections × 3 segments × ~1,460 bytes ≈ 350 KB) can be in flight all at once, causing havoc for other applications.

    Jim Gettys was railing on this subject this week at IETF Prague; in combination with bufferbloat, it is unfriendly to other applications, as well as potentially worsening browsing performance.

    Off to bed here,

  28. Hah – just realised this is a really old post (about two years old!). You mentioned it on twitter and I assumed it was new. D’oh.

  29. @steve
    In IE6, would these 3 URLs be considered distinct from the max-connections perspective?

    http://server1.xx?cgi (which maps to 199.199.199.199)
    http://199.199.199.199?cgi
    http://?cgi (from an HTML page which is on server1.xx)

    Thanks.

  30. @MNot: My attempt to highlight good blog posts from the past. ;-)

    @Jean-Philippe: I’m pretty sure requests 1 & 3 would be considered the same hostname.

  31. I have a custom media server whose job is to source MJPEG video streams from multiple cameras. It is intended for viewing several dozen cameras simultaneously at high frame rates in a single page. It works in the latest Firefox with 8 cameras simultaneously in a single page, even when I open several additional video streams. But there is blocking in Chrome and Safari when I try to open additional pages using the same instance of the browser. There is a continuous connection for each camera, since the video is live.

    Is the problem with Chrome and Safari a result of the limit on the maximum number of connections per server? The table shows Firefox supports only 6, yet more than 8 worked.

  32. If the problem is due to the maximum number of connections per server, how can I circumvent it? Can I have multiple ports to access a given media server, and distribute the load from a given page over them? Would using different port numbers make my media server appear to be multiple servers?

    Isn’t Firefox enforcing the limit?

  33. Ah hah!

    I forced my test page to give only 4 live streams instead of 8, and it works in Chrome and Safari. I then opened 2 additional streams, for a total of 6, and it’s OK. But a 7th will not open. So Chrome and Safari are enforcing the 6-connection limit. I double-checked this by closing one active stream, at which point the blocked 7th window showed video.

    How do I defeat this limit???

  34. @Boundless: You could try using different hostnames such as www1., www2., etc.

  35. BTW, great round-up of in-the-wild browser connection limits at http://www.browserscope.org/?category=network&v=1