Performance Impact of CSS Selectors
A few months back there were some posts about the performance impact of inefficient CSS selectors. I was intrigued – this is the kind of browser idiosyncratic behavior that I live for. On further investigation, I’m not so sure that it’s worth the time to make CSS selectors more efficient. I’ll go even farther and say I don’t think anyone would notice if we woke up tomorrow and every web page’s CSS selectors were magically optimized.
The first post that caught my eye was about CSS Qualified Selectors by Shaun Inman. This post wasn’t actually about CSS performance, but in one of the comments David Hyatt (architect for Safari and WebKit, also worked on Mozilla, Camino, and Firefox) dropped this bomb:
The sad truth about CSS3 selectors is that they really shouldn’t be used at all if you care about page performance. Decorating your markup with classes and ids and matching purely on those while avoiding all uses of sibling, descendant and child selectors will actually make a page perform significantly better in all browsers.
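In selector terms, Hyatt’s advice boils down to something like the following contrast (the class and id names here are hypothetical, for illustration only):

```css
/* Matching purely on a class – cheap: the engine checks a single
   attribute on the element itself */
.nav-link { color: #036; }

/* Descendant selector – potentially costly: for every <a> on the page,
   the engine may walk up the ancestor chain looking for an element
   with id="nav", then a <ul>, then an <li> */
#nav ul li a { color: #036; }
```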
Wow. Let me say that again. Wow.
The next posts were amazing. It was a series on Testing CSS Performance from Jon Sykes in three parts: part 1, part 2, and part 3. It’s fun to see how his tests evolve, so part 3 is really the one to read. This had me convinced that optimizing CSS selectors was a key step to fast pages.
But there were two things about the tests that troubled me. First, the large number of DOM elements and rules worried me. The pages contain 60,000 DOM elements and 20,000 CSS rules. This is an order of magnitude more than most pages. Pages this large make browsers behave in unusual ways (we’ll get back to that later). The table below has some stats from the top ten U.S. web sites for comparison.
| Web Site    | # CSS Rules | # DOM Elements |
|-------------|-------------|----------------|
| AOL         | 2289        | 1628           |
| eBay        | 305         | 588            |
| Facebook    | 2882        | 1966           |
| Google      | 92          | 552            |
| Live Search | 376         | 449            |
| MSN         | 1038        | 886            |
| MySpace     | 932         | 444            |
| Wikipedia   | 795         | 1333           |
| Yahoo!      | 800         | 564            |
| YouTube     | 821         | 817            |
| average     | 1033        | 923            |
The second thing that concerned me was how small the baseline test page was, compared to the more complex pages. The main question I want to answer is “do inefficient CSS selectors slow down pages?” All five test pages contain 20,000 anchor elements (nested inside P, DIV, DIV, DIV). What changes is their CSS: baseline (no CSS), tag selector (one rule for the A tag), 20,000 class selectors, 20,000 child selectors, and finally 20,000 descendant selectors. The last three pages top out at over 3 megabytes in size. But the baseline page and tag selector page, with little or no CSS, are only 1.8 megabytes. These pages answer the question “how much faster would my page be if I eliminated all CSS?” But not many of us are going to eliminate all CSS from our pages.
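For concreteness, the five variants use rules shaped roughly like these (the class name is illustrative, not the exact markup from the test pages):

```css
/* baseline: no CSS rules at all */

/* tag selector: a single rule for all anchors */
a { color: #333; }

/* class selectors: one rule per anchor, keyed on a unique class */
.a0001 { color: #333; }

/* child selectors: each anchor matched through its exact parent chain */
div > div > div > p > a.a0001 { color: #333; }

/* descendant selectors: the ancestors may be anywhere up the tree */
div div div p a.a0001 { color: #333; }
```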
I revised the test as follows:
- 2000 anchors and 2000 rules (instead of 20,000) – this actually results in ~6000 DOM elements because of all the nesting in P, DIV, DIV, DIV
- the baseline page and tag selector page have 2000 rules just like all the other pages, but these are simple class rules that don’t match any classes in the page
I ran these tests on 12 browsers. Page render time was measured with a script block at the top and bottom of the page. (I loaded the page from local disk to avoid possible impact from chunked encoding.) The results are shown in the chart below. (I don’t show Opera 9.63 – it was way too slow – but you can download all the data as csv. You can also see the test pages.)
Performance varies across browsers; strangely, two new browsers, IE 8 and Firefox 3.1, are the slowest. But comparisons should not be made from one browser to another. Although all the tests for a given browser were conducted on a single PC, different browsers might have been tested on different PCs with different performance characteristics. The goal of this experiment is not to compare browser performance – it’s to see how browsers handle progressively more complex CSS selectors.
[Revision: On further inspection comparing Firefox 3.0 and 3.1, I discovered that the test PC I used for testing Firefox 3.1 and IE 8 was slower than the other test PCs used in this experiment. I subsequently re-ran those tests as well as Firefox 3.0 and IE 7 on PCs that were more consistent and updated the chart above. Even with this re-run, because of possible differences in test hardware, do not use this data to compare one browser to another.]
Not surprisingly, the more complex pages (child selectors and descendant selectors) usually perform the worst. The biggest surprise is how small the delta is from the baseline to the most complex, worst performing test page. The average slowdown across all browsers is 50 ms, and if we look at the big ones (IE 6&7, FF3), the average delta is just 20 ms. For 70% or more of today’s users, improving these CSS selectors would only make a 20 ms improvement.
Keep in mind – these test pages are close to worst case. The 2000 anchors wrapped in P, DIV, DIV, DIV result in 6000 DOM elements – twice as big as the max in the top ten sites. And the complex pages have 2000 extremely inefficient rules – a typical site has around one third of its rules that are complex child or descendant selectors. Facebook, for example, has the maximum number of rules at 2882, but only 750 of them are these extremely inefficient kinds.
Why do the results from my tests suggest something different from what’s been said lately? One difference comes from looking at things at such a large scale. It’s okay to exaggerate test cases if the results are proportional to common use cases. But in this case, browsers behave differently when confronted with a 3 megabyte page with 60,000 elements and 20,000 rules. I especially noticed that my results were much different for IE 6&7. I wondered if there was a hockey stick in how IE handled CSS selectors. To investigate this I loaded the child selector and descendant selector pages with an increasing number of anchors and rules, from 1000 to 20,000. The results, shown in the chart below, reveal that IE hits a cliff around 18,000 rules. But when IE 6&7 work on a page that is closer to reality, as in my tests, they’re actually the fastest performers.
Based on these tests I have the following hypothesis: For most web sites, the possible performance gains from optimizing CSS selectors will be small, and are not worth the costs. There are some types of CSS rules and interactions with JavaScript that can make a page noticeably slower. This is where the focus should be. So I’m starting to collect real world examples of small CSS style-related issues (offsetWidth, :hover) that put the hurt on performance. If you have some, send them my way. I’m speaking at SXSW this weekend. If you’re there, and want to discuss CSS selectors, please find me. It’s important that we’re all focusing on the performance improvements that our users will really notice.
Andreas | 10-Mar-09 at 11:45 pm | Permalink |
Thanks for another nice post Steve.
The last sentence made me start thinking: if one has performance problems with their site, I doubt it’s because of the CSS selectors. Following your YSlow rules would probably be a big first step for most sites.
Michael Lee | 11-Mar-09 at 5:40 am | Permalink |
Steve,
Great post!
I hope this makes it on to Ajaxian where more of the developers that need to know this are.
I read many of the CSS Selector posts in the past months and had my view shaped by those posts. It’s refreshing to see your unbiased analysis focused usefully on the end-user impact.
Great work. Please submit this to Ajaxian if you haven’t already. I’m interested to see if there are any rebuttals to your hypothesis.
Tim Kadlec | 11-Mar-09 at 6:31 am | Permalink |
Fantastic analysis Steve!
I’m both pleased and impressed to see you analyze the tests in more detail to be able to get a better handle on exactly what performance gains and losses are incurred.
Too often people (me included) just take these stats at face value and don’t take the time to see how they have been created, and what variables may be affecting them in ways we didn’t expect.
Looking forward to your talk at SXSW!
Vladimir Carrer | 11-Mar-09 at 7:22 am | Permalink |
Wow! I was searching for a post like this! I wanted to micro-optimize my CSS Frameworks. Thanks for the hint!
Sam Foster | 11-Mar-09 at 7:49 am | Permalink |
Very interesting read. We’ve been doing similar analysis for a client recently, and our results very much agreed with what you describe here. The rumour that you can make a significant performance improvement by avoiding deep descendant selectors seems exaggerated at best – when you’ve reached the point where that’s your most pressing performance problem, you should probably take the time off for some R&R.
We found that what style you apply (the declaration block) was more significant than the selectors used.
We also saw that IE7 and IE8 were each slower than IE6 at applying CSS. Firefox (2,3) too had its hockey stick – with performance degrading steeply as the number of nodes in the DOM grew.
WebKit and Safari’s performance characteristics were sufficiently different that at first I questioned my test method. Their curves are very level.
There was more; I’ll try and get a blog post up to describe it. It’s great to see the interest in this work though – CSS optimization has been a “compress it” one-liner for too long.
Boris | 11-Mar-09 at 9:20 am | Permalink |
The real concern with the CSS3 selectors that Hyatt mentions is not page-load time. Browsers optimize page load heavily; Gecko has various optimizations that make appending to the end of the document fast even when various “slow” selectors are present. It’s DOM mutation that suffers from such selectors the most, and your tests don’t test that.
Now there _are_ selectors that are hard to optimize and can affect pageload: the :last-child, :nth-last-of-type(), etc selectors. Those pretty much require some extra work to be done on appends. But the amount of work scales purely with document size, at least in Gecko, not with number of selectors.
More precisely, there are selectors that are slow because they require style to be recomputed for a large chunk of the document; for these selectors all that matters is whether one is present, not how many of them there are. Then there are selectors that are slow because matching them is slow. An example might be:
[*|foo] [*|foo] [*|foo]
especially if some of the nodes in your document actually have an attribute named “foo” and if they tend to have lots of attributes. For these selectors what matters is how many of them affect the parts of the document that are being restyled. What part that is depends on the slow selectors as described above, of course.
Now the tests here don’t really use either of these two kinds of selectors; they only use selectors that are very easy to make very fast and that existing browsers optimize carefully. Hence the results: doesn’t much matter what kind of fast selectors you use…
M. David Green | 11-Mar-09 at 9:54 am | Permalink |
Fascinating read. Where I work, we’ve been debating the impact of CSS design on performance for our web applications. In the real world, it always felt to me that inefficient CSS is more of a strain on development resources and maintenance than it is on client-side performance.
I’m confident Firefox 3.1 (and even IE8) will catch up, to the extent that it matters. And one of these years, when we can rely on support for CSS3 selectors in the browsers used by the majority of our audience, it’s nice to have a little tangible evidence that we will be able to take advantage of the new options comfortably.
Douglas Clifton | 11-Mar-09 at 1:44 pm | Permalink |
Based purely on instinct I tend to use selectors that are as minimal as possible w/o causing unwanted side effects. The most obvious example of this are ids, since they must be unique per document, followed by classes and descendant selectors.
In my mind, there is a line between overuse of ids and classes and relying on parent and sibling relationships. You certainly wouldn’t want to put an id on every element – that would be silly.
These same observations and practices hold true (for me at least) when using a JavaScript library like jQuery to locate a DOM node using the same (or enhanced) syntax.
Nice work as usual Steve. I hope you enjoy Austin and SXSW this weekend.
kL | 11-Mar-09 at 3:29 pm | Permalink |
Your test may be flawed because of too short a time span.
“Testing JavaScript performance on Windows XP (Update: and Vista) is a crapshoot, at best.”
http://ejohn.org/blog/accuracy-of-javascript-time/
Also, you’re not checking if browser actually applied the rules!
You *must* read and verify correctness of some CSS-dependent property (like offsetHeight), because browser may have asynchronous/multithreaded rendering with JS running before CSS rules are applied!
Isaac Z. Schlueter | 11-Mar-09 at 3:30 pm | Permalink |
> strangely, two new browsers, IE 8 and Firefox 3.1, are the slowest.
That’s not all that strange, actually. Firefox 3.1 and IE 8 are both still in a very beta/pre-release state. (IE8 is on RC1, I believe, so it’s a bit further along.)
If you’re Doing It Right, then your beta *should* perform less quickly than your release version, because optimizations should always be done as late as possible. The exception, of course, is if the focus of the new version is to optimize for speed.
Boris | 11-Mar-09 at 8:02 pm | Permalink |
Steve, Firefox 3.1 doesn’t look like “one of the slowest” on that fixed graph…
Nicole Sullivan | 11-Mar-09 at 10:09 pm | Permalink |
Hi Steve,
Micro-optimization of selectors is going a bit off track in a performance analysis of CSS. The focus should be on scalable CSS.
There are two ways to measure O(n) in CSS. What happens as you add more pages and modules to the site? Then you measure both file size and HTTP requests. The way you write selectors has a huge impact on both of these that is much more significant than the time it takes the browser to process a reasonable number of selectors (even deeply nested).
Check out the minuscule file size of template and grids in the object-oriented CSS open source GitHub project. They offer a significant performance improvement, and they are dead simple to extend in a performant manner.
http://wiki.github.com/stubbornella/oocss
See you at SXSW!
Cheers,
Nicole
Steve Souders | 11-Mar-09 at 10:30 pm | Permalink |
@kL: The 15 ms granularity of JavaScript timers is only an issue if the deltas are in the 15 ms range, which means we’ve already proven my point – the delta of using complex CSS selectors is small. Also, the style (blue background) appears before the time value is written, so the styling is included.
@Boris: You caught me in the middle of a correction. The original stats for Firefox 3.1b2 and IE 8 were on a slower test PC and so comparing them to other browsers was a mistake. I WAS WRONG TO COMPARE THE DATA ACROSS BROWSERS. I got distracted by the pretty chart – the main point of this blog post is to compare the baseline page to the child and descendant pages. I subsequently re-ran those tests, updated the chart, and updated the text. And the data still supports my main conclusion: CSS selectors don’t have much effect. Your first comment has great info – it sounds like you agree that CSS selectors have little effect on page loading, but more on how CSS and JS interact during interaction with the page (or as you say “DOM mutation”). Do you know – can these types of child and descendant selectors affect performance during “DOM mutation”, or is it just the more complex ones you mention? It would be great if you had examples of how child, sibling, and descendant selectors affect performance during DOM mutation.
Boris | 11-Mar-09 at 11:12 pm | Permalink |
Offhand, I doubt I can create reasonably simple testcases in which descendant and sibling selectors have much of an effect unless you have a whole bunch of them and lots and lots of DOM nodes. And not just in a linear list, but deep nesting, etc… Even then, browsers try really hard to optimize this stuff. Any page that does show a problem in that sort of situation would likely need to have tens of thousands of nodes.
Steve Souders | 11-Mar-09 at 11:26 pm | Permalink |
@Boris: Then is this doc out-of-date (https://developer.mozilla.org/en/Writing_Efficient_CSS)? Here David Hyatt recommends avoiding sibling, child, and descendant selectors for performance reasons. I see this doc referenced frequently wrt CSS selector performance.
Daniel Glazman | 11-Mar-09 at 11:42 pm | Permalink |
Nothing new here. The following page – by Hyatt too – dates from 2000: https://developer.mozilla.org/en/Writing_Efficient_CSS
and we have always known that potential perf hit in the CSS WG ; it’s just not a reason to stop progress on the selectors’ side.
Daniel Glazman (W3C CSS WG Co-chair)
Andrew Bate | 12-Mar-09 at 1:14 am | Permalink |
I’ve been chasing an IE7 performance issue around for a while and the one passing reference you made to ‘hover’ pseudo-selectors seems to be the root of my problem.
We’re doing an intensive rich-client app with many, many drag-and-drop operations going on. IE7 performance has been shocking, and unusable beyond a certain point. When I disabled the few CSS hover selectors that were active, the application improved dramatically.
Thanks for another great post!
Daniel | 12-Mar-09 at 4:24 am | Permalink |
Maybe I missed the point, but the last parts of your selectors all have unique class names – so in most cases the check is eliminated by the class name (remembering that CSS rules are evaluated backwards, from right to left). So I don’t think they’re very good tests of descendant selectors.
Maybe something like ‘p.class0001 a’ would be a better test? For example:
http://www.calamity.org.uk/x/child.html
http://www.calamity.org.uk/x/descendant.html
The descendant examples should in theory be slower, because they’ll have to continue checking up the tree while the child ones only have to check the immediate parent.
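Daniel’s point, sketched as CSS (with hypothetical class names): selector engines match from the rightmost “key” part leftward, so where the unique class sits changes how much work each element costs.

```css
/* Key (rightmost) part carries a unique class: matching fails fast for
   every element lacking that class, so ancestors are rarely examined */
div div p a.a0001 { color: #333; }

/* Key part is a bare <a>: every anchor on the page forces the engine
   to walk its ancestors looking for p.class0001 */
p.class0001 a { color: #333; }
```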
sunnybear | 12-Mar-09 at 7:47 am | Permalink |
Hi, Steve. I’ve investigated this problem about a year ago, but didn’t publish any articles in English. So there are some in Russian (graphics inside):
http://webo.in/articles/habrahabr/19-css-efficiency-tests/
http://webo.in/articles/habrahabr/25-css-efficiency-tests-2/
http://webo.in/articles/habrahabr/38-css-efficiency-tests-3/
http://webo.in/articles/habrahabr/53-semantic-dom-tree/
Also there is a number of CSS performance tests here
http://webo.in/tests/
CSS performance optimization for common web pages can gain about 10–20 ms per page (up to 50 ms) — but this is very small for the usual website. Only large web applications can be improved this way. But they usually suffer from another problem — memory leaks in IE (for large DOM trees :).
Guy | 14-Mar-09 at 3:21 pm | Permalink |
>> For 70% or more of today’s users, improving these CSS selectors would only make a 20 ms improvement.
I don’t see how you can make a blanket statement like that without considering the PC hardware. 20 ms on your PC is not 20 ms on ANY PC. You didn’t even mention what hardware you were testing on. Maybe it is faster than 99% of users’ PCs.
For a true analysis of the real world impact on users, you can’t just ignore the hardware, which these days could be anything from an iPhone to a 3GHz+ quad core.
Maybe the difference in some cases is an order of magnitude more…
Jon Zuck | 22-Mar-09 at 7:18 am | Permalink |
Interesting and heartening post. I, for one, generally prefer using descendant selectors intelligently over compulsively ID-ing every single element in the HTML.
That said, I tend to be as specific as possible, and try to avoid declaring any style I don’t need. This is especially critical in resets. Even in Eric Meyer’s resets, box properties are stripped from inline elements that don’t even HAVE box properties. And for the old-school
* {margin:0; padding:0;}
reset, on several occasions I’ve seen it actually crash IE6!
Hardy | 07-Apr-09 at 6:34 am | Permalink |
That’s some pretty nice stuff for every CSS developer. Thank you!
giochi gratis | 07-Apr-09 at 8:49 am | Permalink |
WOW! Thanks, this post is very helpful!
mike1965 | 12-May-09 at 5:53 am | Permalink |
I checked the Chrome 2.0 browser. It’s not fast, it’s slow. You should first try to just style tags, then simple class selectors (.class). Descendant selectors (div p) are slower than simple class selectors, and child selectors are slower still.
tomsheeder | 18-May-09 at 2:13 am | Permalink |
we optimize our micro CSS stylesheets for different browsers. this post is very helpful for us, thanks a lot
namensschilder | 22-May-09 at 1:09 am | Permalink |
we optimized our website for IE 8.0 but the performance is not perfect. we will optimize for Firefox in the future
tabuforyou | 01-Jun-09 at 9:28 am | Permalink |
why is IE8 so much slower than IE7 using CSS selectors?
Steve Souders | 01-Jun-09 at 11:45 am | Permalink |
@tabuforyou: In the chart, IE7 and IE8 are almost identical. Also, the hardware wasn’t controlled for different browsers. As mentioned in the article, this data should not be used to compare one browser to another – instead, it’s for comparing one type of selector to another (for a given browser). A more controlled experiment is needed for comparing selector performance across browsers. That’s coming soon (1-2 months).
Module23 | 09-Jun-09 at 1:51 am | Permalink |
Perfect analysis. Thanks for collecting and sharing. But why is ie8’s performance so bad? Are there any reasons?
Krün | 13-Jun-09 at 9:47 am | Permalink |
IE 8 is the fastest browser. Please look at the statistics.
testkauf | 16-Jun-09 at 5:24 am | Permalink |
i think firefox optimization is still the best
ytzong | 18-Jun-09 at 10:49 pm | Permalink |
I translated this post to Chinese:
http://www.99css.com/2009/06/performance-impact-of-css-selectors.html
Preseo | 01-Jul-09 at 7:56 am | Permalink |
I’m not so sure that IE8, after the release of Firefox 3.5, is still the fastest browser… And even if so: FF is much better concerning usability, add-ons, customization (i.e. skins), etc.
Hemnath | 02-Jul-09 at 2:11 am | Permalink |
Good post. If you went into browser compatibility issues in detail, it would be even more helpful for us. Keep up your great work and help others. Thanks.
mynthon | 24-Jul-09 at 2:04 am | Permalink |
well – i think entire CSS was invented to avoid styling like .footer-contact, .footer-copyright, .footer-copyright-date and to use
#footer .contact; #footer .copy; #footer .copy span instead. The whole thing about inefficient CSS rules (i think) started with the Google “Page Speed” extension.
People often forget that page rendering depends not only on browser speed but also connection speed, download size, number of requests.
Well – on my CV I can use optimized rules, but it is too small to see a difference. I could use them on my corp site, but it is too big to add thousands of classes – I would gain 30 ms on rendering CSS but lose 2 seconds on CSS downloading. Maintenance is also a cost, and with thousands of classes it would be very expensive.
Kinderbücher | 03-Sep-09 at 3:58 am | Permalink |
I must say, that is a really good analysis.
I’m impressed to see you analyze the tests to get a better handle on exactly what performance gains and losses are incurred.
Frederick Townes | 13-Sep-09 at 6:14 am | Permalink |
Thanks as always Steve, this post definitely saved me some time in avoiding diminishing returns on a few projects.
Tim | 08-Oct-09 at 10:50 am | Permalink |
Absolutely wonderful. Thanks for posting this. I’m glad to see this kind of data after reading a bit of harassment of certain CSS3 usage.
Viktor Persson | 29-Nov-09 at 7:27 am | Permalink |
Hi. I am currently reading your book “Even Faster Web sites” and have come to a situation where I’m not quite sure what’s the best solution for optimal performance (page loading time).
Say that you have a tag cloud with 200 links. Would you prefer using the rule #tag-cloud > a OR .tag-cloud-link for applying properties to these links?
Adding the extra link class results in 4600 extra characters in the HTML output.
Steve Souders | 29-Nov-09 at 11:32 am | Permalink |
@Viktor: It depends on the styling. If the desired styles are inherited, I would do it in #tag-cloud. If the styling is not inherited and the page has only a few hundred A tags, I would do #tag-cloud > A. If the page has thousands of A tags, I would use the class.
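A minimal sketch of the three options, using the names from Viktor’s question (the property choices are illustrative):

```css
/* inherited properties (e.g. color): set once on the container and
   let inheritance carry them to every link */
#tag-cloud { color: #36c; }

/* non-inherited properties, a few hundred anchors: a child selector */
#tag-cloud > a { text-decoration: none; }

/* thousands of anchors: a class per link keeps matching trivial, at
   the cost of heavier markup */
.tag-cloud-link { text-decoration: none; }
```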
Viktor Persson | 29-Nov-09 at 1:42 pm | Permalink |
Many thanks for your quick answer. The site won’t have more than 100 links in total so I guess I’ll go for the #tag-cloud > a.
Zach | 16-Feb-10 at 1:06 am | Permalink |
Great article. Had been wondering about this for quite awhile.
Dave Artz | 22-Feb-10 at 7:59 am | Permalink |
Steve, could you publish and link to your test page scripts? I’d like to run these benchmarks on our reference machine (3GHz P4 512MB).
What machine config did you run these tests on?
Steve Souders | 22-Feb-10 at 9:38 am | Permalink |
@Dave: Hi! There’s a link to the “test pages” above, but here it is again: https://stevesouders.com/tests/css-selectors/index.html
I ran these on my ThinkPad x61 on Windows XP. I don’t have that laptop any more, so don’t have memory, CPU, etc. info for you.
stoimen | 10-Mar-10 at 1:45 am | Permalink |
Whatever the size of the gain is, isn’t it good practice to optimize whatever you can?
OK, it may not be the most important part of a web site optimization process, but it doesn’t slow down the page, does it?
Buckthorn | 30-Mar-10 at 2:38 am | Permalink |
Thanks for the article. The jury may still be out on measuring the exact performance effects of CSS optimization. But what I take away from things like Speed Test and Hyatt’s comments is that for many of us, there is probably plenty of room for optimization. Assigning IDs and classes to every element is obviously ridiculous. But at the same time, we would do well to avoid being lazy and leaning too heavily on things like “multi-multi-level” contextual and descendant selectors (e.g., “ul li p span a”), and redundant, “overqualified” IDs and classes (e.g., “body#home”, “div.column”).
Willabee | 31-Oct-10 at 8:30 am | Permalink |
From information given here and elsewhere, the optimisation seems to point towards adding class names to quickly target subject nodes in order to gain a few ms of performance.
Considering:
* Increased (maybe drastically) page weight.
* The extra mark-up render time.
* Increased style-sheet weight.
* The CSS will cache for most users.
* The extra maintenance effort to update the mark-up with these class names.
* The extra maintenance effort to make modifications to the presentation layer.
For just a few ms!!!
We’ve got to be kidding.
It’s almost like going back to presentational tag soup.
CSS inheritance is the nearest thing to event delegation. This, along with overrides (more specific rules), should be the focus for performance across the presentation and behaviour layers IMO.
Steve Souders | 31-Oct-10 at 9:58 am | Permalink |
@willabee: please read the conclusion: “For most web sites, the possible performance gains from optimizing CSS selectors will be small, and are not worth the costs. There are some types of CSS rules and interactions with JavaScript that can make a page noticeably slower. This is where the focus should be.”
Yaron Shapira | 27-Dec-10 at 12:17 am | Permalink |
Great article and discussion. I like using long selector chains like #kwrap #kheader ul li a because it lets me see the html structure as I work on the CSS. I also sometimes do stuff like …span#notices for the same reason. I realize now that these are (possibly terribly) inefficient selectors but has anyone seen any tests on what impact these might actually have in real world situation? (say >500 dom nodes, >500 selectors and >1000 total rules)
Michael | 27-Jan-11 at 12:22 pm | Permalink |
According to the full comment from David Hyatt, speed is likely less of an issue than memory utilization.
Nowadays browsers can consume 150 MB per tab or window. While speed may be bought by using more memory to handle a page, the performance impact is still there.
As users multitask on their laptops/desktops, it must be considered that multiple apps are running. A web page causing an increase of 150 MB or more in memory usage could make the computer seem slow, and as a result the user starts closing tabs/windows and … your page.
andrej | 17-Apr-11 at 9:38 am | Permalink |
Very interesting post.
Is there any further inquiry or research being done into CSS performance?
I have been reading up on object-oriented CSS lately – and the most important take-away from OOCSS is that already-specified rules are performance freebies. Is there any correlation between your findings and OOCSS?
Hadi Farnoud | 22-Jun-12 at 10:14 am | Permalink |
wait, you’re saying IE8’s render engine is faster than anyone else’s?