Comments on: Cache is King
Essential knowledge for making your web pages faster.
Tue, 21 Oct 2014 11:41:32 +0000

By: Steve Souders Sat, 20 Oct 2012 23:37:59 +0000 @dguimbellot: I’ll be announcing a new experiment to measure average time-in-cache. Stay tuned.

@Lennie: I don’t know whether mobile devices store cached items compressed or uncompressed.

By: Lennie Sat, 20 Oct 2012 19:55:28 +0000 Steve,

Did you or anyone else test whether the small caches on mobile store the files compressed?

Or is it as bad as on the desktop, where Safari, for example, does not cache compressed responses?

I have a hard time finding numbers on that.

I did see numbers on your site and others on how small (and non-persistent) the caches on mobile browsers are and which browsers on the desktop store the data compressed.

But not the combination.

By: dguimbellot Fri, 19 Oct 2012 19:25:17 +0000 Cached items are also stored on the edge as well as in local proxies, so the risk of a one-day cache miss having to go all the way to the max-latency server is also lower.

Is there some way, with Alexa or a view across shared Google Analytics data, to analyze age vs. actual fetch time and calculate a best-case setting for the user community at large?

By: Steve Souders Thu, 18 Oct 2012 18:34:46 +0000 @Nick: It’s fine to use Expires instead of, or in addition to, max-age. Two advantages of max-age are that it specifies a number of seconds (rather than an absolute date) and thus is not subject to clock-skew issues between the client and server, and that it’s part of the Cache-Control header, which includes other values, so you can make do with one less response header.
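To make the comparison concrete, here is a minimal sketch of the two equivalent headers. The `caching_headers` helper and the one-year lifetime are illustrative assumptions, not anything from the post: `Expires` carries an absolute GMT date that a skewed client clock can misinterpret, while `Cache-Control: max-age` is just a relative count of seconds.

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

ONE_YEAR = 31536000  # seconds

def caching_headers(now=None):
    """Build the two equivalent caching headers for a far-future expiry.

    Expires is an absolute HTTP-date, so a client whose clock is skewed
    may treat the response as fresh for too long or too short a time.
    max-age is a relative number of seconds, immune to that skew, and it
    rides inside Cache-Control alongside other directives.
    """
    now = now or datetime.now(timezone.utc)
    return {
        "Expires": format_datetime(now + timedelta(seconds=ONE_YEAR), usegmt=True),
        "Cache-Control": "public, max-age=%d" % ONE_YEAR,
    }
```

Sending only the `Cache-Control` line is enough for any modern browser; the `Expires` line is a fallback for very old HTTP/1.0 clients.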

By: Nick Retallack Wed, 17 Oct 2012 23:01:10 +0000 In your metrics, you never mentioned the Expires header. Doesn’t it serve the same purpose as max-age?

By: Steve Souders Mon, 15 Oct 2012 23:05:15 +0000 @Laurens: That’s a good point. I don’t know what the impact would be. Next time I’ll use Chrome.

By: Laurens Mon, 15 Oct 2012 21:18:18 +0000 Nice work Steve. There is a flaw in the test, though. IE9 does not block the (on)load event if external JavaScripts are dynamically inserted into the DOM. Not even when an advertisement uses document.write, so yes, the impact of no JS seems lower than you would expect. I think the result would be different if you used Chrome.

In my opinion, measuring until onload within IE does not paint a realistic picture of real page performance. The stuff that happens after (on)load (but still belongs to the initial payload) is a blind spot. It would be great if Navigation Timing could go beyond (on)load.

By: Ali Sat, 13 Oct 2012 12:16:29 +0000 Great work. With regard to AJAX advice, I also personally prefer SPA-style applications over server-rendered resources.

However, there is a school of thought gaining momentum recently, especially in the REST community, that believes the SPA style leads to bad design and a slow user experience: the main page loads and then has to start loading all its resources. Twitter’s move to scrap its SPA has been taken as proof in the industry.

One of the main problems I have with server-rendered views is that they are composed resources and, as such, very bad for caching.

Watch this space for more controversy!

By: joe vallender Fri, 12 Oct 2012 15:14:41 +0000 Totally agree about having at least two aggregated files, Kyle.

I’ve found this to be a real necessity for those projects that seem to be perpetually ‘in development’ or that have frequent releases.

By: Kyle Simpson Fri, 12 Oct 2012 03:29:14 +0000 Excellent post and analysis, Steve.

I think this further reinforces one of the points I’ve made for years about the value of dynamic script loading where you should first concat all your JS into a single file (down from 10-20+ files on average), but THEN split that into 2-3 chunks, and load them dynamically.

I usually point out the value of parallel loading in that scenario, which has been proven many times over (despite the cargo cult “load a single file only” mentality that prevails). But there’s another, and as you point out, MORE important benefit:

** different caching headers **

If you are combining all your JS into a single concat file, you are undoubtedly combining some code which is extremely stable (like a jQuery release which will never ever change) and some (maybe a lot) which is quite volatile (like your UX code you tweak frequently). If you only serve one file, you have to pick one caching length for all that code.

If you split your JS into 2 chunks (the volatile chunk and the stable chunk), you can serve each with a different caching-length header. Then, even when your UX code updates frequently, repeat users on your site will not have to keep downloading all that stable code over and over again.
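The split described above can be sketched as a tiny build-time routing rule plus per-bundle cache lifetimes. The file names, the `bundle_for`/`cache_control` helpers, and the lifetimes below are illustrative assumptions, not anything from the comment:

```python
# Hypothetical split of sources into a stable and a volatile bundle.
STABLE_FILES = {"jquery-1.8.2.js", "underscore.js"}  # library code that rarely changes

def bundle_for(filename):
    """Route a source file to the long-cached or short-cached bundle."""
    return "stable" if filename in STABLE_FILES else "volatile"

# Far-future caching for the stable chunk; a short lifetime for UX code
# that gets tweaked frequently.
MAX_AGE = {"stable": 31536000, "volatile": 600}

def cache_control(bundle):
    """Cache-Control header to serve with the given bundle."""
    return "public, max-age=%d" % MAX_AGE[bundle]
```

Repeat visitors then re-download only the small volatile chunk when the UX code changes, while the stable chunk stays in cache.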

I have done these things in production on large sites and I can attest time and again it helps. It helps big time. And it’s nice to see your real numbers research backing that up.