Comments for High Performance Web Sites
http://www.stevesouders.com/blog
Essential knowledge for making your web pages faster.

Comment on Onload in Onload by Steve Souders (Fri, 26 Sep 2014 19:34:09 +0000)
http://www.stevesouders.com/blog/2014/09/12/onload-in-onload/#comment-61053

It's more overhead to have a timer looping like that.

Comment on Onload in Onload by Allen Lee (Fri, 26 Sep 2014 19:01:01 +0000)
http://www.stevesouders.com/blog/2014/09/12/onload-in-onload/#comment-61052

Great post.

So why not use document.readyState directly and skip onload and addEventListener entirely? I.e.:

var f = function() { /* ... */ };

function fAfterLoad() {
  if (document.readyState == "complete") f();
  else window.setTimeout(fAfterLoad, 250);
}

fAfterLoad();

The use case is a third-party snippet for analytics that should be transparent to the user, e.g. Google Analytics. This has the advantage of being compatible with all browsers without special hacks for IE8, and of not overwriting window.onload, which could be used by the page.

Comment on Onload in Onload by Steve Souders (Wed, 17 Sep 2014 15:45:09 +0000)
http://www.stevesouders.com/blog/2014/09/12/onload-in-onload/#comment-61051

Philip: I tested Boomerang while writing this post and was pleased to see it used readyState to avoid this problem. You're always one step ahead.

Comment on Onload in Onload by Philip Tellis (@bluesmoon) (Wed, 17 Sep 2014 13:48:48 +0000)
http://www.stevesouders.com/blog/2014/09/12/onload-in-onload/#comment-61050

Hi Steve,

The addEventListener spec states that if an event handler is added via code that runs as part of that event, then the new handler will only be called on subsequent invocations of the event. Since onload can only fire once, this means that onload handlers added in onload will never be called (assuming browsers follow the spec for this).
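For anyone who wants to see this for themselves, here is a tiny sketch of the behaviour described above (assuming the browser follows the spec on this point):

// A 'load' handler registered from inside a 'load' handler should never run,
// because window 'load' only fires once per document (assuming spec behaviour).
window.addEventListener("load", function() {
  console.log("outer load handler ran");
  window.addEventListener("load", function() {
    console.log("this never runs");
  });
});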

I noticed this when we started loading boomerang using the iframe loader technique, since there was now a possibility that it could load after onload (and well, some of our customers load boomerang in the onload event). As a result, boomerang has been using document.readyState for a while now to check if onload has already fired when it loads up.
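A minimal sketch of that readyState check follows; the function names here are made up for illustration and are not Boomerang's actual API:

// Run init now if onload has already fired, otherwise wait for the load event.
// runAfterOnload and init are hypothetical names for illustration.
function runAfterOnload(init) {
  if (document.readyState === "complete") {
    init();                                       // onload has already fired
  } else if (window.addEventListener) {
    window.addEventListener("load", init, false); // modern browsers
  } else {
    window.attachEvent("onload", init);           // IE8 and earlier
  }
}

runAfterOnload(function() {
  // start measuring here
});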

Comment on Resource Timing practical tips by Brian Nørremark (Sat, 30 Aug 2014 17:10:10 +0000)
http://www.stevesouders.com/blog/2014/08/21/resource-timing-practical-tips/#comment-61048

Steve: Any beaconing packages you can recommend? I don't think Boomerang splits up into multiple beacons.

An idea for more robust data could be to capture aggregated stats, like the ones HTTP Archive tracks. Response sizes are not possible, but you could easily aggregate: # of JS/CSS/GIF/PNG/JPG/HTML/Fonts/Flash/Other requests, # of HTTPS requests, # of iframes, and probably a few more.
The more I think about it, the more I like the idea. The correlations make it all worth it, even though SPDY may change the picture quite a lot.
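Something like this rough sketch could build those aggregates from Resource Timing; the type buckets and the bucketOf helper are just illustrative:

// Count resource requests by type using Resource Timing.
// bucketOf is a hypothetical helper; adjust the buckets as needed.
function bucketOf(url) {
  var m = /\.(js|css|gif|png|jpe?g|html?|woff2?|ttf|eot|swf)(\?|$)/i.exec(url);
  return m ? m[1].toLowerCase() : "other";
}

var counts = { https: 0, iframes: window.frames.length };
var entries = window.performance.getEntriesByType("resource");
for (var i = 0; i < entries.length; i++) {
  var bucket = bucketOf(entries[i].name);
  counts[bucket] = (counts[bucket] || 0) + 1;
  if (entries[i].name.indexOf("https://") === 0) counts.https++;
}
// counts is now small enough to serialize into a single beacon.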

Comment on Resource Timing practical tips by Steve Souders (Fri, 29 Aug 2014 16:50:21 +0000)
http://www.stevesouders.com/blog/2014/08/21/resource-timing-practical-tips/#comment-61047

Brian: Some beaconing packages track the querystring length and break it across multiple beacons when necessary. Doing a POST is also possible. I agree – sending back Resource Timing data for every resource in the page is likely to exceed URL length. FYI – Eric Lawrence just blogged about this in URL Length Limits.
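A minimal sketch of the splitting approach; the beacon URL, parameter names, and sendBeacons function are invented for illustration:

// Split a long payload across multiple GET beacons so each URL stays short.
// sendBeacons, "/beacon", and the part/of/data parameters are all hypothetical.
function sendBeacons(baseUrl, payload) {
  var MAX_CHUNK = 1000;  // raw characters per beacon; keeps each URL well under ~2K
  var total = Math.ceil(payload.length / MAX_CHUNK);
  for (var i = 0; i < total; i++) {
    var chunk = payload.substr(i * MAX_CHUNK, MAX_CHUNK);
    var img = new Image();
    img.src = baseUrl + "?part=" + (i + 1) + "&of=" + total +
              "&data=" + encodeURIComponent(chunk);
  }
}

// Example usage with a hypothetical serialized payload:
// sendBeacons("/beacon", JSON.stringify(window.performance.getEntriesByType("resource")));

The server would reassemble the parts in order; a POST (or navigator.sendBeacon, where supported) avoids the splitting entirely.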

Comment on HTTP Archive – new stuff! by Steve Souders (Fri, 29 Aug 2014 16:42:35 +0000)
http://www.stevesouders.com/blog/2014/06/08/http-archive-new-stuff/#comment-61046

Saurabh: All the code exists to do what you want, but I admit that it has minimal documentation. Let me answer your questions, and perhaps that will generate a first draft of documentation for setting up a private instance of HTTP Archive.

The code for doing the crawl is in the bulktest subdirectory. I use crontab to schedule when the crawl happens. Look at crontab.txt. I schedule crawls for the 1st and 15th of each month. I execute batch_log.php every 30 minutes to move the crawl along.
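As a rough illustration only (the paths and times below are placeholders; crontab.txt in the repo is the authoritative example), the schedule looks roughly like this:

# Hypothetical crontab sketch: start a crawl on the 1st and 15th of each month,
# and run batch_log.php every 30 minutes to move the crawl along.
0 12 1,15 * *   php /path/to/httparchive/bulktest/batch_start.php
*/30 * * * *    php /path/to/httparchive/bulktest/batch_log.php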

Wrt the set of URLs, that's controlled by the command-line options to batch_start.php. I specify no options, which means it uses the Alexa Top 1M list of URLs. Instead, you could pass in a file as the first parameter:
batch_start.php urls.txt
where urls.txt contains one URL per line.

The key place for customization is settings.inc, where you configure MySQL and the URL of your private instance of WebPagetest. Further WebPagetest config options are specified in bootstrap.inc. You might need to create bulktest/wptkey.inc.php – that's not in GitHub as it's optional depending on your WebPagetest configuration.

Comment on HTTP Archive – new stuff! by Saurabh (Thu, 28 Aug 2014 19:30:48 +0000)
http://www.stevesouders.com/blog/2014/06/08/http-archive-new-stuff/#comment-61045

Hey, thanks for this amazing tool. I want to run my own copy of the HTTP Archive crawler with my own MySQL instance, and I was able to set it up locally. I need to crawl my own set of URLs – is there a way to configure the URLs that it crawls and customize when crawling happens?
I am new to PHP, so I tried but could not find such code in the repository. Thanks in advance.

Comment on Resource Timing practical tips by Brian Nørremark (Thu, 28 Aug 2014 17:02:15 +0000)
http://www.stevesouders.com/blog/2014/08/21/resource-timing-practical-tips/#comment-61044

Great that Boomerang got a Resource Timing plugin. Boomerang does HTTP GET only, so how do we get around the 2K character limit in browsers? POSTing it feels wrong.

I haven't cross-referenced support for Resource Timing with max URL length; it could be that this is simply not a problem?

Comment on Resource Timing practical tips by Andy Davies (Tue, 26 Aug 2014 11:14:06 +0000)
http://www.stevesouders.com/blog/2014/08/21/resource-timing-practical-tips/#comment-61041

As far as I know, the other types that can be passed to getEntriesByType are 'mark', 'measure', and 'navigation'.

‘mark’ and ‘measure’ are from User Timing (http://www.w3.org/TR/user-timing/) and ‘navigation’ is from the Navigation Timing 2 spec (https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming2/Overview.html) – though I don’t know of any browsers that implement the draft spec yet.

If you want to see 'mark' and 'measure' in action, you can fire up Dev Tools on one of WebPagetest's waterfall pages and type window.performance.getEntriesByType('mark'); in the console.

window.performance.measure("elapsed", "aft.Site Header", "aft.Waterfall"); can be used to generate a measure, and then window.performance.getEntriesByType('measure'); retrieves the list of measures.

I agree with the recommendation to use performance.getEntriesByType("resource") to get the Resource Timing entries, as it makes the code more resilient in the future. The other option would be to check the array elements to ensure they have an entryType of 'resource', but that needs to be done everywhere!
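For reference, a quick sketch of that defensive check; filterResources is just an illustrative name:

// Keep only Resource Timing entries, however the list was obtained.
// filterResources is a hypothetical helper name.
function filterResources(entries) {
  var resources = [];
  for (var i = 0; i < entries.length; i++) {
    if (entries[i].entryType === "resource") resources.push(entries[i]);
  }
  return resources;
}

// Preferred: ask for resources directly.
var resourceEntries = window.performance.getEntriesByType("resource");
// Fallback check when starting from the full timeline.
var checkedEntries = filterResources(window.performance.getEntries());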
