Tuesday, February 4, 2014

Android web performance testing takes a big leap forward

We've been working to bring better support for measuring web performance on mobile for a while.  Michael Klepikov started by building a new cross-platform test agent for WebPagetest that runs on Node.js, can run WebDriver/Selenium scripts and can talk to Chrome's Dev Tools interface.  Todd Wright extended that support to mobile Chrome on Android and even Safari on iOS using a Dev Tools proxy that he created.  Browser support has been solid for a while and we could get great request data and full timelines, but video had always been the blocker for launching.  When Android 4.4 shipped with the ability to record 60 FPS video on-device with very low overhead, it solved the last issue that was holding us back.

WebPagetest now supports Chrome Stable and Beta on Android 4.4

For private instances the code is all on GitHub, and once it has had a couple of weeks of public use to shake out any issues I'll cut an official release.  If you want to try it out before then you'll need both the web and agent code to support the new video-capture capabilities (agent setup instructions are here).

Live on the public instance are a collection of devices in the Dulles location for testing:



There are:

  • 5 Moto G's
  • 2 Nexus 5's
  • 1 Nexus 7 in Portrait Mode
  • 1 Nexus 7 in Landscape Mode
To select the devices, just select the Dulles location from the location list and they will show up in the list of browsers.

All of the devices are also available through the API for automated testing, with the location IDs available here.
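Kicking off an automated test is just a GET to runtest.php. Here is a sketch of building that request; the location ID "Dulles_MotoG" and the API key are placeholders, so check the location list for the real IDs:

```javascript
// Build a WebPagetest API request for one of the Dulles mobile agents.
// "Dulles_MotoG" and "API_KEY" are assumed placeholders, not real values.
function buildTestUrl(pageUrl, location, apiKey) {
  const params = new URLSearchParams({
    url: pageUrl,
    location: location,
    k: apiKey,
    f: 'json'  // ask for a JSON response instead of the HTML results page
  });
  return 'http://www.webpagetest.org/runtest.php?' + params.toString();
}

const testUrl = buildTestUrl('http://example.com/', 'Dulles_MotoG', 'API_KEY');
console.log(testUrl);
```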

For now all of the devices are using a fixed 3G connection profile but hopefully soon they will have support for arbitrary connection profiles as well.

The video capture on the mobile devices is significantly better than what we have on desktop and I highly encourage you to try it out.  Most of the sites I have tried take a surprisingly long time to display anything (one second is a good, aggressive target to shoot for).  Since the mobile devices support much faster capture than desktop, the filmstrip view in WebPagetest has a new 60 FPS option for displaying every frame and seeing EXACTLY when something was displayed.


The increased resolution really helps when aligning the video with what is happening in the waterfall.

We also get full dev tools timeline views of what is going on which is particularly important on mobile given the slower processing (timelines are captured automatically when video is enabled or optionally in the "Chrome" tab of the advanced settings otherwise).


If you're really adventurous you can also submit WebDriver/Selenium scripts for testing (though it hasn't had a lot of exercise so there may be issues).

Most of the test features that you are used to on desktop don't work yet, but over the next few weeks we should be able to fill some of them in as well as add some more mobile-specific capabilities:
  • Packet Captures (tcpdump)
  • Arbitrary connection profiles
  • Testing with Chrome's Data Reduction Proxy enabled
  • Arbitrary Chrome command-line switches (will allow for DNS rewriting and cert ignoring)
  • Test sharding so individual tests can run in parallel across devices and complete faster
  • Storing of response bodies
  • JavaScript disabling
  • SPOF testing
  • Basic WPT scripting support (logData, navigate and exec commands initially)
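For reference, a basic WPT script of the kind that should eventually work on the devices looks like this (commands and parameters are tab-separated; the URLs are placeholders):

```
logData	0
navigate	http://example.com/login
logData	1
navigate	http://example.com/account
```

The logData 0/1 pair means only the second navigation gets recorded in the results.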
Take the devices for a spin and let us know if you see any issues.  If you don't see the devices online it's possible that the agent threw an exception that we didn't handle and I should be able to bring them back online pretty quickly (ping me if it looks like they've been offline for a while).

Friday, July 12, 2013

Measuring performance of the user experience

TL;DR: WebPagetest will now expose any User Timing marks that a page records so you can use the same custom events for your synthetic test measurement as well as your Real User Measurement (and you can use WebPagetest to validate your RUM measurement points).

Before kicking off an optimization effort it is important to have good measurements in place.  If you haven't already read Steve Souders' blog post on Moving beyond window.onload(), stop now, go read it and come back.

The page load time (start of navigation to the onload event) is the cornerstone metric for most web performance measurement, and it is a fundamentally broken measurement that can end up doing more harm than good by getting developers to focus on the wrong thing.  Take two static pages from WebPagetest as examples:

The first is the main test results page that you see after running a test.  Fundamentally it consists of the data table and several thumbnail images (waterfalls and screen shots).  There are a bunch of other things that make up the page but they aren't the critical parts for the user: ads, social buttons (Twitter and G+), the partner logos at the bottom of the page, etc.

Here is what it looks like when it loads:

The parts of the page that the user (and I) care about have completely finished loading in 500ms, but the reported page load time is 3 seconds.  If I were optimizing for page load time I would probably remove the ads, the social widgets, the partner logos and the analytics.  The reported onload time would improve but the actual user experience would not change at all, so it would be completely throw-away work (not to mention detrimental to the site itself).

The second is the domains breakdown page which uses the Google visualization libraries to draw pie charts of the bytes and requests by serving domain:

In this case the pie charts actually load after the onload event and measuring the page load time is really just measuring a blank white page.

If you were to compare the load times of both pages using the traditional metrics they would appear to perform about the same but the page with the pie charts has a significantly worse user experience.

This isn't really new information; the work I have been doing on the Speed Index has largely been about providing a neutral way to measure the actual experience consistently across sites.  However, if you own the site you are measuring, you can do a LOT better since you know which parts of the page matter most to the user.

Instrumenting your pages

There are a bunch of Real User Measurement libraries and services available (Google Analytics, SOASTA mPulse, Torbit Insight, Boomerang, Episodes) and most monitoring services also have real-user beacons as part of their offerings.  Out of the box they will usually record the onload time, but they usually also have options for custom measurements.  Unfortunately they all have their own APIs right now, but there is a W3C standard that the performance group nailed down last year: User Timing.  It is a very simple API that lets you record point-in-time marks or events and provides a way to query and clear the list.  Hopefully everyone will move to the User Timing interfaces as a standard way of marking "interesting" events, but it's easy enough to build a bridge that takes the User Timing events and reports them to whatever you are using for your Real User Measurement (RUM).

As part of working on this for WebPagetest itself I threw together a shim that takes the user timing events and reports them as custom events to Google Analytics and SOASTA's mPulse or Boomerang.  If you throw it at the end of your page or load it asynchronously, it will report aggregated user timing events automatically.  The "aggregated" part is key because when you are instrumenting a page you can identify when individual elements load, but what you really care about is when they have ALL loaded (or all of a particular class of events have happened).  The snippet will report the time of the last event that fired, and it will also take any period-separated names (group.event) and report the last time for each group.  In the case of WebPagetest's results page I have "aft.Header Finished", "aft.First Waterfall" and "aft.Screen Shot" ("aft" being short for "above-the-fold").  The library will record an aggregate "aft" time that is the point when everything that I consider critical above-the-fold has loaded.
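The aggregation rule is simple enough to sketch. The function name and sample times below are made up for illustration; the grouping behavior is the one described above (overall last time plus last time per period-separated group):

```javascript
// Given a list of {name, time} user timing marks, report the overall last
// time plus the last time for each period-separated group (e.g. "aft.*").
function aggregateMarks(marks) {
  const result = { userTime: 0 };
  for (const mark of marks) {
    result.userTime = Math.max(result.userTime, mark.time);
    const dot = mark.name.indexOf('.');
    if (dot > 0) {
      const group = mark.name.substring(0, dot);
      result[group] = Math.max(result[group] || 0, mark.time);
    }
  }
  return result;
}

const aggregated = aggregateMarks([
  { name: 'aft.Header Finished', time: 412 },
  { name: 'aft.Screen Shot', time: 486 },
  { name: 'aft.First Waterfall', time: 520 },
  { name: 'ads loaded', time: 2950 }  // not part of any group
]);
console.log(aggregated);  // { userTime: 2950, aft: 520 }
```

Note how the "aft" time ignores the much later ads event, which is exactly the point: the aggregate tracks only what you marked as critical.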

The results paint a VERY different view of performance than you get from just looking at the onload time and match the filmstrip much better.  Here is what the performance of all visitors from the US to the test results page looks like in mPulse.

Page Load (onload):

aft (above-the-fold):

That's a pretty radical difference, particularly in the long-tail.  A 13 second 98th percentile is something that I might have freaked out about but 4 seconds is quite a bit more reasonable and actually better represents the user experience.

One of the cool things about the user timing spec is that the interface is REALLY easy to polyfill so you can use it across all browsers.  I threw together a quick polyfill (feel free to improve on it - it's really basic) as well as a wrapper that makes it easier to do the actual instrumentation.  

Instrumenting your page with the helper is basically just a matter of throwing calls to markUserTime() at points of interest on the page.  You can do it with inline script for text blocks:
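The original snippet didn't survive in this copy; a plausible reconstruction, assuming the markUserTime() helper from the wrapper mentioned above:

```
<div id="header">...the content the user came for...</div>
<script>markUserTime('aft.Header Finished');</script>
```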





or more interestingly, as onload handlers for images to record when they loaded:
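Again reconstructed (the image name is a placeholder):

```
<img src="waterfall.png" onload="markUserTime('aft.First Waterfall')">
```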


If you can get away with just using image onload handlers, that is the safest bet: inline scripts can have unintended blocking behavior where the browser has to wait for previous CSS files to load and process before executing.  It's probably not an issue for an inline script block well into the body of a page, but it is something to be aware of.

Bringing some RUM to synthetic testing

Now that you have gone and instrumented your page so that you have good, actionable metrics from your users, it would be great if you could get the same data from your synthetic testing.  The latest WebPagetest release will extract the user timing marks from pages being tested and expose them as additional metrics:

At the top level, there is a new "User Time" metric that reports the latest of all of the user timing marks on the page (this example is from the breakdown pie-chart page above, where the pie chart shows up just after 3 seconds, after the load event).  All of the individual marks are also exposed, and they are drawn on the waterfall as vertical purple lines.  If you hover over the marker at the top of a line you can see details about the mark.

The times are also exposed in the XML and JSON interfaces so you can extract them as part of automated testing (the XML version has the event names normalized):
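Pulling the marks out of a JSON result is straightforward. A sketch; the nesting below (data.median.firstView with "userTime."-prefixed keys) reflects the JSON interface, but treat the exact field names as assumptions and check a real response:

```javascript
// Extract user timing marks from a WebPagetest JSON result object.
function extractUserTimes(result) {
  const firstView = result.data.median.firstView;
  const times = {};
  for (const key of Object.keys(firstView)) {
    if (key.indexOf('userTime.') === 0) {
      times[key.substring('userTime.'.length)] = firstView[key];
    }
  }
  return times;
}

// A trimmed-down stand-in for a real jsonResult.php response:
const sample = {
  data: { median: { firstView: {
    loadTime: 3042,
    'userTime.aft': 520,
    'userTime.pie-chart': 3214
  } } }
};
console.log(extractUserTimes(sample));  // { aft: 520, 'pie-chart': 3214 }
```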

This works as both a great way to expose custom metrics for your synthetic testing as well as for debugging your RUM measurements to make sure your instrumentation is working as expected (comparing the marks with the filmstrip for example).

Tuesday, June 4, 2013

Progressive JPEGs FTW!

TL;DR: Progressive JPEGs are one of the easiest improvements you can make to the user experience and the penetration is a shockingly-low 7%.  WebPagetest now warns you for any JPEGs that are not progressive and provides some tools to get a lot more visibility into the image bytes you are serving.

I was a bit surprised when Ann Robson measured the penetration of progressive JPEGs at 7% in her 2012 Performance Calendar article.  Instead of a 1,000 image sample, I crawled all 7 million JPEG images that were served by the top 300k websites in the May 1st HTTP Archive crawl and came out with....wait for it.... still only 7% (I have a lot of other cool stats from that image crawl to share but that will be in a later post).

Is The User Experience Measurably Better?


Before setting out and recommending that everyone serve progressive JPEGs I wanted to get some hard numbers on how much of an impact it would have on the user experience.  I put together a pretty simple transparent proxy that could serve arbitrary pages, caching resources locally and transcoding images for various different optimizations.  Depending on the request headers it would:

  • Serve the unmodified original image (but from cache so the results can be compared).
  • Serve a baseline-optimized version of the original image (jpegtran -optimize -copy none).
  • Serve a progressive optimized version (jpegtran -progressive -optimize -copy none).
  • Serve a truncated version of the progressive image where only the first 1/2 of the scan lines are returned (more on this later).
I then ran a suite of the Alexa top 2,000 e-commerce pages through WebPagetest comparing all of the different modes on a 5Mbps Cable and 1.5Mbps DSL connection.  I first did a warm-up pass to populate the proxy caches and then each permutation was run 5 times to reduce variability.

The full test results are available as Google docs spreadsheets for the DSL and Cable tests.  I encourage you to look through the raw results and if you click on the different tabs you can get links for filmstrip comparisons for all of the URLs tested (like this one).

Since we are serving the same bytes, just changing HOW they are delivered, the full time to load the page won't change (assuming an optimized baseline image as a comparison point).  Looking at the Speed Index, we saw median improvements of 7% on Cable and 15% on DSL.  That's a pretty huge jump for a fairly simple serving optimization (and since the exact same pixels get served there should be no question about quality changes or anything else).

Here is what it actually looks like:



Some people may be concerned about the extremely fuzzy first pass in the progressive case.  This test was done using the default jpegtran scans.  I have a TODO to experiment with different configurations to deliver more bits in the first scan and skip the extremely fuzzy passes.  By the time you get to 1/2 of the passes, most images are almost indistinguishable from the final image, so there is a lot of room for improving the experience.

What this means in WebPagetest


Starting today, WebPagetest will be checking every JPEG that is loaded to see if it is progressive and it will be exposing an overall grade for progressive JPEGs:

The grade weights the images by their size so larger images will have more of an influence.  Clicking on the grade will bring you to a list of the images that were not served progressively as well as their sizes.

Another somewhat hidden feature that will now give you a lot more information about the images is the "View All Images" link right below the waterfall:


It has been beefed up and now displays optimization information for all of the JPEGs, including how much smaller each would be when optimized and compressed at quality level 85, whether it is progressive, and the number of scans if it is:

The "Analyze JPEG" link takes you to a view where it shows you optimized versions of the image as well as dumps all of the meta-data in the image so you can see what else is included.

What's next?


With more advanced scheduling capabilities coming in HTTP 2.0 (and already here with SPDY), sites can be even smarter about delivering the image bits and re-prioritize progressive images after enough data has been sent to render a "good" image and deliver the rest of the image after other images on the page have had a chance to display as well.  That's a pretty advanced optimization but it will only be possible if the images are progressive to start with (and the 7% number does not look good).

Most image optimization pipelines right now are not generating progressive JPEGs (and aren't stripping out the meta-data because of copyright concerns) so there is still quite a bit we can do there (and that's an area I'll be focusing on).

Progressive JPEGs can be built with almost arbitrary control over the separate scans.  The first scan in the default libjpeg/jpegtran setting is extremely blocky and I think we can find a much better balance.

At the end of the day, I'd love to see CDNs automatically apply lossless image optimizations and progressive encoding for their customers while maintaining copyright information.  A lot of optimization services already do this and more but since the resulting images are identical to what came from the origin site I'm hoping we can do better and make it more automatic (with an opt-out for the few cases where someone NEEDS to serve the exact bits).

Tuesday, May 28, 2013

What makes for a good talk at a tech conference?

I have the pleasure of helping select the talks for a couple of the Velocity conferences this year.  After looking at several hundred proposals, it is clear that submitters have widely varying opinions on what makes for a good talk, and there are a lot of cases where the topic may be good but the focus is wrong.  I'm certainly not an expert on the topic, but I think that if you keep just one point in mind when submitting a talk for a tech conference (any tech conference), your odds of getting a talk accepted will go up exponentially:

It is all about the attendees! Period!

When you're submitting a talk, try to frame it in such a way that each attendee will get enough value out of it to justify the expense of attending the conference (conference costs, travel, opportunity cost, etc).  If all of the talks meet that bar then you end up with a really awesome conference.

If you are talking about a technique or toolchain, make sure that attendees will be able to go back to their daily lives and implement what you talked about. More often than not that means the tools need to be readily available (bonus points for open source) and you need to provide enough information that what you did can be replicated. These kinds of talks are also a lot better if they are presented by the team that implemented the "thing" and not by the vendor providing the toolchain. For most tech conferences, the attendees are hands-on so hearing from the actual dev/ops teams that did the work is optimal.

Make sure you understand the target audience as well and make the talks generally applicable. For something like Velocity where the attendees are largely web dev/ops with a focus on scaling and performance, make sure your talk is broadly applicable to them. A talk on implementing low-level networking stacks will not work as well as a talk about how networking stack decisions and tuning impact higher-level applications for example.

What doesn't work?

  • Product pitches (there are usually sponsored tracks and exhibit halls for that kind of thing)
  • PR. This is not about getting you exposure, it is about educating the attendees.

Friday, December 28, 2012

Motivation and Incentive

My favorite web performance article of 2012 was this one from Kyle Rush about the work done on the Obama campaign's site during the 2012 election cycle.  They did some cool things, but it wasn't the technical achievements that got my attention, it was the effort they put into it.  They re-architected the platform to serve as efficiently as possible, directly from the edge, and ran 240 A/B tests to evolve the site from its initial look and feel to the final result at the end of the campaign (with a huge impact on donations as a result of both efforts).

Contrast that with the Romney tech team, which appears to have contracted out a lot of the development and spent quite a bit more to do it (I wish there were an easy way to compare the impact on funds raised, but donation patterns across the parties are normally very different).

What I like most is that it demonstrates very clearly how critical it is to have people's motivations aligned with the "business" goals; those are the situations where you usually see the innovative work and the larger efforts.  I see this time and time again in the tech industry, and I'm sure it applies elsewhere, but it is absolutely critical to be aware of in tech.

Fundamentally that is what DevOps is all about and why the classical waterfall development model is broken:

  • Business identifies a "product need"
  • Product team specs-out a product to fill that need
  • Dev team builds what was specified by the product team (usually as exactly to the requirements as possible, including fussing about pixel-perfect matching the mock-up designs)
  • Dev team throws the resulting product over the wall to QA to test and verify against the requirements
  • QA team throws the final product over the wall to the Ops team to run
    • Usually forever and long after the dev and product teams have moved on
    • Usually doing all sorts of crazy things to keep the system running (automatic restarts, etc)
By bringing the various teams together and giving them skin in the game, they are incented to produce a product that is easy to implement, scales and runs reliably (putting developers on pager duty is easily the fastest way to get server code and architectures fixed).

As you look across your deployment, small site or large, what are the motivating factors for each of the teams responsible for a given component?

The Hosting

If you are not running your own servers then there is a good chance that the company running them isn't incentivized to optimize for your needs.

In the case of shared hosting, the provider makes money by running as many customers on as little hardware as possible.  Their goal is to find the point at which people start quitting because things perform so badly, and to stay as close to it as possible without going over.  When I see back-end performance issues with sites, they are almost always on shared hosting, and at times it can be absolutely abysmal.

With VPS or dedicated hosting they usually get more money as you need more compute resources.  Their incentive is to spend as little time as possible supporting you and certainly not to spend time tuning the server to make it as fast as possible.

If you are running on someone else's infrastructure (which includes the various cloud services so it is increasingly likely that you are), I HIGHLY recommend that you have the in-house skills necessary to tune and manage the servers and serving platforms.  You need remote hands-and-eyes to deal with things like hardware failures, but outsourcing the management will hardly ever be a good idea.  Having someone on your team who is incented to get as much out of the platform as possible will save you a ton of money in the long term and result in a much better system.

Site Development

You should have the skills and teams in-house to build your sites.  Period.  If you contract the work out then the company you work with is usually working to do as little work as possible to deliver exactly what you asked for in the requirements document.  Yes, they will probably work with you a bit to make sure it makes sense but they are not motivated by how successful the resulting product will be for your business - once they get paid they are on to the next contract.

I see it all too often.  Someone will be looking at the performance of their site and there are huge issues, even with some of the basics, but they can't fix them.  They contracted the site out, and what they were delivered "looks" like what they asked for and functions perfectly well, but architecturally it is a mess.

There are great tools available to help you tune your sites (front- and back-end) but you need the skills in-house to use them.  Just like the Obama campaign: they focused on continuously optimizing the site for the duration of the campaign because they were part of the team and were motivated by the ultimate business goals, not by a requirements document whose boxes needed checking.

Maybe I'm a bit biased since I'm a software guy who also likes to do the end-to-end architectures and system tuning but I absolutely believe that these are skills you need to have or develop as part of your actual team in order to be successful.  Contracting out for expertise also makes sense as long as they are educating your team as you go along and it's more about the education and getting you on the right track.

CDNs

Maybe it's my tinfoil hat getting a bit tight, but given that CDNs usually bill you for the number of bits they serve on your behalf, it doesn't feel like they are particularly motivated to make sure you serve only as many bits as you need to.  Gzipping content where appropriate is one of the biggest surprises.  It seems like a no-brainer, but most CDNs will just pass through whatever your server responds with and won't do the simple optimization of gzipping as much as possible (most of them have it as an available setting, but it is not enabled by default).

Certainly you don't want to be building your own CDN, but you should be paying very careful attention to the configuration of your CDN(s) to make sure the content they serve is optimized for your needs.

Motivations/Incentives

Finally, just because you have the resources in-house doesn't mean that their motivations are aligned with the business.  In the classic waterfall example, the dev teams are not normally motivated to make sure the systems they build are easy to operate (resilient, self-healing, etc).  In a really small company where the tech people are also founders then it is pretty much a given that their incentives are very well aligned but as your company gets larger it becomes a lot harder to maintain that alignment.  Product dogfooding, DevOps and Equity sharing are all common techniques to try to keep the alignment which is why you see all of those so often in the technical space.


OK, time to put away the soapbox - I'd love to hear how other people feel about this, particularly counter arguments where it does make sense to completely hand-off responsibility to a third-party.

Tuesday, November 20, 2012

Clearing IE's Caches - Not as simple as it appears

I've spent the last week or so getting the IE testing in WebPagetest up to snuff for IE 10.  I didn't want to launch the testing until everything was complete because there were some issues that impacted the overall timings and I didn't want people to start drawing conclusions about browser comparisons until the data was actually accurate.

The good news is that all of the kinks have been ironed out and I will be bringing up some Windows 8 + IE 10 VMs over the Thanksgiving holidays (I have some new hardware on the way because the current servers are running at capacity).

In the hopes that it helps other people doing browser testing I wanted to document the hoops that WebPagetest goes through to ensure that "First View" (uncached) tests are as accurate as possible.

Clearing The Caches

It's pretty obvious, but the first thing you need to do for first-view tests is clear the browser caches.  In the good old days this pretty much just meant the history, cookies and object caches, but browsers have evolved a lot over the years and they now store all sorts of other data and heuristic information that helps them load pages faster.  To properly test first-view page loads you need to nuke all of it.

For Chrome, Firefox and Safari it is actually pretty easy to clear out all of the data.  You can just delete the contents of the profile directory which is where each browser stores all of the per-user data and you essentially get a clean slate.  There are a few shared caches that you also want to make sure to clear out:

DNS Cache - WebPagetest clears this by calling DnsFlushResolverCache in dnsapi.dll and falling back to running "ipconfig /flushdns" from a shell.

Flash Storage - Delete the "\Macromedia\Flash Player\#SharedObjects" directory

Silverlight Storage - Delete the "\Microsoft\Silverlight" directory

That will be enough to get the non-IE browsers into a clean state but IE is a little more difficult since it is pretty tightly interwoven into the OS as we learned a few years back.

The first one to be aware of is the OS certificate store.  Up until a few months ago WebPagetest wasn't clearing it out, which caused HTTPS negotiations to be faster than they would be in a truly first-view scenario.  On Windows 7, all versions of IE will do CRL and/or OCSP validation of certificates used for SSL/TLS negotiation.  That validation can be EXTREMELY expensive (several round trips for each validation) and the results were being cached in the OS certificate store.  This made IE's HTTPS performance appear faster than it really was for true first-view situations.

To clear the OS certificate stores we run a pair of commands:

certutil.exe -urlcache * delete
certutil.exe -setreg chain\ChainCacheResyncFiletime @now

IE 10 introduced another cache where it keeps track of the different domains that a given page references so it can pre-resolve and pre-connect to them (Chrome has similar logic but it gets cleared when you nuke the profile directory).  No matter how you clear the browser caches (even through the UI), the heuristic information persists and the browser would pre-connect for resources on a first view.

When I was testing out the IE 10 implementation the very first run of a given URL would look as you would expect (ignore the really long DNS times - that's just an artifact of my dev VM):


But EVERY subsequent test for the same URL, even across manual cache clears, system reboots, etc would look like this:


That's all well and good (great actually) for web performance but a bit unfortunate if you are trying to test the uncached experience because DNS, socket connect (and I assume SSL/TLS negotiation) is basically free and removed from the equation.  It's also really unfortunate if you are comparing browsers and you're not clearing it out because it will be providing an advantage to IE (unless you are also maintaining the heuristic caches in the other browsers).

Clearing out this cache is what has been delaying the IE 10 deployment on WebPagetest and I'm happy to say that I finally have it under control.  The data is being stored in a couple of files under "\Microsoft\Windows\WebCache".  It would be great if we could just delete the files but they are kept persistently locked by some shared COM service that IE leverages.

My current solution to this is to terminate the processes that host the COM service (dllhost.exe and taskhostex.exe) and then delete the files.  If you are doing it manually then you also need to suspend the parent process or stop the COM+ service before terminating the processes because they will re-spawn almost immediately.  If anyone has a better way to do it I'd love feedback (the files are mapped into memory so NtDeleteFile doesn't work either).

Browser Initialization

Once you have everything in a pristine state with completely cleared profiles and caches you still have a bit more work to do because you want to test the browser's "first view" performance, not "first run" performance.  Each of the browsers will do some initialization work to set up their caches for the first time and you want to make sure that doesn't impact your page performance testing.  

Some of the initialization happens on first access, not browser start-up, so you can't just launch the browser and assume that everything is finished.  WebPagetest used to start with about:blank and then navigate to the page being tested, but we found that some browsers would block, paying a penalty for initializing their caches when they parsed the first HTML that came in.  I believe Sam Saffron was the first to point out the issue, when Chrome was not fetching sub-resources as early as it should have been (on a page where the head was being flushed out early).  In the case of the IE connection heuristics, it would also pay a particularly expensive penalty at the start of the page load when it realized that I had trashed the cache.

In order to warm up the various browser engines and make sure that everything is initialized before a page gets tested WebPagetest navigates to a custom blank HTML page at startup.  In the WebPagetest case that page is served from a local server on the test machine but it is also up on webpagetest.org: http://www.webpagetest.org/blank.html if you want to see what it does.  It's a pretty empty html page that has a style and a script block just to make sure everything is warmed up.
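For reference, a warm-up page along those lines needs very little; this is a sketch of the idea, not the exact contents of blank.html:

```
<!DOCTYPE html>
<html>
<head>
<style type="text/css">body {margin: 0;}</style>
<script type="text/javascript">var initialized = true;</script>
</head>
<body></body>
</html>
```

The style and script blocks are there so the CSS and JavaScript engines get exercised before the real test page loads.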

Wrap-up

Hopefully this information will be helpful to others who are doing browser performance testing.  

You should also be careful taking browser-browser comparisons as gospel.  As you can see, there are a lot of things you need to do to get to an apples-to-apples comparison and even then it isn't necessarily what users experience.  Browsers are adding more heuristics, pre-connecting and even pre-rendering of pages into the mix and most of the work in getting to a clean "first view" defeats a lot of those techniques.

Wednesday, August 22, 2012

FCC Broadband Progress Report

The FCC released their eighth broadband progress report yesterday.
The most interesting part for me starts on page 45, where they talk about actual adoption (in the US): the speeds people actually subscribe to, not what is available or offered.  The buckets aren't very granular and the underlying data is from June 2011, but they give you a good idea of what the spread looks like:
64.0% - At Least 768 kbps/200 kbps
40.4% - At Least 3 Mbps/768 kbps
27.6% - At Least 6 Mbps/1.5 Mbps

Effectively that means that 36% of the households where broadband is available do not subscribe to fixed-line broadband.  If we use the 64% that subscribe to at least some form of fixed-line broadband offering we get:
37% - Less than 3 Mbps/768 kbps
63% - At Least 3 Mbps/768 kbps
43% - At Least 6 Mbps/1.5 Mbps
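The renormalization above is simple enough to sanity-check: the FCC buckets are percentages of all households, so dividing by the 64% that subscribe at all gives the share among actual broadband subscribers.

```javascript
const subscribers = 64.0;  // % of households at 768 kbps/200 kbps or better

const atLeast3Mbps = 100 * 40.4 / subscribers;  // ≈ 63%
const atLeast6Mbps = 100 * 27.6 / subscribers;  // ≈ 43%
const lessThan3Mbps = 100 - atLeast3Mbps;       // ≈ 37%

console.log(Math.round(atLeast3Mbps),
            Math.round(atLeast6Mbps),
            Math.round(lessThan3Mbps));  // 63 43 37
```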

With WebPagetest's default 1.5 Mbps/768 kbps DSL profile falling in the 37% of the population, it is probably sitting somewhere around the 75th percentile.  Time to increase it to something closer to the median (say, switch to the 5/1 Mbps Cable profile)?
I've generally been a fan of skewing lower because you will be making things faster for more of your users and you might be missing big problems if you don't test at the slower speeds but I'm open to being convinced otherwise.