
Demonstrating web performance at Chrome Dev Summit 2016

Stickman Ventures at Chrome Dev Summit

Web performance can seem like an elusive topic to developers and managers alike. How do we gauge performance, and how do we act on that data? Is it the same everywhere? How can we take advantage of it today?

At Chrome Dev Summit this year (and in years past, for that matter), this was a topic of constant mention: new tools, new techniques, and updated paradigms to support not just developers, but to make the web better for users on poor connections and mid-tier devices.

To help support and demonstrate what performance looks like, we built a small game called DragRace, which pits a site entered into the kiosk against a randomly chosen competitor site. Originally built in August to show businesses and organizations how performance impacts their users, we reached out to the folks working at Google to see what they thought.

They said bring it out to Chrome Dev Summit and run that demo. Sounded good to us. So, we cranked it up to 11.

Building a DragRace worthy of Chrome Dev Summit

Our original demo, seen in the photo below, focused on how businesses might want to take a look at what they’re doing on the web. It ran on some of our older 46 inch kiosks and compared sites against a baseline that we found relevant for about 85% of businesses on the web today. The demo resonated with people.

For Chrome Dev Summit, however, we decided the demo simply needed more pop: something on the cutting edge that would entice developers, UI designers, and businesses alike to not only interact with the kiosk but also listen to us explain web performance. We decided on a few things:

  1. Integrate Lighthouse scoring and focus on paints and interactions so developers understand what’s being discussed in Chrome Dev Summit talks.
  2. Keep our PageSpeed Insights API integration, since it was readily accessible and still offers business-case value that developers might need to show management.
  3. Optimize for speed: animations, server response, everything.

Might seem simple on the surface. “I can do that right now, Justin!” Alas, everything looks simple at a distance.

A banner day of web perf testing at Chrome Dev Summit

Go big or go home, hardware edition

Building custom kiosks is not an unknown for us, but every project has different requirements. In this case, we wanted something that would really shine, so we went bigger than usual:

  1. (2) 55 inch 4K screens.
  2. (2) ASUS CHROMEBOX-M004U.
  3. (2) Custom WASD keyboards.
  4. (2) Custom 3D printed meshes matching the Chrome Dev Summit design spec.
  5. (2) Chief PFMUB 15 deg stands.
  6. (2) Custom fabricated keyboard stands.

55 inch screens for a kiosk are nearly overkill given the viewing distance, but damn do they look lovely. If you’re thinking “Wow, I didn’t know a Chromebox could drive a 4K screen,” consider Chrome OS if you haven’t. We love them for this type of work.

Aside from farming out our custom keyboards to WASD (which people really loved), most of the rest was modified either at the office or at our shop.

Setting up at Chrome Dev Summit

Running Lighthouse in the cloud with Chrome headless_shell

For those who haven’t heard of Lighthouse, it’s a tool for auditing performance metrics for the latest generation of progressive web apps (or sites in general). Lighthouse is amazing, with great features and real potential for developers to better measure just how their web app performs.

Today, you can run Lighthouse on the command line or via a Chrome extension. In either case, we could have thrown together some smoke and mirrors to make this happen at the machine/kiosk level, but that’s not cool. Anyone can do that.

So, we decided to run Lighthouse on Google Cloud.

“Wait, how does one run Lighthouse without Chrome and a window?” you may be asking.

Nothing building Chromium from source can’t solve. In the Chromium source, there is a project underway called Headless Chromium, a library for running Chromium in a headless/server environment. This was just right for our case.

As we sometimes do, we fired up a build server, pulled the source, and built a Docker image we could use for this purpose. That image is available now for those who’d like to try this early version.

With the container deployed to Google Cloud, we simply needed to write a small API around Lighthouse that we could call as needed to test sites. While wrapping the CLI was an option, we instead simply used const lighthouse = require('lighthouse'); with a custom audit configuration to run a limited set of tests, like so:

// simplified for brevity
const lighthouse = require('lighthouse');

return lighthouse(testUrl, lighthouseOptions, auditConfig)
  .then(res => ourMetrics.prepareData(res));
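The prepareData step just plucks the scorecard numbers out of the Lighthouse result. A rough sketch, where the function name and the audit keys are assumptions on our part (based on the Lighthouse releases of that era) rather than our released source:

```javascript
// Hypothetical sketch of ourMetrics.prepareData: pull the handful of
// numbers the kiosk scorecard displays out of a Lighthouse result object.
// The audit key names below are assumed and may differ between releases.
function prepareData(result) {
  const audits = result.audits;
  return {
    url: result.url,
    firstMeaningfulPaint: audits['first-meaningful-paint'].rawValue,
    speedIndex: audits['speed-index-metric'].rawValue,
    timeToInteractive: audits['time-to-interactive'].rawValue
  };
}
```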

With the PageSpeed Insights API readily available through the Google API Console, we had the two APIs we needed. Now it was time to handle the frontend.
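The Insights side is just an HTTP GET against the runPagespeed endpoint. A minimal sketch of building such a request URL, where the helper name and parameter handling are ours for illustration (endpoint as it stood in late 2016):

```javascript
// Illustrative: build a PageSpeed Insights v2 request URL.
// The helper name and defaults are ours, not from our released source.
function insightsUrl(siteUrl, apiKey, strategy = 'desktop') {
  const base = 'https://www.googleapis.com/pagespeedonline/v2/runPagespeed';
  const params = new URLSearchParams({ url: siteUrl, key: apiKey, strategy });
  return `${base}?${params}`;
}
```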

Lighthouse running in Google Cloud

4K, PRPL, and Polymer oh my

Our art department did a fantastic job coming up with the overall look based on the color palette in the spec. However, that look came at a cost in terms of overall asset size; our design choice wasn’t flat and did not easily lend itself to the often-utilized SVG (we went with more of a 1960s racing-poster style).

Even though we knew we’d be running these on kiosks, we wanted to be smart about loading, in case we had to make updates on the fly or, as we hope, release the demo at a wider scale. Enter Polymer’s PRPL pattern.

I heart Polymer and web components powered by the platform. For those who have not heard of the PRPL pattern, PRPL is all about optimizing delivery for end user responsiveness:

  • Push critical resources for the initial route.
  • Render initial route.
  • Pre-cache remaining routes.
  • Lazy-load and create remaining routes on demand.

Why is this important in regards to our kiosks? 4K assets viewed at 3 feet need to be crisp, and can’t be as heavily compressed as we’d normally do for the web. We push them as fast as we can and make sure that our Service Worker handles them. Once it caches our rather large images, it won’t go back to the network again for them (a design choice on our part; if we need to push a revision, we increment our pre-cache version and the filename of the asset).
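That cache behavior boils down to a tiny policy in the Service Worker. The names below are illustrative (not from our released worker), but the idea is the same: a versioned pre-cache, and never touching the network for an asset that is already cached:

```javascript
// Illustrative sketch of a cache-first policy for the large 4K art assets.
// Bump PRECACHE_VERSION (and the asset filenames) together to push a revision.
const PRECACHE_VERSION = 'v1';
const PRECACHE_NAME = `dragrace-precache-${PRECACHE_VERSION}`;

// Decide whether a request should be answered from the pre-cache,
// without ever going back to the network.
function shouldServeFromCache(requestUrl, precachedPaths) {
  const path = new URL(requestUrl).pathname;
  return precachedPaths.includes(path);
}

// In the worker's fetch handler, it would look something like:
// self.addEventListener('fetch', event => {
//   if (shouldServeFromCache(event.request.url, PRECACHED_ASSETS)) {
//     event.respondWith(caches.open(PRECACHE_NAME)
//       .then(cache => cache.match(event.request)));
//   }
// });
```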

In terms of lazy loading, we know that our result web components, needed to display the scorecard after the race, don’t need to load initially. The same goes for our 30 or so pre-cached competitors; we don’t have to load that data until we first need it, after the first DragRace is run.
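Gating that data load is just a memoized fetch. A minimal sketch, with function names that are ours for illustration:

```javascript
// Illustrative: load the competitor list only once, the first time it is
// actually needed (i.e., after the first DragRace has run).
let competitorsPromise = null;

function loadCompetitors(fetchFn) {
  // Memoize so every later race reuses the single in-flight or completed fetch.
  if (!competitorsPromise) {
    competitorsPromise = fetchFn();
  }
  return competitorsPromise;
}
```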

The proof, of course, is in the numbers on a regular 3G first load: first meaningful paint at 721.5ms, last visual change at 1988ms, a speed index of 1106, and time to interactive at 1464.4ms. When the Service Worker kicks in on any additional load, it’s nothing but speed.

The end result: fun and learning

The overall goal was not only to show the bleeding edge of what you can do, but to open the conversation about what web performance means. To that end, we made a few decisions early on that protected the spirit of learning:

  1. We didn’t enable the leaderboard.
  2. We didn’t enable realtime challenge mode (kiosk v kiosk).
  3. We collected no identifying information, only anonymous telemetry.
  4. We didn’t enable the device mirror renderer (yeah…it’ll technically mirror challenger site to any device in realtime via Firebase).

We wanted developers to be engaged and informed, not afraid to test. To quote our own Paul Perrone, who worked on the demo and helped staff Chrome Dev Summit: “The numbers are a guidepost to help you get started with making your web app faster. Don’t get hung up on the score.”

With over 350 unique sites tested, and myself, Laura, and Paul very nearly hoarse from talking to folks for two straight days, we think this was the right, and successful, approach.

Did the technology hold up? You bet! Aside from a few sites of extraordinary size that tested our timeouts (we had one site run that was over 30 MB!) and a person who popped open DevTools and a crosh window to see if we were running the tests locally (nope, all in the cloud!), all went well.

A huge thank you

Everyone here at Stickman Ventures would love to thank the folks over at Google Chrome (Paul, Rob, et al), Google’s event team (Vanessa, Clare, Jamie), and the wonderful staff at the SFJAZZ Center for working with us on all the logistics required to pull this off. You all are amazing.

I also want to give a shout out to our staff who worked to put this project together and make the demo run smoothly. Walter, Laura, Paul, David, James…teamwork!

Laura and Paul working the demo booth at Chrome Dev Summit

Source code and things

We are working to release all the various pieces, the Polymer frontend and the API, as soon as we can. In the meantime, two of the most requested pieces, and how they work, are available now to get you started:

  1. Chrome headless_shell in docker: justinribeiro/chrome-headless
  2. Lighthouse core, Mocha, and Travis CI for testing for perf: justinribeiro/lighthouse-mocha-example

Stay tuned for more. Until next time, build things and go fast.
