Velocity 2010 – Facebook Performance Shenanigans

Pipelining, Progressive Enhancement, and More: Making Facebook Twice as Fast by Jason Sobel (Facebook), Changhao Jiang (Facebook)

It’s the last session of Velocity already! The companies are tearing down their booths, people are escaping to the airport. Today went really, really fast. The room is still mostly full though!

As we’ve heard before, they have loads of users.  They have a central performance team, but performance people are also distributed and embedded throughout the company.

The core site speed team started by working on PHP speed.  Then they read Steve Souders’ book and realized “Oh, crap…”

They are working on a “perflab” to measure the performance impact of every change and to detect regressions.

What are the three things they measure at Facebook?

  1. Server time
  2. Network time
  3. Client/render time

What are we optimizing for?  It shouldn’t be any of those three.  Optimize for people.  That doesn’t even mean end user response time – it means the impression of performance.

How fast is Facebook?  Well, determine what the core of the experience is.  What do people look at first, what defines the experience?  Lazy load the rest of that crap.

Metric: Time To Interact (TTI).  It’s a very custom metric.  When is the user getting value out of the site?  This is subjective and requires you to really know your users.

For this, the critical pieces have to be there and have to WORK – you can’t count something that’s visible but not yet functional.
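
Here’s roughly how you could instrument a TTI-style metric yourself.  This is my sketch, not Facebook’s instrumentation – the component names and beacon endpoint are made up.  The idea: each critical component reports in only once it’s rendered AND wired up, and TTI is when the last one checks in.

    // Hypothetical TTI instrumentation -- a sketch, not Facebook's code.
    // Each critical component calls markInteractive() only once it is
    // rendered AND its handlers are attached; visible-but-dead doesn't count.
    var critical = { composer: false, newsfeed: false, chat: false };  // made-up names
    var navStart = Date.now();  // ideally performance.timing.navigationStart

    function markInteractive(name) {
      critical[name] = true;
      for (var key in critical) {
        if (!critical[key]) { return; }  // still waiting on something
      }
      var tti = Date.now() - navStart;
      new Image().src = '/perf_beacon?tti=' + tti;  // hypothetical beacon endpoint
    }

    // A component reports in only after it can actually respond to the user:
    //   render(); attachHandlers(); markInteractive('newsfeed');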

Techniques used to speed things up:

  1. Early flush.  Flush the start of the document early, so the browser gets the list of crucial resources right away (sketched after this list).
  2. Components.  Pages used the same components with different names, the color blue was defined a thousand times.  Make a reusable set of visual components that can appear on any page and share the same CSS rules.  Besides enforcing visual standards, you can optimize them and then reuse them.  Theirs are a grid, an image block, some buttons, page headers…
  3. JavaScript!  We love it, but it is hard.  They wrote a lot before they knew what they were doing.  Now they have something called “primer”: a simple JS library that lives in the head, bootstraps the rest of the JavaScript, and handles the simple stuff devs were writing over and over – an event handler that can pop a dialog, fetch and insert content, or submit a form, and that can go fetch other JavaScript.  Tag something with rel="dialog" and it pops a dialog; "async" gets content.  In the feedback interface, Like, View, and Delete all use it.  And once the page is done you can go fetch the rest of the JavaScript instead of loading it on demand.  (Sketched after this list.)
  4. BigPipe is an attempt to rethink how we present pages.  The problem is that page generation, network transfer, and page rendering all happen serially.  They render personalized pages and have to query several back end services to build each one, so the page waits on the slowest back end query.  So pipeline it!  Decompose pages into “pagelets” and pipeline them through different execution stages in the server and the browser, giving different priorities to different pagelets.
    How does it work?  First you get a nearly empty doc with a script src for bigpipe.js in the head.  Then there are divs on the page with IDs – a template with the logical structure of the page.  Each pagelet is flushed separately in a script tag, JSON encoded.  BigPipe on the client downloads the CSS for the pagelet, displays it, downloads the JS, and executes the onLoad()s.  (Sketched after this list.)
    This gave them a 2x improvement in perceived latency (defined by TTI) across all browsers.
    What about search engines?  Some background first: for devs to use the pipe, they write pagelets against a pagelet abstraction with only three functions: initialize, prepare, and render.  To pipeline a page, you create a BigPipe instance, specify your page layout and placeholders, add pagelets to the pipe (source file and wrapper id), and call render.  There are pipeline, singleflush, parallel, and prepare models, controlled by one parameter to Bigpipe::GetInstance.  Singleflush is the answer for search engines and non-JS clients.  Prepare lets you batch multiple pages.  Parallel lets you use multiple threads for different pagelets (at the cost of server resources!).
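
A quick sketch of the early flush idea from item 1.  Facebook’s stack is PHP, so this Node-style handler is just to illustrate the technique (the paths and port are made up): send the head – with the URLs of the crucial CSS and JS – before doing the slow backend work.

    // "Early flush" sketch -- illustrative only, not Facebook's PHP.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      // Flush immediately: the browser starts fetching CSS/JS while the
      // server is still querying backends.
      res.write('<html><head>' +
                '<link rel="stylesheet" href="/css/core.css">' +  // hypothetical paths
                '<script src="/js/primer.js"></script>' +
                '</head><body>');
      buildPageBody(req, function (body) {  // the slow part happens after the flush
        res.end(body + '</body></html>');
      });
    }).listen(8080);

    function buildPageBody(req, done) {
      // stand-in for the real backend queries
      setTimeout(function () { done('<div id="content">…</div>'); }, 200);
    }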
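
And a sketch of the primer idea from item 3.  The rel="dialog" and "async" behaviors are from the talk; the data-target attribute, the URLs, and the showPopup() helper are my inventions to make the sketch hang together.  The point is one tiny delegated handler shipped in the head, so feature JS can load lazily:

    // Primer-style bootstrap sketch -- mechanism guessed, names invented.
    document.addEventListener('click', function (e) {
      var el = e.target.closest('a[rel="dialog"], a[rel="async"]');
      if (!el) { return; }
      e.preventDefault();

      fetch(el.href)
        .then(function (r) { return r.text(); })
        .then(function (html) {
          if (el.rel === 'dialog') {
            showPopup(html);  // assumed: generic popup shell shipped in the head
          } else {
            // "async" gets content and inserts it into a target node
            document.getElementById(el.getAttribute('data-target')).innerHTML = html;
          }
        });
    });

    // Once the page is done, go fetch the rest of the JavaScript instead of
    // bundling it all up front:
    window.addEventListener('load', function () {
      var s = document.createElement('script');
      s.src = '/js/rest-of-the-site.js';  // hypothetical URL
      document.head.appendChild(s);
    });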

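Finally, a sketch of the client half of BigPipe from item 4.  The onPageletArrive() name and the payload shape are my guesses at the mechanism described above, not the real bigpipe.js API:

    // BigPipe-style client sketch.  The server flushes each pagelet as an
    // inline script tag, something like:
    //   <script>onPageletArrive({"id": "feed", "css": ["feed.css"],
    //     "js": ["feed.js"], "content": "<ul>…</ul>", "onLoad": "initFeed"});</script>
    function onPageletArrive(pagelet) {
      // 1. Download the pagelet's CSS first so it doesn't flash unstyled.
      loadResources('link', pagelet.css, function () {
        // 2. Display it by filling the placeholder div from the page template.
        document.getElementById(pagelet.id).innerHTML = pagelet.content;
        // 3. JS comes after display -- the page looks done before every
        //    pagelet is interactive, which is the whole point.
        loadResources('script', pagelet.js, function () {
          if (pagelet.onLoad) { window[pagelet.onLoad](); }
        });
      });
    }

    // Inject <link> or <script> tags; call done() once all have loaded.
    function loadResources(kind, urls, done) {
      var pending = (urls || []).length;
      if (pending === 0) { return done(); }
      urls.forEach(function (url) {
        var el = document.createElement(kind);
        if (kind === 'link') { el.rel = 'stylesheet'; el.href = url; }
        else { el.src = url; }
        el.onload = function () { if (--pending === 0) { done(); } };
        document.head.appendChild(el);
      });
    }
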
Whew!  All this was a success – on Dec 22 they hit their goal of making Facebook twice as fast.

Combine with ESIs (Edge Side Includes) for even more fun!

Thoughts from the Site Speed team:

To build a culture of performance…  Make t-shirts!  They gave shirts to those who made improvements.

  1. Get the right metrics – ones people buy into – and then they’ll work on optimizing them.
  2. Build the right abstractions to make the site fast by default; if devs use them, the site is fast without loads of extra work on their part.
  3. Partnership.  If other teams are committed you’ll have success.  Find the people that get it and work with them.  Ignore the ignorant.

Final thought – spriting!  He likes spriting.  It’s crazy, but the platform is a little broken so you have to do stuff like that.  Let’s fix the platform so you don’t have to do crazy stuff.  Fast by default!!!

And that’s a wrap for Velocity 2010!  Next, stuff from DevOpsDays, and my thoughts and reflections on what we’ve learned!
