Check out the Velocity 2010 Flickr set! And YouTube channel!
Time for lightning demos.
An HTTP browser proxy that does the usual waterfalls from your Web pages. Version 7 is out! You can change fonts. It works in both IE and Firefox, unlike the other stuff.
Rather than focus on ranking like YSlow/PageSpeed, they focus on showing specific requests that need attention. Same kind of data but from a different perspective. There are other warnings too, not all strictly perf related – security, etc. Exports and consumes HAR files (the new standard for HTTP waterfall interchange).
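HAR, for the curious, is just JSON underneath. A quick sketch of reading one and dumping a crude text waterfall (the field names follow the HAR spec; the rendering is my own toy, one character per ~50ms):

```typescript
// waterfall.ts – run with: ts-node waterfall.ts mypage.har
import { readFileSync } from "fs";

interface HarEntry {
  startedDateTime: string;                   // ISO 8601 request start time
  time: number;                              // total elapsed ms for the request
  request: { method: string; url: string };
  response: { status: number };
}

const har = JSON.parse(readFileSync(process.argv[2], "utf8"));
const entries: HarEntry[] = har.log.entries;
const t0 = Date.parse(entries[0].startedDateTime);

for (const e of entries) {
  const offset = Date.parse(e.startedDateTime) - t0;   // ms after first request
  const bar = " ".repeat(Math.round(offset / 50)) +
              "#".repeat(Math.max(1, Math.round(e.time / 50)));
  console.log(`${e.response.status} ${e.request.method} ${e.request.url.slice(0, 60)}`);
  console.log(`    ${bar}`);
}
```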
Based on AOL Pagetest, an IE module, but hosted. Can be installed as a private site too. It provides object timing and waterfalls, allows testing from multiple locations and network speeds, and saves results historically. Like a free single-snapshot version of Keynote/Gomez kinds of things.
Shows stuff like browser CPU and bandwidth utilization, and does visual comparisons, showing perceived performance in filmstrip view and video.
And does HAR import/export! Ah, collaboration.
The CPU/net metrics show some “why” behind gaps in the waterfalls.
The filmstrip side by side view shows different user experiences very well. And you can do video, as that is sexy.
They have a new UI built by a real designer (thanks Neustar!).
But what about Chrome, you ask? We have an extension for that now. Similar to PageSpeed. The waterfall timeline is beautiful, using real “Google Finance”-style visualization. The other guys aren’t RRDTool-ugly, but this is super purty.
It will deobfuscate JavaScript and integrates with Eclipse!
They’re less worried about the network waterfall and more about client-side render. A lot of the functionality is around that.
You can send someone a Speed Tracer dump file for debugging.
Are you tired of being browser dependent? Fiddler has your back.
New features… Hey, Platform Preview 3 for IE9 is out. It has some tools for capture and export; it captures traffic as an XML-serialized HAR. Fiddler imports standard JSON HAR and the IE9 XML HAR! And there’s HAR 1.1! Eeek. And WCAT. In other words, it imports lots of different stuff.
I want one of these to take in Wireshark captures and rip out all the crap and give me the HTTP view!
FiddlerCap Web recorder (fiddlercap.com) lets people record transactions and send them to you.
Side-by-side viewing with two Fiddlers if you launch with -viewer.
There’s a comparison extension called differ. Nice!
You can replay captures, including binaries now, with the AutoResponder tab. And it’ll play back latency soon.
We still await the perfect HTTP full capture and replay toolchain… We have our own HTTP log replayer we use for load tests and regression testing; if we could do this in volume it would rock…
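The core of a replayer really isn’t much. A hypothetical minimal sketch (made-up “METHOD URL” log format – nothing to do with Fiddler’s internals or our actual tool; assumes Node 18+ for built-in fetch; request bodies aren’t replayed):

```typescript
// replay.ts – run with: ts-node replay.ts access.log
import { readFileSync } from "fs";

async function replay(logFile: string, concurrency = 10) {
  // Each line is assumed to look like: "GET http://example.com/foo"
  const lines = readFileSync(logFile, "utf8").split("\n").filter(Boolean);
  let next = 0;

  async function worker() {
    while (next < lines.length) {
      const [method, url] = lines[next++].split(" ");
      const start = Date.now();
      try {
        const res = await fetch(url, { method });
        console.log(`${res.status} ${Date.now() - start}ms ${url}`);
      } catch (err) {
        console.log(`ERR ${url}: ${err}`);
      }
    }
  }

  // N workers pulling from one shared cursor = cheap concurrency control.
  await Promise.all(Array.from({ length: concurrency }, worker));
}

replay(process.argv[2]).catch(console.error);
```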
Caching analysis. And there’s a FiddlerCore library you can embed in your own app.
Now, Bobby Johnson of Facebook speaks on Moving Fast.
Building something isn’t hard, but you don’t know how people will use it, so you have to adapt quickly and make faster changes.
How do you get to a fast release cycle? Their biggest requirement is “the site can’t go down.” So they go to frequent small changes.
Most of the challenge when something goes wrong isn’t fixing it, it’s finding out what went wrong so you can fix it. Smaller changes make that easier.
They take a new thing, push some fake traffic to it, then push a % of user traffic, and then dial back if there are problems.
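The gist of a percentage rollout is simple enough to sketch (this is an assumed implementation, not Facebook’s actual gatekeeper; the feature name is made up):

```typescript
import { createHash } from "crypto";

// Current rollout percentage per feature; ops can edit this at runtime.
const rolloutPercent: Record<string, number> = {
  new_photo_upload: 5,   // 5% of users take the new code path
};

// Hash user+feature to a stable 0-99 bucket, so the same user always
// gets the same answer and each feature slices users independently.
function bucket(userId: string, feature: string): number {
  const h = createHash("md5").update(feature + ":" + userId).digest();
  return h.readUInt32BE(0) % 100;
}

function isEnabled(userId: string, feature: string): boolean {
  return bucket(userId, feature) < (rolloutPercent[feature] ?? 0);
}

// "Dial back if there's problems" is just lowering the number:
rolloutPercent.new_photo_upload = 0;
```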
If you aren’t watching something, it’ll slip. They had page load time at 5 seconds; a big improvement push got it to 2.5 s. But then it slips back up. How do you keep it fast (besides “don’t change anything”)? They changed their organization to make quick changes but still maintain performance.
What makes a site get slow? It’s not a deep black art. A lot of it is just not paying attention to your performance – you can’t expect new code to be free of bugs or performance problems.
- New code is slow
- More code is slow
“Age the code” by allocating time to shaking out performance problems – not just before deploy, but after.
- BigPipe – breaks the page into small pieces and pipelines them.
- Primer – a JavaScript library that bootstraps by downloading the minimum first.
Both separate code into a fast path and a slow path, and default to the slow path.
I have a new “poke” feature I want to try… I add it in as a lazy loaded thing and see if anyone cares, before I spend huge optimization time.
It gets popular! OK, time to figure out performance on the fast path.
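Sketching that lazy-loaded slow path (in the spirit of Primer, not its actual API – the /js/poke.js URL and Poke object below are made up):

```typescript
// Ship only a tiny bootstrap; pull a feature's real JS the first time
// someone actually uses it.
function lazyFeature(trigger: HTMLElement, scriptUrl: string, onReady: () => void) {
  trigger.addEventListener("click", () => {
    const s = document.createElement("script");
    s.src = scriptUrl;       // slow path: fetch the code on demand
    s.onload = onReady;      // then run the feature for real
    document.head.appendChild(s);
  }, { once: true });        // load at most once
}

// The hypothetical poke button costs ~nothing until someone pokes;
// only if it proves popular do you invest in a fast path.
lazyFeature(document.getElementById("poke")!, "/js/poke.js",
            () => (window as any).Poke.open());
```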
So they engineer performance per feature, which allows prioritization.
You can have a big metric collection and reporting tool. But that’s different from alerting tools. Granularity of alerting – no one wants to get paged on “this one server is slow.” But no one cares about “the whole page is 1% slower” either. But YOUR FEATURE is 50% slower than it was – that someone cares about. Your alert granularity needs to be at the level of “what a single person works on.”
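Sketching that granularity idea (metric shapes and the 50% threshold are assumed for illustration, not Facebook’s system):

```typescript
interface FeatureTiming { feature: string; owner: string; p50ms: number }

// Example data; in real life these come from your metrics store.
const yesterday: FeatureTiming[] = [{ feature: "chat", owner: "alice", p50ms: 120 }];
const today: FeatureTiming[]     = [{ feature: "chat", owner: "alice", p50ms: 200 }];

// Page only when a specific feature blows past its own baseline --
// not when one server hiccups, and not when the whole page drifts 1%.
function regressions(baseline: FeatureTiming[], current: FeatureTiming[], ratio = 1.5) {
  const base = new Map(baseline.map(t => [t.feature, t.p50ms] as [string, number]));
  return current.filter(t => {
    const b = base.get(t.feature);
    return b !== undefined && t.p50ms > b * ratio;
  });
}

for (const r of regressions(yesterday, today)) {
  console.log(`page ${r.owner}: ${r.feature} p50 is ${r.p50ms}ms, >50% over baseline`);
}
```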
No one is going to fix things they don’t care about. They also won’t unless they have control over them (think of deploying code someone else wrote and being blamed when it breaks the site!). And they need to be responsible for them.
They have tried various structures. A centralized team focuses on performance but doesn’t have control over it (except “say no” kinds of control).
Saying “every dev is responsible for their perf” distributes the responsibility well, but doesn’t make people care.
So they have adopted a middle road. There’s a central team that builds tools and works with product teams. On each product team there is a performance point person. This has been successful.
Lessons learned:
- New code is slow
- Give developers room to try things
- Nobody’s job is to say no
Joshua from Strangeloop on The Mobile Web
Here we all know that performance links directly to money.
But others (random corporate execs) doubt it. And what about the time you have to spend? And what about mobile?
We need to collect more data.
Case study – a 65% performance improvement brought a 6% order size increase and a 9% conversion increase.
For mobile, a 40% perf improvement led to a 3% order size increase and 5% better conversion.
They have a graph of conversion rate fall-off by landing page speed, so you can say what a 2-second improvement is worth. And they have preliminary data on mobile too.
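Back-of-envelope, that kind of graph lets you do math like this (all inputs below are made up for illustration; only the 9% lift echoes the case study):

```typescript
const monthlyVisits = 100_000;       // assumed traffic
const baselineConversion = 0.02;     // assumed: 2% of visits convert
const avgOrderValue = 80;            // assumed dollars per order
const relativeLift = 0.09;           // e.g. the 9% conversion lift above

const baselineRevenue = monthlyVisits * baselineConversion * avgOrderValue;
const extraRevenue = baselineRevenue * relativeLift;
console.log(`~$${extraRevenue.toFixed(0)}/month from the speedup`); // ~$14400
```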
I think he’s choosing a very confusing way to say you need metrics to establish the ROI of performance changes. And MOBILE IS PAYING MONEY RIGHT NOW!
Cheryl Ainoa from Yahoo! on Innovation at Scale
The challenges of scale – technical complexity, outgrowing many tools and techniques, no off hours, and being a target for abuse.
Case Study: Fighting Spam with Hadoop
Google Groups was sending 20M emails/day to Taiwan, and there are only 18M Internet users in Taiwan. What can help? Nothing existing could handle that volume (SpamCop, etc.), and running their own rules takes a couple of days. So they used Hadoop to derive indications of “spammy” groups in parallel. It cut mail delivered by 5x.
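The shape of that Hadoop job, as a toy (the actual signals Yahoo! used weren’t given; sheer per-group volume is my stand-in for “spammy”, and the threshold is invented):

```typescript
type MailLogLine = { groupId: string; recipient: string };
const mailLog: MailLogLine[] = [];   // one entry per delivered message (placeholder)

// Map phase: emit (groupId, 1) for every delivered message.
function mapPhase(lines: MailLogLine[]): Array<[string, number]> {
  return lines.map(l => [l.groupId, 1] as [string, number]);
}

// Reduce phase: sum counts per group.
function reducePhase(pairs: Array<[string, number]>): Map<string, number> {
  const counts = new Map<string, number>();
  for (const [group, n] of pairs) counts.set(group, (counts.get(group) ?? 0) + n);
  return counts;
}

const SPAMMY_THRESHOLD = 1_000_000;  // assumed: far more daily mail than any legit group
const flagged = [...reducePhase(mapPhase(mailLog)).entries()]
  .filter(([, count]) => count > SPAMMY_THRESHOLD)
  .map(([group]) => group);
console.log("groups to throttle:", flagged);
```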
Edge pods – small compute footprints to optimize cost and performance. You can’t replicate your whole setup globally, but building on the CDN model you can add some compute capability at the edge in “pods.” They have a proxy called YCPI to do this with.
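A toy illustration of the pod idea (YCPI itself wasn’t shown, and this isn’t it – just a demo of terminating user connections near them while reusing warm keep-alive connections back to a distant origin; the origin host is made up):

```typescript
import * as http from "http";

const origin = { host: "origin.example.com", port: 80 };               // assumed far-away origin
const backhaul = new http.Agent({ keepAlive: true, maxSockets: 100 }); // warm reusable connections

http.createServer((req, res) => {
  const upstream = http.request(
    { ...origin, path: req.url, method: req.method, headers: req.headers, agent: backhaul },
    (up) => {
      res.writeHead(up.statusCode ?? 502, up.headers);
      up.pipe(res);                  // stream the origin's response back to the user
    }
  );
  upstream.on("error", () => { res.writeHead(502); res.end(); });
  req.pipe(upstream);                // forward the request (and any body) upstream
}).listen(8080);                     // nearby users hit this pod, not the far origin
```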
And we’re out of time!