Velocity 2013 Day 2 Liveblog – Application Resilience Engineering and Operations at Netflix

Application Resilience Engineering and Operations at Netflix  by Ben Christensen (@benjchristensen)

Netflix and resilience. We have all this infrastructure failover stuff, but each application has dozens of dependencies, any one of which can take it down.

They needed speed of iteration, client libraries (just saying “here’s my REST service” isn’t good enough), and support for a mixed technical environment.

They like the Bulkheading pattern (read Michael Nygard’s Release It! to find out what that is). They want the app to degrade gracefully if one of its dozen dependencies fails. So they wrote Hystrix.

1. Use a tryable semaphore in front of every library they talk to. Use it to shed load. (circuit breaker)

2. Replace that with a thread pool, which adds the benefit of thread isolation and timeouts.

A request gets created and goes through the circuit breaker, runs, then health data gets fed back into the front. Errors all go back into the same channel.

The “HystrixCommand” class provides fail fast, fail silent (intercept, especially for optional functionality, and replace with an appropriate null), stubbed fallback (try with the limited data you have – e.g. can’t fetch the video bookmark, send you to the start instead of breaking playback), fallback via network (like to a stale cache or whatever).
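Hystrix itself is Java; for flavor, here’s a minimal Python sketch of the bulkhead-plus-fallback idea. The class and method names are mine, not Hystrix’s API, and the circuit-breaker health feedback is omitted:

```python
import threading

class Command(object):
    """Sketch of the bulkhead + fallback pattern (not Hystrix's actual API).

    A non-blocking ("tryable") semaphore caps concurrent calls into one
    dependency; when the bulkhead is full or the call fails, we degrade
    to a fallback instead of letting the failure cascade.
    """
    def __init__(self, max_concurrent=10):
        self.bulkhead = threading.Semaphore(max_concurrent)

    def run(self):       # the actual dependency call
        raise NotImplementedError

    def fallback(self):  # fail-silent / stubbed / via-network response
        raise NotImplementedError

    def execute(self):
        # Tryable semaphore: never block; shed load immediately when full.
        if not self.bulkhead.acquire(blocking=False):
            return self.fallback()
        try:
            return self.run()
        except Exception:
            return self.fallback()  # errors feed the same channel
        finally:
            self.bulkhead.release()

class GetBookmark(Command):
    """Stubbed fallback: can't fetch the bookmark? Start playback at 0:00."""
    def run(self):
        raise TimeoutError("bookmark service timed out")  # stand-in for a flaky call
    def fallback(self):
        return 0

print(GetBookmark().execute())  # -> 0, playback starts instead of breaking
```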

Moved into operational mode. How do you know that failures are generating fallbacks? Line-graph syncup, in this case (weird). But they instrument all of this for a purty dashboard: lots of low-latency, granular metrics about bulkhead/circuit-breaker activity get pushed into a stream.

Here’s where I got confused.  I guess we moved on from Hystrix and are just on to “random good practices.”

Make low latency config changes across a cluster too. Push across a cluster in seconds.

Auditing via simulation (you know, the monkeys).

When deploying, deploy to canary fleet first, and “Zuul” routing layer manages it. You know canarying.  But then…

“Squeeze” testing – every deploy is burned as an AMI, and they push it to the point of performance degradation to learn the rps/load it can take. Most recently, they added…

“Coalmine” testing – finding the “unknown unknowns” – an environment running current code that captures all network traffic, especially traffic crossing bulkheads. So when, say, a new feature that’s behind a feature flag (and thus not caught in canary) suddenly starts getting traffic, you can see it.

So when a problem happens – the failure is isolated by bulkheads and the cluster adapts by flipping its circuit breaker.

Distributed systems are complex.  Isolate relationships between them.

Auditing and operations are essential.


Velocity 2013 Day 2 Liveblog: a baseline for web performance with phantomjs

The talk I’m most excited about for today is next! I made sure to get here early…

@wesleyhales from Apigee and @ryanbridges from CNN

Quick overview of load testing tools: Firebug, Charles, HAR viewers and whatnot; but it’s all super manual.
Better: Selenium, but it’s old, yo, and not hip anymore.
There are services out there you can use too: harstorage.com, HAR viewer, etc.
Webpagetest.org gets pimped again, but apparently it caused an internal argument at CNN.

Performance basics
– caching
– gzip: but don’t gzip content that’s already compressed, like JPEGs (see the sketch after this list)
– know when to pull from CDNs
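A quick stdlib demonstration of the gzip point – random bytes stand in for already-compressed JPEG data (my example, not the speakers’):

```python
import os, zlib

text = b"hello hello hello " * 1000   # repetitive, like HTML/CSS/JS
jpeg_like = os.urandom(18000)         # high-entropy, like JPEG bytes

print(len(text), "->", len(zlib.compress(text)))            # shrinks dramatically
print(len(jpeg_like), "->", len(zlib.compress(jpeg_like)))  # stays the same or grows
```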

Ah! New term bingo! “Front end ops” – aka it sucks to code something and then realize there’s still a ton to do to make it perform. Continued definition:
– keep an eye on perf
– manager of builds and dependencies (grunt etc)
– expert on delivering content from server to browser
– critical of new http requests/file sizes and load times

I’m realizing that building front ends is a lot more like building server side code….

Wes recommends having a front end performance ops position and better analytics.

A chart of CNN’s web page load times is shown.

So basically, every time CNN.com is built by Bamboo, the page load time is measured, saved, and analyzed. They use phantomjs for this, which became Loadreport.js.

Loadreport.wesleyhales.com is the URL for it.
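The gist of their setup, sketched in Python: run loadreport.js under phantomjs from the build and fail on regression. The budget and the JSON field name here are my guesses, not loadreport.js’s documented interface:

```python
import json, subprocess, sys

BUDGET_MS = 3000  # hypothetical performance budget for the build to enforce

# Assumes phantomjs and loadreport.js are on the build box; args are illustrative.
out = subprocess.check_output(
    ["phantomjs", "loadreport.js", "http://www.example.com/"])
load_ms = json.loads(out)["loadTime"]  # field name is an assumption

if load_ms > BUDGET_MS:
    sys.exit("page load %d ms blew the %d ms budget" % (load_ms, BUDGET_MS))
```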

Filmstrip is a cool idea that stores filmstrips of all pages loaded.
Speed reports is another visualization that was written.

Hard parts
– performance issues need more thought; figure out your baseline early
– advertisers use document.write
– server location
– browser types: DIY options are harder
– CPU activity: use a consistent environment

All in all, this has many of the same concerns as server-side performance work.

CI setup
– bamboo
– Jenkins
– barebones Linux without X11
– vagrant

A demo was shown that used Travis CI as the CI system.

All in all, everyone uses phantomjs for testing; check it out; look at fluxui on github for more!


Velocity 2013 Day 2 Liveblog: Performance Troubleshooting Methodology

Stop the Guessing: Performance Methodologies for Production Systems

Slides are on Slideshare!

Brendan Gregg, Joyent

Note to the reader – this session ruled.

He’s the DTrace guy, but he’s talking about performance for the rest of us. Coming soon: his book, Systems Performance: Enterprise and the Cloud.

Performance analysis – where do I start and what do I do?  It’s like troubleshooting, it’s easy to fumble around without a playbook. “Tools” are not the answer any more than they’re the answer to “how do I fix my car?”

Guessing Methodologies and Not Guessing Methodologies (the former are bad)

Anti-Methods

Traffic light anti-method

Monitors green? You’re fine. But of course thresholds are a coarse-grained tool, and performance is complex. Is X bad? “Well, sometimes, except when X, but then when Y, but…” False positives and false negatives abound.

You can improve it with frankly subjective indicators (like weather icons); save the objective ones – errors, alerts, SLAs – for facts.

see dtrace.org status dashboard blog post

So traffic light is intuitive and fast to set up but it’s misleading and causes thrash.

Average anti-method

Measure the average/mean, assume a normal-like unimodal distribution and then focus your investigation on explaining the average.

This misses multiple peaks, outliers.

Fix this by adding histograms, density plots, frequency trails, scatter plots, heat maps
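A tiny illustration (mine, stdlib only) of what the average hides when latency is bimodal:

```python
import random, statistics

# Bimodal latency: 90% of requests ~10 ms, 10% ~900 ms (cache misses, say).
latencies = ([random.gauss(10, 2) for _ in range(9000)] +
             [random.gauss(900, 50) for _ in range(1000)])

print("mean: %.0f ms" % statistics.mean(latencies))  # ~99 ms, matches almost no request
for lo in range(0, 1100, 100):                       # crude text histogram: two peaks
    n = sum(lo <= x < lo + 100 for x in latencies)
    print("%4d-%4d ms %s" % (lo, lo + 100, "#" * (n // 200)))
```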

Concentration game anti-method

Pick a metric, find another that looks like it, investigate.

Simple and can discover correlations, but it’s time consuming and mostly you get more symptoms and not the cause.

Workload characterization method

Who is causing the load? Why, what, and how? The target is the workload, not the performance.

It lets you eliminate unnecessary work. It only solves load issues, though, and most of what you examine won’t be the problem.

[Ed: When we did our Black Friday performance visualizer I told them “If I can’t see incoming traffic on the same screen as the latency then it’s bullshit.”]

USE method

For every resource, check utilization, saturation, errors.

util: time resource busy

sat: degree of queued extra work

Finds your bottlenecks quickly

Metrics that are hard to get become feature requests.

You can apply this methodology without knowledge of the system (he did the Apollo 11 command module as an example).

See the use method blog post for detailed commands

For cloud computing you also need the “virtual” resource limits – e.g., instance network caps – plus app-level stuff like mutex locks and thread pools. Decompose the app environment into queueing systems.

[Ed: Everything is pools and queues…]

So go home and, for your system and app environment, create a USE checklist and fill out the metrics you have. Then you know what you have, you know what you don’t have, and you’ve got a checklist for troubleshooting.
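As a starting point, here’s what one row of that checklist might look like as code: a rough CPU-only USE sample for Linux, reading /proc directly (my sketch, not Brendan’s tooling):

```python
import os, time

def cpu_utilization(interval=1.0):
    """Utilization: fraction of time the CPUs were busy over the interval."""
    def snap():
        vals = [int(f) for f in open("/proc/stat").readline().split()[1:]]
        return vals[3] + vals[4], sum(vals)   # idle + iowait, and total
    idle1, total1 = snap()
    time.sleep(interval)
    idle2, total2 = snap()
    return 1.0 - (idle2 - idle1) / float(total2 - total1)

def cpu_saturation():
    """Saturation (roughly): runnable threads queued beyond the CPUs available."""
    runq = float(open("/proc/loadavg").read().split()[0])
    return max(0.0, runq - os.cpu_count())

print("CPU util %.0f%%, saturation %.1f" % (cpu_utilization() * 100, cpu_saturation()))
# CPU errors (e.g. MCEs) are hard to get cheaply -- that metric becomes a feature request.
```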

So this is bad ass and efficient, but limited to resource bottlenecks.

Thread State Analysis Method

Six states – executing, runnable, anon paging, sleeping, lock, idle

Getting this isn’t super easy, but dtrace, schedstats, delay accounting, I/O accounting, /proc

Knowing where the time goes leads to direct actionables.

Compare to e.g. database query time – it’s not self contained. “Time spent in X” – is it really? Or is it contention?

So this identifies, quantifies, and directs, but it’s hard to measure all the states at the moment.
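On Linux you can get a coarse version by sampling task states from /proc. The mapping is rougher than the six-state model (R covers both executing and runnable; D is usually I/O, sometimes lock waits), but it’s a start. A sketch of mine:

```python
import collections, glob, sys

def thread_states(pid):
    """Count the scheduler state of every thread in one process."""
    counts = collections.Counter()
    for stat in glob.glob("/proc/%d/task/*/stat" % pid):
        with open(stat) as f:
            # The state field comes right after the parenthesized command name.
            counts[f.read().rsplit(")", 1)[1].split()[0]] += 1
    return counts

print(thread_states(int(sys.argv[1])))  # e.g. Counter({'S': 40, 'R': 3, 'D': 1})
```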

There’s many more if perf is your day job!

Stop the guessing and go with methodologies that pose questions and seek metrics to answer them. P.S. Use DTrace!

dtrace.org/blogs/brendan


Velocity 2013 Day 2 Liveblog: CSS and gpu cheat sheet

I was headed to the CSS and gpu talk by Colt McAnlis (#perfmatters on twitter)

CSS properties’ paint times aren’t free. Depending on which properties you use, you can end up with slow rendering. Box shadows and border-radius strokes are the slowest (1.09 ms per paint). That’s pretty crazy; I didn’t realize it could be that slow.

We’re mostly talking about CSS optimizations that use the GPU and CPU in Chrome.

Kinds of layering controls
– load-time layer promotion: some elements get their own layer by default (e.g. canvas, plugins, video, iframe)
– assign-time layer promotion (translateZ, rotateX/Y/Z)
– animations
– stacking context and relative scrolling

– Too many layers use additional memory, and you fill up the GPU tile cache.
– Chrome prepaints tiles that are visible and some that are not yet visible.

Side note: Colt loves ducks, and is sad about losing his hair 😦

– Large images that get resized take forever, and the resized versions aren’t cached on the GPU. Think about this especially for mobile devices.

Tooling
– turn on “show layer borders” in Chrome DevTools. It’ll help with translateZ issues etc.
– use continuous paint mode to repaint the page constantly and see what’s expensive

Takeaways
– gpu and layers helps with faster rendering
– too many layers is a bad idea
– CSS properties impact page loads and rendering


Velocity 2013 Day 2 Liveblog – The Keynotes

Had some tasty Afghan food last night and turned in reasonably early to prepare for the deluge today!

So, the keynotes. Steve Souders & John Allspaw kick us off as the MCs. It’s streamed live, so you should be able to watch it (this will let you know what parts to skip… hint: everything but the Swede).

The wireless is completely borked.  I’m having to come back to my hotel room over lunch to upload this.  Boo.

Allspaw is rocking a New York shirt. “New York!” Very light applause, lol. There’s now a NYC Velocity, plus London and China. Maybe it’s my own MC style talking, but there are nowhere near enough ass jokes.

Allspaw is the philosopher of the group. First night we were here, Gene Kim and I were talking with Marcus from Etsy about him.  Gene: “He’s a philosopher!  He’s a warrior poet!”  Me: “Yep, he sure Yodas that shit up!” Drinks were involved.

Go to bit.ly/VelocityFavorites and vote for your favorite books and stuff!

They also want speaker feedback, give 5 and get a signed O’Reilly book at 6 tonight! Ok, you asked for it…

What, Where And When Is Risk In System Design?

In what turned out to be the best part of all the keynotes, Johan Bergstrom from Lund University in Sweden spoke about risk in system design (i.e., when will Amazon go down again).

Is risk from unreliable components or from complexity?  Traditional risk evaluation is about determining the likelihood of every single failure event and its impact.

It’s reliable when all the parts work according to the rules; reductionist.

The most unreliable component is the human actor – that’s what AWS etc. blame for outages. Example of monetizing tech debt/risk: incremental risk of outage * cost of outage.

So what do we do to mitigate this risk?  Redundant barriers, the defense in depth or “layers of Swiss cheese.”

Or reduce variability by removing humans from the mix. Process and automation.

But what if risk is a product of non-linear interactions and relations (complexity)?

An ecosystem model, hard to completely characterize and barriers may increase interactions.

So risk as a path dependent process and as a control problem.

Path dependency – software is now so complex that no one can fully understand, evaluate, or test it.

Technical debt vs normalization of deviance

Control problem. You have boundaries of unacceptable functionality/risk, workload, and finances/efficiency. You can only know you’ve crossed the risk boundary once you’ve passed it. The other boundaries exert pressure toward a least-effort/most-efficient solution.

Risk and safety are both products of performance variability.

So to manage risk in this sense,

Keep talking about risk even when things look safe

  • Invite minority opinion and doubt
  • Debate the boundaries
  • Monitor the gap between work as prescribed and work as performed
  • Focus on how people make the tradeoffs that guarantee safety

Hollnagel – Safety management is not about avoiding – it is about achieving

Which is it? We ask the wrong question ha ha!

Risk is a game played between values and frames of reference.

Make your values explicit.

slides at jbsafety.se

Keynote

Vik Chaudhary from Keynote for his annual sales pitch

I like Keynote and we’re a Keynote customer, but I like Keynote a little less every time I have to sit through this crap.

Compuware

Alois Reitbauer on Compuware APM. “We do mobile now!” Another sales pitch.

Obama for America

Kyle Rush on the Obama for America site (dir of tech, New Yorker)

Started with a small, simple site: a load balancer in front of 7 web nodes and 2 payment nodes.

Added a reverse proxied payment API

Then they went to the Jekyll Ruby CMS, with GitHub for version control and static content in S3.

Added Akamai as a CDN, did other front end perf engineering

Much faster and lighter

Optimizely for A/B testing; the faster page had a 14% higher conversion rate (worth $32M).

GTM failover to 2 regions under Route 53 round robin.

1,101 front-end deploys, 4K lines of JS, 240 A/B tests.

Lightning demos!

Guy (@guypod) from Akamai on Akamai IO, the Internet Observatory: web-wide stats, basically their massive web logs presented as data graphs. Check it out.

@ManishLachwani from Appurify on their mobile continuous integration and testing platform

Runtime HTML5 and native debugger for mobile.

100k SDK will be free.

@dougsillars from AT&T on the Application Resource Optimizer (developer.att.com/ARO)

It shows the data flow from your app and suggests improvements. It takes pcap traces from mobile and grades them against best practices.

Nice – like ACE+YSlow for mobile.

Making the Web Faster

Arvind Jain from Google on making the Web faster.

Peak connection speeds have tripled in 5 years

Latency is going down; cable averages 26 ms.

js speed improvements

But, pages are getting fatter – 1.5 MB average!!!

Net year over year: desktop is 5% faster, mobile 30% faster.

Devs will keep adding stuff until page load hits about 3 seconds.


Velocity 2013 Day 1 Liveblog – Hands-on Web Performance Optimization Workshop

OK we’re wrapping up the programming on Day 1 of Velocity 2013 with a Hands-on Web Performance Optimization Workshop.

Velocity started as equal parts web front-end performance and operations; I was into both, but my path led me more to the operations side. Now I’m trying to catch up a bit – the whole CSS/JS/etc. world has grown so big it’s hard to sideline in it. But here I am! And naturally performance guru Steve Souders is here. He kindly asked about Peco, who isn’t here yet but will be tomorrow.

One of the speakers is wearing a Google Glass, how cute.  It’s the only other one I’ve seen besides @victortrac’s. Oh, the guy’s from Google, that explains it.

@sergeyche (TruTV), @andydavies (Asteno), and @rick_viscomi (Google/YouTube) are our speakers.

We get to submit URLs in realtime for evaluation at man.gl/wpoworkshop!

Tool Roundup

Up comes webpagetest.org, the great web site for testing URLs. They have a special test farm set up for us, but the abhorrent conference wireless largely prevents us from using it. “It vill disappear like pumpkin vunce it is over” – sounds great in a Russian accent.

YSlow the ever-popular browser extension is at yslow.org.

Google Pagespeed Insights is a newer option.

showslow.com trends those webpagetest metrics over time for your site.

Real Page Tests

Hmm, since at Bazaarvoice we don’t really have pages per se, we’re just embedded in our clients’ sites, not sure what to submit!  Maybe I’ll put in ni.com for old times’ sake, or a BV client. Ah, Nordstrom’s already submitted, I’ll add Yankee Candle for devious reasons of my own.

redrobin.com – 3 A’s, 3 F’s. No excuse for not turning on gzip. Shows the performance golden rule – 10% of the time is back end and 90% is front end.

“Why is my time to first byte slow?”  That’s back end, not front end, you need another tool for that.

nsa.gov – comes back all zeroes.  General laughter.

Gus Mayer – image carousel, but the first image it displays is the very last one it loads. See the filmstrip view for how it looks over time. Takes like 6 seconds.

Always have a favicon – don’t let it 404. And especially don’t serve a 40k custom 404 error page for it. [Ed. I’ll be honest, we discovered we were doing exactly that at NI many years ago.] Skipping all those errors saves infrastructure cost, too.

Use 85% lossy compression on images. You can’t tell the difference even on this nice Mac, and it saves so much bandwidth.
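In Python terms, that tip is a near one-liner with Pillow (filenames are hypothetical):

```python
from PIL import Image  # pip install Pillow

img = Image.open("hero.png")  # hypothetical source image
img.convert("RGB").save("hero.jpg", "JPEG", quality=85, optimize=True)
# quality=85 is visually near-lossless for photos and far smaller than 95-100.
```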

sitespeed.io will crawl your whole site

speedcurve is a paid service built on webpagetest.

Remember webpagetest is open source, you can load it up yourself (“How can we trust your dirty public servers!?!” says a spectator).

Mobile

webpagetest has some mobile agents

httpwatch for iOS


Velocity 2013 Day 1 Liveblog – Using Amazon Web Services for MySQL at Scale

Next up is Using Amazon Web Services for MySQL at Scale. I missed the first bit, on RDS vs EC2, because I tried to get into Choose Your Weapon: A Survey For Different Visualizations Of Performance Data but it was packed.

AWS Scaling Options

Aside: use boto
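For instance, inventorying the MySQL fleet with the boto of that era (boto 2); the region and tag are made up:

```python
import boto.ec2  # boto 2; credentials come from the environment or boto config

conn = boto.ec2.connect_to_region("us-east-1")
for reservation in conn.get_all_instances(filters={"tag:role": "mysql"}):
    for inst in reservation.instances:
        print(inst.id, inst.instance_type, inst.placement, inst.state)
```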

vertical scaling – tune, add hw.

table level partitioning – smaller indexes, etc., and you can drop partitions instead of deleting rows

functional partitioning (move apps out)

need more reads? add replicas, cache tier, tune ORM

replication lag? see above, plus multiple schemas for parallel replication (MySQL 5.6/Tungsten). Take some work out of the DB (timestamp updates, queues, non-transactional reads), pre-warm caches, relax durability

writes? above plus sharding

sharding by row range requires frequent rebalancing

hash/modulus based – better distribution but harder to rebalance; use prebuilt shards

lookup table based (see the sketch below)
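The trade-off between the last two in a few lines of Python (names and shard count are hypothetical):

```python
NUM_SHARDS = 16  # prebuilt shards

def shard_by_modulus(user_id):
    # Even distribution, but changing NUM_SHARDS remaps nearly every row.
    return user_id % NUM_SHARDS

# Lookup-table sharding: an indirection you can rebalance one entry at a
# time; the table itself lives in a small, heavily cached store.
shard_map = {}  # user_id -> shard, maintained out of band

def shard_by_lookup(user_id):
    # Fall back to hashing for unplaced users; real systems assign
    # explicitly so rows can be moved without rehashing everyone.
    return shard_map.get(user_id, shard_by_modulus(user_id))
```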

Availability

In EC2 you have regions and AZs. AZs are supposed to be “separate” but have some history of going down with each other.

A given region is about 99.2% up historically.

RDS has multi-AZ replica failover

Pure EC2 options:

  • master/replicas – async replication; but data drift, and it’s fragile (you need rapid rebuilds). MySQL MHA for failover, HAProxy in front (see the Palomino blog)
  • Tungsten – replaces replication and the cluster manager. Good stuff.
  • Galera – Galera/XtraDB/MariaDB synchronous replication

Performance

io/storage: provisioned IOPS. Also, ephemeral SSDs to power replicas.

RDS has better network perf; its block replication affects speed.

instance types – general purpose, CPU-optimized, memory-optimized, storage-optimized. They tend to use memory-optimized, EBS-optimized instances. Cluster and dedicated options are also available.

EC2 storage – ephemeral, ephemeral SSD (superfast!), EBS (slightly slower), EBS PIOPS (faster, consistent, expensive, lower failure rate)

Mitigating Failures

Local failures should not be a problem.  AZs, run books, game days, monitoring.

Regional failures – if you have good replication and fast DNS flipping…

You may do master/master but active/active is a myth.

Backups – snapshot frequently, and put some in S3/Glacier for the long term. Maybe copy them out of Amazon from time to time to make your auditors happy.
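In boto 2 terms, the frequent-snapshot half might look like this (the volume ID is hypothetical; the S3/Glacier aging would be separate lifecycle plumbing):

```python
import boto.ec2
from datetime import datetime

conn = boto.ec2.connect_to_region("us-east-1")
snap = conn.create_snapshot("vol-1234abcd",  # hypothetical data volume
                            "mysql nightly %s" % datetime.utcnow().date())
print(snap.id)
```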

Cost

Remember, you spend money every minute. There are tools out there to help with this (and Netflix released Ice today to that end).
