
Velocity 2013 Day 2 Liveblog: CSS and GPU cheat sheet

I headed to the CSS and GPU talk by Colt McAnlis (#perfmatters on Twitter).

CSS properties aren’t free – each one has a paint cost, and depending on which properties you use, you could end up with slow rendering. Box shadows and border-radius strokes are the slowest (1.09 ms per paint). That is pretty crazy, and I didn’t realize it could be that slow.

We’re mostly talking about CSS optimizations that let Chrome render on the GPU rather than the CPU.

Kinds of layering controls
– Load-time layer promotion: some elements get their own layer by default (e.g. canvas, plugins, video, iframe)
– Assign-time layer promotion: translateZ, rotateX/Y/Z
– Animations
– Stacking context and relative scrolling

– Too many layers use additional memory, and you fill up the GPU tile cache.
– Chrome pre-paints tiles that are visible and some that aren’t visible yet.

Side note: Colt loves ducks, and is sad about losing his hair 😦

– Resizing large images takes forever, and the resized images aren’t cached on the GPU. Think about this especially for mobile devices.

Tooling
– Turn on “show layer borders” in Chrome DevTools. It’ll help with translateZ issues etc.
– Use continuous page repaint mode to repaint the page continuously and see how expensive your paints are.

Takeaways
– The GPU and layers help with faster rendering
– Too many layers is a bad idea
– CSS properties impact page load and rendering times


Filed under Conferences, DevOps

Velocity 2013 Day 2 Liveblog – The Keynotes

Had some tasty Afghan food last night and turned in reasonably early to prepare for the deluge today!

So, the keynotes. Steve Souders & John Allspaw kick us off as the MCs. It’s streamed live so you should be able to watch it (this will let you know what parts to skip… hint: everything but the Swede).

The wireless is completely borked.  I’m having to come back to my hotel room over lunch to upload this.  Boo.

Allspaw is rocking a New York shirt.  “New York!” Very light applause, lol.  There’s now a NYC Velocity, London, and China.  Maybe it’s my own MC style talking but there’s not near enough ass jokes.

Allspaw is the philosopher of the group. First night we were here, Gene Kim and I were talking with Marcus from Etsy about him.  Gene: “He’s a philosopher!  He’s a warrior poet!”  Me: “Yep, he sure Yodas that shit up!” Drinks were involved.

Go to bit.ly/VelocityFavorites and vote for your favorite books and stuff!

They also want speaker feedback, give 5 and get a signed O’Reilly book at 6 tonight! Ok, you asked for it…

What, Where And When Is Risk In System Design?

In what turned out to be the best part of all the keynotes, Johan Bergstrom from Lund University in Sweden spoke about risk in system design (i.e. when will Amazon go down again).

Is risk from unreliable components or from complexity?  Traditional risk evaluation is about determining the likelihood of every single failure event and its impact.

In this view a system is reliable when all the parts work according to the rules; it’s reductionist.

The most unreliable component is the human actor – that’s what gets blamed by AWS etc. for outages. Example of monetizing tech debt/risk: incremental risk of outage * cost of outage.

So what do we do to mitigate this risk?  Redundant barriers, the defense in depth or “layers of Swiss cheese.”

Or reduce variability by removing humans from the mix. Process and automation.

But what if risk is a product of non-linear interactions and relations (complexity)?

It’s an ecosystem model – hard to completely characterize, and barriers may actually increase interactions.

So risk as a path dependent process and as a control problem.

Path dependency – software is so complex now no one can fully understand, evaluate, or test it.

Technical debt vs normalization of deviance

Control problem. There are boundaries of unacceptable functioning (risk), workload, and finances/efficiency. You only know you’ve crossed the risk boundary once you’ve passed it. The other boundaries exert pressure toward a least-effort/most-efficient solution.

Risk and safety are both products of performance variability.

So to manage risk in this sense:

  • Keep talking about risk even when things look safe
  • Invite minority opinion and doubt
  • Debate boundaries
  • Monitor the gap between work as prescribed and work as performed
  • Focus on how people make the tradeoffs that guarantee safety

Hollnagel – Safety management is not about avoiding – it is about achieving

Which is it? We ask the wrong question ha ha!

Risk is a game played between values and frames of reference.

Make your values explicit.

slides at jbsafety.se

Keynote

Vik Chaudhary from Keynote for his annual sales pitch

I like Keynote and we’re a Keynote customer, but I like Keynote a little less every time I have to sit through this crap.

 Compuware

Alois Reitbauer on Compuware APM. “We do mobile now!” Another sales pitch.

 

 Obama for America

Kyle Rush on the Obama for America site (dir of tech, new yorker)

Started with a small simple site, a load balancer to 7 web nodes and 2 payment nodes.

Added a reverse proxied payment API

Then went to the Jekyll Ruby CMS with GitHub for version control, serving static content from S3

Added Akamai as a CDN, did other front end perf engineering

Much faster and lighter

Optimizely for A/B testing; the faster page had a 14% higher conversion rate ($32M)

GTM failover to 2 regions under route 53 round robin

1,101 front-end deploys, 4K lines of JS, 240 A/B tests

 

Lightning demos!

Guy Podjarny (@guypod) from Akamai on Akamai IO, the Internet Observatory – check out the Web-wide stats. Basically their massive Web logs as data graphs.

 

@ManishLachwani from Appurify on their mobile continuous integration and testing platform

Runtime HTML5 and native debugger for mobile.

100k SDK will be free.

 

@dougsillars from AT&T on Application Resource Optimizer (developer.att.com/ARO)

See data flow from app, suggest improvements

Takes pcap traces from mobile, grades against best practices

Nice, like ACE+YSlow for mobile.

 

 Making the Web Faster

Arvind Jain from Google on making the Web faster.

Peak connection speeds have tripled in 5 years

Latency going down, cable 26 ms avg

js speed improvements

But, pages are getting fatter – 1.5 MB average!!!

Net year over year: desktop is 5% faster, mobile 30%.

Devs will keep adding stuff in until pages hit about 3 seconds.


Filed under Conferences, DevOps

Velocity 2013 Day 1 Liveblog – Hands-on Web Performance Optimization Workshop

OK we’re wrapping up the programming on Day 1 of Velocity 2013 with a Hands-on Web Performance Optimization Workshop.

Velocity started as equal parts Web front end performance stuff and operations; I was into both but my path led me more to the operations side, so now I’m trying to catch up a bit – the whole CSS/JS/etc. world has grown so big it’s hard to sideline in it.  But here I am!  And naturally performance guru Steve Souders is here.  He kindly asked about Peco, who isn’t here yet but will be tomorrow.

One of the speakers is wearing a Google Glass, how cute.  It’s the only other one I’ve seen besides @victortrac’s. Oh, the guy’s from Google, that explains it.

@sergeyche (TruTV), @andydavies (Asteno), and @rick_viscomi (Google/YouTube) are our speakers.

We get to submit URLs in realtime for evaluation at man.gl/wpoworkshop!

Tool Roundup

Up comes webpagetest.org, the great Web site to test some URLs. They have a special test farm set up for us, but the abhorrent conference wireless largely prevents us from using it. “It vill disappear like pumpkin vunce it is over” – sounds great in a Russian accent.

YSlow, the ever-popular browser extension, is at yslow.org.

Google Pagespeed Insights is a newer option.

showslow.com trends those webpagetest metrics over time for your site.

Real Page Tests

Hmm, since at Bazaarvoice we don’t really have pages per se, we’re just embedded in our clients’ sites, not sure what to submit!  Maybe I’ll put in ni.com for old times’ sake, or a BV client. Ah, Nordstrom’s already submitted, I’ll add Yankee Candle for devious reasons of my own.

redrobin.com – 3 A’s, 3 F’s. No excuse for not turning on gzip. Shows the performance golden rule – 10% of the time is back end and 90% is front end.

“Why is my time to first byte slow?”  That’s back end, not front end, you need another tool for that.

nsa.gov – comes back all zeroes.  General laughter.

Gus Mayer – image carousel, but the first image it displays is the very last it loads.  See the filmstrip view to see how it looks over time. Takes like 6 seconds.

Always have a favicon – don’t have it 404. And especially don’t send them 40k custom 404 error pages. [Ed. I’ll be honest, we discovered we were doing that at NI many years ago.] It saves infrastructure cost to not have all those errors in there.

Use 85% lossy compression on images.  You can’t tell even on this nice Mac and it saves so much bandwidth.
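Not from the workshop, but for the record this is about as hard as the 85% advice is to apply – a minimal Pillow sketch, with made-up file names:

```python
# Hedged sketch of "85% lossy compression": re-encode an image at quality 85.
# File names are placeholders, not anything from the talk.
from PIL import Image

img = Image.open("hero-original.png").convert("RGB")   # JPEG needs RGB
img.save("hero.jpg", "JPEG", quality=85, optimize=True)  # visually near-identical, much smaller
```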

sitespeed.io will crawl your whole site

SpeedCurve is a paid service that uses webpagetest.

Remember webpagetest is open source, you can load it up yourself (“How can we trust your dirty public servers!?!” says a spectator).

Mobile

webpagetest has some mobile agents

httpwatch for iOS


Filed under Conferences, DevOps

Velocity 2013 Day 1 Liveblog – Using Amazon Web Services for MySQL at Scale

Next up is Using Amazon Web Services for MySQL at Scale. I missed the first bit, on RDS vs EC2, because I tried to get into Choose Your Weapon: A Survey For Different Visualizations Of Performance Data but it was packed.

AWS Scaling Options

Aside: use boto
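He didn’t show code, but this is roughly what scripting AWS with boto (the pre-boto3 Python SDK) looks like – a minimal sketch; the region and the “role” tag are my own assumptions, not from the talk:

```python
# Minimal boto sketch: list EC2 instances tagged as MySQL nodes.
# Assumes AWS credentials are available in the environment or an IAM role.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
reservations = conn.get_all_instances(filters={"tag:role": "mysql"})  # hypothetical tag
for res in reservations:
    for inst in res.instances:
        print(inst.id, inst.state, inst.instance_type, inst.private_ip_address)
```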

vertical scaling – tune, add hw.

table level partitioning – smaller indexes, etc., and you can drop partitions instead of deleting rows

functional partitioning (move apps out)

need more reads? add replicas, cache tier, tune ORM

replication lag? see above, plus multiple schemas for parallel replication (5.6/Tungsten). take some stuff out of the db (timestamp updates, queues, non-transactional reads), pre-warm caches, relax durability

writes? above plus sharding

sharding by row range requires frequent rebalancing

hash/modulus based – better distribution but harder to rebalance; prebuilt shards (see the sketch below)

lookup table based
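To make the hash/modulus idea concrete, here’s a hypothetical Python sketch that combines modulus routing with a lookup-table override for rows that have been moved. Shard names, shard count, and the user_id key are all invented for illustration:

```python
# Hypothetical shard routing: modulus over prebuilt shards, with a lookup-table override.
import hashlib

SHARDS = ["shard-%02d" % i for i in range(8)]   # prebuilt shards
OVERRIDES = {}                                   # lookup table: user_id -> shard, for rebalanced rows

def shard_for(user_id):
    if user_id in OVERRIDES:                     # explicit placement wins
        return OVERRIDES[user_id]
    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]               # stable as long as the shard count is fixed

print(shard_for(12345))
```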

Availability

In EC2 you have regions and AZs. AZs are supposed to be “separate” but have some history of going down with each other.

A given region is about 99.2% up historically.

RDS has multi-AZ replica failover

Pure EC2 options:

  • master/replicas – async replication; but data drift, fragile (need rapid rebuild). MySQL MHA for failover, HAProxy (see the Palomino blog)
  • Tungsten – replaces MySQL replication, with a cluster manager. Good stuff.
  • Galera – Galera/XtraDB/MariaDB synchronous replication

Performance

I/O/storage: provisioned IOPS. Also, ephemeral SSD storage to power replicas.

RDS has better network perf, but the block replication affects speed

Instance types – general purpose, CPU-optimized, memory-optimized, storage-optimized. They tend to use memory-optimized and EBS-optimized. Cluster and dedicated instances are also available.

EC2 storage – ephemeral, ephemeral SSD (superfast!), EBS slightly slower, EBS PIOPS faster/consistent/expensive/lower failure rate

Mitigating Failures

Local failures should not be a problem.  AZs, run books, game days, monitoring.

Regional failures – if you have good replication and fast DNS flipping…

You may do master/master but active/active is a myth.

Backups – snap frequently, put some to S3/Glacier for long-term retention. Maybe copy them out of Amazon from time to time to make your auditors happy.

Cost

Remember, you spend money every minute.  There are some tools out there to help with this (and Netflix released Ice today to this end).


Filed under Cloud, Conferences, DevOps

Velocity 2013 Day 1 Liveblog – Bringing the Noise

Next up it’s the Etsy Crew!  A great bunch of guys.  Rembetsy is cutely nervous and proud about his guys presenting. Slides are available here!

And the topic is Bring the Noise: Making Effective Use of a Quarter Million Metrics by @abestanway and @jonlives. Anomaly detection is hard…

At Etsy we want to deploy lots – we have 250 committers, everyone has to deploy code, coder or not. Big “deploy to production” button. 30 deploys/day.

How can we control that kind of pace? Instead of fearing error, we put in the means to detect and recover quickly.

They use ganglia, graphite, and nagios – and they wrote statsd, supergrep, skyline, and oculus as well.

First line of defense – node daemon tailing log files and looking for errors using supergrep.

But not everything throws errors. 😦

So they use statsd to collect zillions of metrics and put them onto dashboards. But dashboards are manually curated “what’s important” – and if you have .25M metrics you just can’t do that.  So the dashboard approach has fallen over here.  And if no one’s watching the graph, why do you have it?

So that’s why Satan invented Nagios, to alert when you can’t look at a graph, but again it breaks down at scale.

Basically you have unknown anomalies and unknown correlations.

They have “kale,” their monitoring stack to try to solve this – skyline solves anomaly detection and oculus solves metrics correlation.

Skyline

A realtime anomaly detection system (where realtime means ~90s). They have a 10s flush on statsd and a 1 min res on ganglia so that’s still fast.

They had to do this in memory and not on disk, using Redis. But how to stream all the metrics in?  They looked around and realized that Graphite’s carbon-relay could be used to fork metrics into Skyline by pretending Skyline is just another backup Graphite destination.

They import from ganglia too via graphite reading its RRDs. Skyline also has other listeners.

To store time-series data in Redis while minimizing I/O and memory… Redis APPEND is constant time.

Tried to store in JSON but that was slow (half the CPU time was decoding JSON).

Found MessagePack, a binary serialization format. Much faster.
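Not their actual code, but the storage idea looks roughly like this in Python – append MessagePack-encoded [timestamp, value] pairs to one Redis key per metric (the key naming is my assumption):

```python
# Rough sketch of the Skyline storage idea described above (not Etsy's code).
import time
import msgpack
import redis

r = redis.Redis()  # assumes a local Redis

def record(metric, value, ts=None):
    packed = msgpack.packb([ts or int(time.time()), value])
    r.append("metric." + metric, packed)   # APPEND is O(1) amortized

def read(metric):
    unpacker = msgpack.Unpacker()
    unpacker.feed(r.get("metric." + metric) or b"")
    return list(unpacker)                  # -> [[ts, value], ...]

record("api.requests", 42)
print(read("api.requests"))
```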

So they keep appending, but had to have a process go through and clean up old data past the defined duration. Hence “roomba.py.” Python because of all the good stats libraries. They just keep 24 hours of operational data.

But so what is an anomaly and how do you detect it?

Skyline uses the consensus model. [Ed. This is a common way of distinguishing sensor faults from process faults in real-world engineering.]

Using statistical process control – a metric is anomalous if its latest datapoint is more than three standard deviations above its moving average.
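A minimal sketch of that three-sigma rule (not Skyline’s real implementation; the window size and sample data are made up):

```python
# Hedged sketch: flag the latest point if it is more than 3 standard deviations
# above the moving average of the trailing window.
import statistics

def is_anomalous(series, window=60):
    if len(series) < window + 1:
        return False
    tail = series[-(window + 1):-1]          # trailing window, excluding the latest point
    mean = statistics.mean(tail)
    stdev = statistics.stdev(tail)
    return stdev > 0 and (series[-1] - mean) > 3 * stdev

print(is_anomalous([10, 12] * 50 + [200]))   # True: 200 is way above the noisy baseline
```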

They also use Grubbs’ test and ordinary least squares… OK, most of the crowd is lost now. Histogram binning.

Problems – seasonality, spike influence (big spike biases average masking smaller spikes), normality (stddev is for normal distributions, and most data isn’t normal), and parameters. They are trying to further their algorithms.

OK, how about correlations?

Oculus does this.  Can we just compare the graphs? Image comparison is expensive and slow. Numerical comparison is not a hard problem.

“Euclidean distance” is the most basic comparison of two time series. Dynamic Time Warping helps with phase shifts in time. But that’s expensive – O(n^2).

So how can we discard “obviously” dissimilar data?  Use a shape description alphabet – “basically flat,” “sharp increment,” etc.  Apply it to the graphs, cluster using Elasticsearch, then run the dynamic time warping algorithm on that smaller sample size to polish the results. But that’s still slow.  Luckily there’s a fast DTW variant that’s O(n).
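Roughly, the two building blocks look like this – plain Euclidean distance plus a crude shape-alphabet fingerprint. This is my own illustrative sketch, not Oculus code; the alphabet words and thresholds are invented:

```python
# Hedged sketch of the ideas above: Euclidean distance between series, and a
# "shape alphabet" fingerprint so obviously dissimilar series can be discarded cheaply.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fingerprint(series, threshold=1.0):
    words = []
    for prev, cur in zip(series, series[1:]):
        delta = cur - prev
        if abs(delta) < threshold:
            words.append("flat")
        elif delta > 0:
            words.append("sharp_up" if delta > 5 * threshold else "up")
        else:
            words.append("sharp_down" if delta < -5 * threshold else "down")
    return " ".join(words)

a = [1, 1, 2, 9, 9, 3]
b = [1, 1, 3, 10, 9, 4]
print(fingerprint(a))    # "flat up sharp_up flat sharp_down"
print(euclidean(a, b))   # small distance: similar shapes
```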

So they do an Elasticsearch phrase query with a high slop against the shape fingerprints.

Elasticsearch is populated from Redis using Resque workers, but that makes it slow to update and search. Solved with a rotating pool of Elasticsearch servers – a new index and a last index. That lets you purge the index and reindex; they cron-rotate every 2 minutes. It takes 25 s to import, but queries take a while and you don’t want to rotate an index out from under one.

A Sinatra frontend queries ES and renders results off the live ES index.
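The slop-heavy phrase query would look something like this with the Python elasticsearch client (older body= style) – the index and field names are my guesses, not Oculus internals:

```python
# Hypothetical sketch of a "phrase query with high slop" over shape fingerprints.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local cluster

def similar_metrics(shape_fingerprint, slop=4):
    body = {
        "query": {
            "match_phrase": {
                "fingerprint": {"query": shape_fingerprint, "slop": slop}
            }
        }
    }
    return es.search(index="metric-fingerprints", body=body)

hits = similar_metrics("flat up sharp_up flat")
```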

Save collections of interesting correlations and then index those, so that later searches match against current data but also old fingerprints.

Devops is the key to us being able to do this. Abe the dev and Jon the ops guy managed to work all this out in a pretty timely manner.

Demo: draw your query! He sketched a waveform and it found matching metrics – nice.


Filed under Conferences, DevOps

Velocity 2013 Day 1 liveblog: Avoiding web performance regression

Avoiding web performance regression

By Marcel Duran (@marcelduran)

Works on the web core team at Twitter.
Check out Flight, Twitter’s front-end component framework.

Problem: After a new release, apps get slower sometimes…

Monitoring is a reactive way to solve performance issues.

Tools used: HTTP Archive (HAR) files for YSlow, YSlow itself, Cuzillion, Fiddler, ShowSlow

HARs can be generated by Fiddler, PhantomJS, YSlow…
Install YSlow locally (it needs Node.js)

CI and CD at Yahoo: crazy amounts of tests, but no performance tests…

PhantomJS is a simple, repeatable way to test web page performance timings, among other things.

Make performance tests a part of your CI process.
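He didn’t show the exact setup, but a CI gate can be as simple as parsing a HAR (for example one produced by PhantomJS’s netsniff.js example) and failing the build if the page blows a budget – a sketch, with the budget and file path as assumptions:

```python
# Hedged sketch of a CI performance gate: fail the build if onLoad exceeds a budget.
import json
import sys

BUDGET_MS = 2000  # made-up budget

def check(har_path):
    with open(har_path) as f:
        har = json.load(f)
    on_load = har["log"]["pages"][0]["pageTimings"]["onLoad"]  # standard HAR field
    print("onLoad: %d ms (budget %d ms)" % (on_load, BUDGET_MS))
    return on_load <= BUDGET_MS

if __name__ == "__main__":
    sys.exit(0 if check(sys.argv[1]) else 1)
```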

Next up: instead of just having perf tests in your CI process, graduate to the next level by measuring custom metrics on each performance run…

Peregrine is a tool used at Twitter, based on WebPageTest.
Peregrine takes code, deploys it to performance boxes, and integrates with WebPageTest to run perf tests.

Peregrine will likely be open sourced soon…


Filed under DevOps

Velocity 2013 Day 1 Liveblog – Monitoring and Observability

I’m in San Jose, California for this year’s Velocity Conference! James, Karthik and I flew in on the same flight last night.  I gave them a ride in my sweet rental minivan – a quick In-n-Out run, then to the hotel where we ended up drinking and chatting with Gene Kim, James Turnbull, Marcus, Rembetsy, and some other Etsyers, and even someone from our client Nordstrom’s.

Check out our coverage of previous Velocity events – Peco and I have been to every single one.

I always take notes but then don’t have time to go back and clean them up and post them all – so this time I’m just going to liveblog and you get what you get!

Theo Schlossnagle of OmniTI, getting back to his roots by rocking a psycho hillbilly hairstyle, kicked off the first workshop of the day on Monitoring and Observability. The slides are on Slideshare.


The talk starts with a bunch of basic term definitions.

  • Observability is about measuring “things” or state changes without altering them too much while observing them.
  • A measurement is a single value from a point in time that you can perform operations upon.

“JSON makes all this worse, being the worst encoding format ever.” JSON lets you describe for example arbitrarily large numbers but the implementations that read/write it are inconsistent.

  • A metric is the thing you are measuring.  Version, cost, # executed, # bugs, whatever.

Basic engineering rule – Never store the “rate” of something.  Collect a measurement/timestamp for a given metric and calculate a rate over time.  Direct measurement of rates generates data loss and ignorance.
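A quick illustration of that rule – store (timestamp, counter) samples and derive the rate when you need it, rather than storing the rate itself (the sample data here is made up):

```python
# Store raw counter samples; compute the rate over each interval on demand.
samples = [(1371600000, 1500), (1371600060, 1740), (1371600120, 2040)]  # (unix ts, cumulative count)

def rates(samples):
    out = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        out.append(((t0 + t1) / 2.0, (v1 - v0) / float(t1 - t0)))  # events/sec over the interval
    return out

print(rates(samples))  # rates of 4.0/s and 5.0/s; the raw samples are never thrown away
```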

  • Measurement velocity is the rate at which new measurements are taken
  • Perspective is from where you’re taking the measurement
  • Trending means understanding the direction/pattern of your measurements on a metric
  • Alerting, durr
  • Anomaly detection is determining that a specific measurement is not within reason

All this is monitoring.  Management is different, we’re just going to talk about observation. Most people suck at monitoring and monitor the wrong things and miss the real important things.

Prefer high level telemetry of business KPIs, team KPIs, staff KPIs… Not to say don’t measure the CPU, but it’s more important to measure “was I at work on time?” not “what’s my engine tach?” That’s not “someone else’s job.”

He wrote reconnoiter (open source) and runs Circonus (service) to try to fix deficiencies.

“Push vs pull” is a dumb question, both have their uses. [Ed. In monitoring, most “X vs Y” debates are stupid because both give you different valid intel.]

Why pull?

  • Synthesized observations are desirable (e.g. a “URL monitor”)
  • Observable activity infrequent
  • Alterations in observation/frequency are useful

Why push?

  • Direct observation is desirable
  • Discrete observed actions are useful (e.g. real user monitoring)
  • Discrete observed actions are frequent

“Polling doesn’t scale” – false. This is the age where Google scrapes every Web site in the world, you can poll 10,000 servers from a small VM just fine.

So many protocols to use…

  • SNMP can push (trap) and pull (query)
  • collectd v5/v5 push only
  • statsd push only
  • JMX, etc etc etc.

Do it RESTy. Use JSON now.  XML is better but now people stop listening to you when you say “XML” – they may be dolts but I got tired of swimming upstream. PUT/POST for push and GET for pull.

nad – Node Agent Daemon, a new open source widget Theo wrote; use this if you’re trying to escape from the SNMP hellhole.  Runs scripts, hands back JSON. Can push or pull. Does SSL. Tiny.
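Since the agent speaks plain HTTP/JSON, the pull side can be this simple – a sketch; the host, port, and flattening scheme are my assumptions, not nad’s documented defaults:

```python
# Hedged sketch of pull-style collection: GET a JSON telemetry endpoint and flatten it.
import requests

def poll(url="http://db01.example.com:2609/"):   # hypothetical agent URL
    doc = requests.get(url, timeout=5).json()
    flat = {}
    def walk(prefix, node):
        if isinstance(node, dict):
            for k, v in node.items():
                walk(prefix + "." + k if prefix else k, v)
        else:
            flat[prefix] = node
    walk("", doc)
    return flat

# for name, value in poll().items(): print(name, value)
```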

But that’s not methodology, it’s technology. Just wanted to get “but how?” out of the way. The more interesting question is “what should I be monitoring?”  You should ask yourself this before, during, and after implementing your software. If you could only monitor one thing, what would it be?  Hint: probably not CPU. Sure, “monitor all the things” but you need to understand what your company does and what you really need to watch.

So let’s take an example of an ecomm site.  You could monitor if customers can buy stuff from your site (probably synthetic) or if they are buying stuff from your site (probably RUM). No one right answer, has to do with velocity.  1 sale/day for $600k per order – synthetic, want to know capability. 10 sales/minute with smooth trends – RUM, want to know velocity.

We have this whole new field of “data science” because most of us don’t do math well.

Tenet: Always synthesize, and additionally observe real data when possible.

Synthesizing a GET with curl gets you all kinds of stuff – code, timings (first byte, full…), SSL info, etc.
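For example, here’s a hedged Python sketch of that synthetic GET using pycurl (curl’s Python binding); the URL is a placeholder:

```python
# Synthetic check sketch: fetch a URL and keep status code, timings, and size.
from io import BytesIO
import pycurl

def synthetic_check(url="https://www.example.com/"):
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.WRITEDATA, buf)
    c.perform()
    result = {
        "code": c.getinfo(c.RESPONSE_CODE),
        "dns_s": c.getinfo(c.NAMELOOKUP_TIME),
        "connect_s": c.getinfo(c.CONNECT_TIME),
        "first_byte_s": c.getinfo(c.STARTTRANSFER_TIME),
        "total_s": c.getinfo(c.TOTAL_TIME),
        "bytes": len(buf.getvalue()),
    }
    c.close()
    return result

print(synthetic_check())
```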

You can curl but you could also use a browser – so try phantomjs. It’s more representative, you see things that block users that curl doesn’t interpret.

Demo of nad to phantomjs running a local check with start and end of load timings.

Passive… Google Analytics, Omniture.  Statsd and Metrics are a mediocre approach here. If you have lots of observable data, the average of N over the last X time is not useful. NO RATES, I TOLD YOU, DON’T MAKE ME STICK YOU! At least add stddev, cardinality, min/max/95th/99th… But these things don’t follow normal distributions, so e.g. stddev is deceptive.  If you take 60k API hits and boil them down to 8 metrics you lose a lot.
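To illustrate the richer summarization he means (my own sketch, with made-up latencies and a naive sort-based percentile calculation):

```python
# Summarize raw observations with more than an average: count, min/max, stddev, p95/p99.
import statistics

def summarize(values):
    s = sorted(values)
    pct = lambda p: s[min(len(s) - 1, int(p / 100.0 * len(s)))]
    return {
        "count": len(s),
        "min": s[0], "max": s[-1],
        "mean": statistics.mean(s),
        "stddev": statistics.stdev(s),
        "p95": pct(95), "p99": pct(99),
    }

latencies_ms = [12, 14, 13, 15, 11, 250, 13, 12, 900, 14]  # note how the mean hides the outliers
print(summarize(latencies_ms))
```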

How do you get more richness out of that data? We use statsd to store all the data and show histograms. Oh look, it’s a 3-mode distribution – who knew?

A heat map of histograms doesn’t take any more space than a line graph of averages and is a billion times more useful.  Can use some tools, or build in R.

Now we’ll talk about dtrace… Stop having to “wonder” if X is true about your software in production right now. “Is that queue backed up? Is my btree imbalanced?” Instrument your software. It’s easy with DTrace but only a bit more work otherwise.

Use case – they wrote a db called Sauna that’s a metrics db. They can just hit and get a big JSON telemetry exposure with all the current info, rollups, etc.

Monitoring everything is good but make sure you get the good stuff first and then don’t alert on things without specific remediation requirements.

Collect once and then split streams – if you collect and alert in Zabbix but graph in graphite it’s just confusing and crappy.

Tenet: Never make an alert without a failure condition in plain English, the business impact of the failure condition, a concise and repeatable remediation procedure, and an escalation path. That doesn’t have to all be “in the alert” but linking to a wiki or whatever is good.

How to get there? Do alerting postmortems. Understand why it alerted, what was done to fix, bring in stakeholders, have the stakeholder speak to the business impact. [Ed. We have super awful alerting right now and this is a good playbook to get started!]

Q: How do you handle alerts/oncall?  Well, the person oncall is on call during the day too, so they handle 24×7. [Ed. We do that too…]

Q: How does your monitoring system identify the root cause of an issue?  That’s BS, it can’t without AI.  Human mind is required for causation.  A monitoring system can show you highly correlated behavior to guide that determination. Statistical data around a window.

Q: How to set thresholds?  We use lots. Some stock, some Holt-Winters, starting into some Markov… Human train on which algorithms are “less crappy.”

Q: Metrics db? We use a commercial one called Snowth that is cool, but others use cassandra successfully.

Q: How much system performance compromise is OK to get the data? I hate sampling because you lose stuff, and dropping 12 bytes into UDP never hurt anyone… Log to the network, transmit everything, then decide later how to store/sample.

Don’t forget to check out his conference, SURGE.


Filed under Conferences, DevOps

Welcome to Velocity 2013!

All the agile admins – James (@wickett), Peco (@bproverb), and Ernest (@ernestmueller) are reunited at the Web performance and operations conference Velocity again this year!  And with us we have newer cool guys, dev extraordinaire Karthik (@iteration1) from Mentor Graphics and operations muscleman Bryan (@bryguy1211) from Bazaarvoice, and some of our old NI colleagues, Eric and Matt.  Then some of us are staying over for DevOpsDays Mountain View.

So buckle in and experience one of the handiest Web/DevOps conferences by proxy! We’re encouraging everyone to liveblog along here on the agile admin.  I always take notes but always run out of time to prettify and post them after so I’m trying liveblogging in hopes of staying caught up. Comment if you are getting value out of it to encourage us to keep it up!

I hear the Velocity marketing stuff is using one of my quotes from the blog, which is cool; it credits the old defunct webadminblog.com, but we’ve moved here now!


Filed under Conferences, DevOps