Tag Archives: DevOps

Velocity 2013 Day 3: DevOps and metrics

We’re talking about the devops survey with Gene Kim, James Turnbull and Jez Humble

4039 survey responses!

Lessons learned
– Don’t change questions midway in a survey
– Get a data analyst for survey analysis

Key findings
– devops teams are more agile; 30x more deployments, 8000x shorter lead times
– devops teams are more reliable;
– most teams use version control- 89%
– most teams use automated code deployments- 82%
– the longer you do devops, the better you get!

Hilarious John Vincent quote on devops (it was posted as an image).


Measuring culture
Trust and verify

26% of folks who responded to the survey were from the enterprise, and 16% were from organizations of 10,000 people or more.
The biggest barrier to devops was culture: people didn’t get it, whether it was your manager, your team, or folks outside the group. Tell people more, and wear more devops shirts!!!

DevOps continues to be a culture issue versus an issue in terms of tools and processes. James Turnbull wants us to go out there and talk to people and figure out people skills!

And join the devops google+ community!


Filed under DevOps

Velocity 2013 Day 3 Liveblog: The Keynotes

Day Three of a week of convention. Convention Humpday. The day it stops being a mini-vacation and you start earning your salary again.

As usual the keynotes are being livestreamed, so this liveblog is perhaps more for those who want the Cliff’s Notes summary later.  Yesterday’s keynotes were certainly compressible, so here we go! Also follow along on the Twitter hashtag #velocityconf to see what people are saying about the show.

Clarification on Keynote’s RUM – they announced it yesterday but I was like “haven’t they been trying to sell this to me for two years?” Apparently they were but it was in beta. And congrats to fast.ly who just got a $10M round of funding!

Winners of the survey follies…  Souders’ book, Release It!/Web Operations, and so on. Favorite co-host: Souders!

“Send us your comments!  Except about the wi-fi!” Actually it’s working OK so far this morning. Show is good, though they added yet another ‘vendor track’ which is unfortunate.  They have a front end dev track, an ops track, and a mobile track.  Last year they added a fourth track – “vendor talks.”  This year there is another fifth track – “more vendor talks.” Boo, let’s make space for real content.

Gamedays On The Obama Campaign

@dylanr on revamping the Obama site 18 mos. before election day.  40 engineers in 7 teams, ~300 repos, 200 deployed products, 3000 servers (AWS), millions of hits a day, a million volunteers, 8000 staff. He had redone Threadless’s site and was on to the next big thing!

Plan, build, execute, get out the vote.

Planning is the not-fast dreamtime. But for the tech folks, it means starting to build the building blocks.

Build is when everyone starts building teams and soliciting $. Then tech builds the apps.

Execute is when everyone starts using it all, more and more of everything. Tech starts getting feedback (been building blind till now).

Get out the vote – final 4-day sprint. For tech, this means scale. A couple orders of magnitude in that span.

Funny picture of a “Don’t Fuck This Up” cake.  [Ed: That was my second standing order for my old WebOps team.  1. Make It Happen, 2. Don’t Fuck Up, 3. There’s the right way, the wrong way, and the standard way.]

They got one shot at this. So how do you do it?

Talk to your stakeholders, but they’ll want every feature ever. Working is better, though: no feature is more important than a working app. So frame the conversation as “if things fail, what still needs to work?” Graceful degradation.

Failure – you can try to prevent it, and you can learn to deal with it. Do some of the former, but don’t delude yourself into skipping the latter. And it’s not just the tech – make the team resilient to failure via practice.

“Game day” 6 weeks pre-election.  Prod-size staging, simulate, break and react. Two week hardening sprint and then on game day had a long agenda of “things to break.”  He lied about the order and timing though.

Devops (aggressors) vs engineers (defenders), organized in campfire, maintaining updated google doc

Learned that there were broken things we thought were fixed, learned what failure really looks like, how things fail, how to fix it

Made runbooks

They had a stressful day, went home, came back in – and databases started failing! AWS failure.  Utilized the runbooks and were good.

Build resilient apps by planning for failure, defining what matters, making your plan clear to stakeholders, and failing down to the things that matter. And build resilient teams – practice failing, learn from it, and use your instruction manual (the runbooks).

Ops School

Check it!  Etsy and O’Reilly video classes on  how to be an ops engineer! Oh. They’ll be for sale, I got excited for a minute thinking it would be a free way to get a whole generation of ops engineers trained from schools etc. that know nothing about ops.  Guess not.  Damn.

W3C Web Performance Working Group Status

Arvind from Google gives us an update on the W3C. The web perf working group is working on navigation timing, user timing, page visibility, resource priorities, client side error logging, and other fundamental standards that will help with web performance measurement and improvement.  Good stuff if very boringly presented, but that’s standards groups for you.

Eliminating Web Site Performance Theft

Neustar tells us the world is online and brand reputation and revenue are at stake.  Quite.

Performance can affect your reputation and revenue!  Quite.

This talk is a great one for vaguely befuddled director level and above types, not the experts at this conference.  Twitter agrees.

Mobitest, the Latest

Akamai has cell nodes and devices for Mobitest, you can become a webpagetest node on your phone!  If you have an unlimited data plan 😛 The app is waiting on release in the App Store. See mobitest.akamai.com for more.

If You Don’t Understand People, You Don’t Understand Ops

Go to techleadershipnews.com and get Kate’s newsletter!

Influence – Gangster style!

How do you earn respect and influence without authority? Even if you’re not a “manager” you need to be able to do this to get things done.

You want people to hear what you have to say – need 3 things.

  • Accountability
  • Everyone is your ally
  • Reciprocity

Accountability – lead by example. Be the person who can get “it” done. Always follow through on commitments; that generates the graph of trust. Treat everyone with respect. Be a reliable person. Be a superstar – always be hustling.

Everyone is an ally – make them your friend. It’s a small world. Who was nice to you? Who made you feel bad?  How about in return? Make every interaction positive.

Reciprocity – all about giving. You get what you give. What is your currency?  What do you have of value with others and how can you share it? How can you improve the lives of other people?

Success is about people. Influence is success.  Yay Kate!

Lightning Demos

httparchive and BigQuery

@igrigorik from Google about httparchive.org which crawls the Web twice a month and keeps stats. It’s all freely available for download – a subset is shown online at the site.

Google built Dremel, which is an interactive ad hoc query system for analysis of read-only nested data. So they combined BigQuery + the HTTP Archive! Go to bigquery.cloud.google.com and it’s in there! Most common JS framework (jQuery, btw)? Median page load times? You can query it all.

In Google Docs you can script and make them interoperate (like send an email when this spreadsheet gets filled in).  Created dynamic query straight to bigquery. Oh look, dynamic graph! bit.ly/ha-query for more!

Patrick Meenan on Webpagetest

You can now do a tcpdump with your test! (Advanced tab). He shows an analysis with cloudshark – wireshark in the cloud! Nice.

Patrick Lightbody from New Relic

Real user monitoring is cool. newrelic.com/platform

Steve Reilly from Riverbed/Aptimize

An application aware infrastructure? We have abstractions for some layers – middleware, compute, storage – but not really for transport.  Software defined networking will be the next “washing” trend. It’s just a transport abstraction. Then we can make the infrastructure a function of the application. “Middle boxes” are now app features – GSLB, WAFs, etc.

Slightly confusing at this point – a lot of abstract words and not enough concrete.  Which is better than a thinly disguised product pitch, so still better than yesterday!

Decentralized decisionmaking… Location is no longer a constraint but a feature.  This makes me think of Facebook’s talk yesterday with Sonar and rewriting DNS/GTM/LB.

Jonathan LeBlanc from Paypal on API Design

Started with SOAP/XML SOA. But then the enlightenment happened and REST made your life less sucky and devs more efficient.

“Sure we support REST!  You can GET and POST!” Boo. And also religious REST principle following, instead of innovation.

Our lessons learned: Lower perceived latency, use HTTP properly, build in automation, offload complexity.

With no details this was very low value.

 


Filed under Conferences

Operations Level Up Storify Notes from @wickett

These are some notes from the Operations Level Up talk at the Velocity 2013 Conference. The Agile Admin crew is out at Velocity Conference this year and live-blogging as we go.


June 18, 2013 · 12:47 pm

Must Read: The Phoenix Project

Have you read the famous systems-management novel The Goal?  No, I know you haven’t, don’t feel bad, I only got to it this year myself.

Well, Gene Kim, entrepreneur, consultant, founder of Tripwire, and general insatiable Tweeter, has written a sequel of sorts with his Visible Ops coauthors Kevin Behr and George Spafford.  Bearing the tongue-twisting title The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win, it’s like The Goal in that it’s written as a novel about the people in a large IT shop and how they’re faced with all the usual soul-crushing BS that we all get faced with, but use Lean and DevOps and pluck and courage to overcome it.

I got to be a pre-reader on large chunks of the book and I like it; it definitely has characters and situations directly torn from your IT department. (Man, the security guy… Just like every security guy…) And I’ve seen these techniques work, so I know it’s not just a wish-fulfillment novel.

If you’re wondering how DevOps can help you because “you live in the real world, man,” this is a good read that’ll give you some ideas along those lines! I’ve seen Gene speak everywhere from AppSec USA to South by Southwest Interactive to DevOpsDays to Velocity… Go read the testimonials from everyone from Cockcroft to Humble to even yours truly and then buy the book!


Filed under DevOps, Security

Awesome Austin Events!

Besides the “big one,” DevOpsDays Austin 2013, there’s a bunch of great events going on in Austin for techies.

The Agile Austin DevOps SIG meets every last Wednesday over lunch at Bazaarvoice; lunch is provided.  This month’s meeting on January 30 is Breaking the Barriers.

The Austin Cloud User Group meets every third Tuesday in the evening at Pervasive; dinner is provided. This month’s meeting is January 15, sponsored by Canonical, and there is a talk on Openstack Quantum, the network virtualization platform.

South by Southwest Interactive is here of course on March 8-12.

Hip security event BSides Austin is on March 21-22.

Data Day Austin is on January 29.

Texas Linux Fest will be May 31 – June 1.

It’s never been a better time to be a techie in Austin!


Filed under Cloud, Conferences, DevOps

AppSec USA 2012 Is Here (in Austin)!

AppSec USA 2012, the big OWASP security convention, is here in Austin this year!  And the agile admin’s own @wickett is coordinating it.

“Why do I care if I’m not a security wonk,” you ask? Well, guess what, the security world is waking up and smelling the coffee – this isn’t like security conventions were even just a couple years ago.  There’s a Cloud track and a Rugged DevOps track.

We have like 20 people going from Bazaarvoice. It’s two days, Thursday and Friday (yes, tomorrow – I don’t know why James didn’t post this earlier, sorry) and just $500. So it’s cheap and low impact.

And who’s speaking?  Well, how about Douglas Crockford, inventor of JSON?  And Gene Kim, author of Visible Ops?  That’s not the usual infosec crowd, is it?  Also Michael Howard from Microsoft, Josh Corman from Akamai, a trio of Twitter engineers, Nick Galbreath (formerly of Etsy), Jason Chan from Netflix, Brendan Eich from Mozilla… This is a star-studded techie event that you want to be at!

I’ll be there and will report in…


Filed under Conferences, DevOps, Security

Speeding Up Releases

Hi all!  My new job’s been affording me few opportunities for blogging, but I’m getting into the groove, so you should see more of me now.

Releasing All The Time!

Continuous integration is the bomb.  We can all generally agree on that.  But my life has become one of halfway steps that I think will be familiar to many of you, and I don’t believe in hiding the real world that’s not all case study perfect out there.  So rather than give you the standard theory-list of “what you should do for nice futuristic DevOps releases,” let me tell you of our march from a 10 week to 2 week to 1 week release tempo at Bazaarvoice.

Biweekly Releases!

I started with BV at the start of February of this year. They said, “Our new release manager!  We’ve been waiting for you!  We adopted agile and then tried to move from our big-bang 10 week release cycle to 2 weeks and it blew up like you wouldn’t believe.  Get us to two week releases.  You’ve got a month. Go!”  The product management team really needed us to be able to roll out features more quickly, do piloting and A/B testing, and generally be way more agile in delivery to the customer and not just in dev-land.

Background – our primary application is for the collection and display of user generated content – for example, ratings and reviews – and a lot of the biggest Internet retailers use our solution for that purpose. The codebase started seven years ago and grew monolithically for much of that time . (“The monolith” was the semi-affectionate code name for the stack when I started, as in “is your app’s code on the monolith?”) The app is running across multiple physical and cloud based datacenters and pushing out billions of hits a day, so there’s a low tolerance window for errors – our end user facing display apps have to have zero downtime for releases, though we can do up to two hours of downtime in a 3-5 AM window for customer administrative systems. Stack is Java, Linux, mySQL, Solr, et al. Extremely complex, just like any app added on to for years.

There had been a SWAT team formed after the semi-disastrous 2 week release that identified the main problems.  Just like everywhere else, the main impediments were:

  • Lack of automation in testing
  • Poor SCM code discipline

Our CTO was very invested in solving the problem, so he supported the solution to #1 – the QA team hired up and got some automation folks in, and the product teams were told they had to stop feature development and do several sprints of writing and automating tests until they could sustain the biweekly cadence.

The solution to #2 had two parts.  One was a feature flagging system so we could launch code “dark.” We had a crack team of devs crank this one out. I won’t belabor it because Facebook etc. love to do DevOps presentations on the approach and its benefits, but it’s true.  Now we release all code dark first, and can enable it for certain clients or other segments.
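
To make the “launch dark” idea concrete, here is a minimal feature flag sketch in Python. The flag names, client IDs, and in-memory flag store are invented for illustration; a real system needs persistence, an admin UI, and auditing.

```python
# Minimal sketch of a dark-launch feature flag check (illustrative only;
# the flag store and names below are hypothetical, not Bazaarvoice's system).
import hashlib

FLAGS = {
    # flag name -> explicit pilot clients plus a percentage rollout
    "new_review_widget": {"clients": {"client_a"}, "percent": 0},
    "solr_reindex_v2":   {"clients": set(),        "percent": 5},
}

def is_enabled(flag, client_id):
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False                      # unknown flags stay dark
    if client_id in cfg["clients"]:
        return True                       # explicit pilot clients get it
    # stable hash so a given client always lands in the same bucket
    bucket = int(hashlib.md5(f"{flag}:{client_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["percent"]

if is_enabled("new_review_widget", "client_a"):
    print("render the new, dark-launched widget")
else:
    print("render the existing widget")
```

The stable hash is what lets you ramp a flag from 1% to 100% of traffic without a given client flapping between old and new behavior.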

Two was process – a new branching process where a single release branch comes off trunk every two weeks several days before release, and changes aren’t allowed to it except to fix issues found in QA, and those are approved and labeled into discrete release candidates. The dev environment gets trunk twice a day, the QA environment gets branch every time a new release candidate is labeled. Full product CIT must be passing to get a release candidate. As always, process steps like this sound like common sense but when you need 100 developers in 10 teams to uptake them immediately, the little issues come out and play.

There were a couple issues we couldn’t fix in the time allotted.  One was that our Solr indexes are Godawful huge.  Like 20 GB huge.  JVM GC tuning is a popular hobby with us. To make changes, reindex, and distribute the indexes in time to perform a zero-downtime deployment, with replication lag nipping at our heels, was a bigger deal.  The other was that our build and deploy pipeline was pretty bad.  All the keywords you want to hear are there – Puppet, TeamCity, Rundeck, svn, noah, maven/Nexus, yum…  But they are inconsistently implemented, embedded in a huge crufty bash script framework and parts have gone largely untended.

The timeframe was extremely aggressive.  I project managed the hell out of it and all the teams were very on board and helpful, and management was very supportive.  I actually got a slight delay, and was grateful for it, because our IPO date came up on the same date when we were supposed to start biweekly releases, and even the extremely ambitious were taken aback by the risk of cocking up the service on that day. We did our first biweekly release on March 6th and then every two weeks thereafter.  We had a couple rough patches, but they were good learning experiences.

For example, as our first biweekly release day approached, tests just weren’t passing. I brought all the dev managers to the go/no-go meeting (another new institution) and asked them, “are we go?” (The release manager role had been set up by upper management as more prescriptive, with the thought I’d be sitting there yelling at them “It’s no-go,” but that’s really not an effective long term strategy).  They all kinda shuffled, and hemmed, and hawed (a lot of pressure from internal stakeholders wanted this release to go out NOW), but then one manager said “No, we’re no go.  It’s just not safe.” Once she said that everyone else got over that initial taboo of saying “no go” and concurred that some of their areas were no go.  The release went out 5 calendar days late but a lot more smoothly than the last release did (44 major issues then, 5 this time).

The next release, though, was the real make-or-break.  On the one hand everyone had a first real pass through the process and so some of the “but I didn’t know I needed to have testing signoff by that day and time” breaking-in static was gone, but on the other hand they’d had 2 months between the previous two releases to test and plan, and this one allowed only two weeks.  It went off with no delay and only 1 issue.

Of course, we had deliberately sandbagged that a little because it coincided with a “test development only” sprint.  But anyone who thinks a complex release in a large scale environment will go smoothly just because you’re deploying code with no functional changes has clearly never been closer than a 10-foot pole to real world Web operations. As we ramped back up on feature development, the process was also becoming more ingrained and testing better, so it went well.

We had one release go bad in May, and when we looked at it we realized a lot of changes weren’t being sufficiently QA’ed.  So what we did was simply add a set of fields to all JIRA tickets for the team to specify who tested the change, and we wrote a script to parse our Subversion commit comments and label JIRA tickets with the appropriate release (trying to get people to actually fill out tickets correctly is a pain and usually doomed to failure, so we made an end run with automation).  So then, as a release came up, a wiki page listed all the tickets in the release, who tested them, and how (automatic, manual, did not test). We actually did this for two releases with paper printouts and physical signoffs to develop the process before we automated it.  This corrected the issue and we ran from then on with very low problem rates. As advertised, releasing fewer changes more frequently allows us to get both a higher throughput of changes and, paradoxically, higher quality with them.
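
For flavor, here is a rough sketch of that commit-comment-to-JIRA automation in Python. The repo URL, JIRA URL, credentials, revision range, and label are all placeholders and the real internal script certainly differed; this just shows the shape of the end run (scrape ticket keys out of svn log, then add a release label via JIRA’s REST API).

```python
# Hypothetical sketch: label JIRA tickets mentioned in svn commit messages.
import re, json, base64, subprocess, urllib.request

REPO = "https://svn.example.com/repo/branches/release-2013-06-20"   # placeholder
JIRA = "https://jira.example.com"                                   # placeholder
AUTH = base64.b64encode(b"bot-user:bot-password").decode()          # placeholder

log_xml = subprocess.run(
    ["svn", "log", "--xml", "-r", "1000:HEAD", REPO],
    capture_output=True, text=True, check=True).stdout

# Collect ticket keys such as PROJ-1234 mentioned in commit messages
tickets = set(re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", log_xml))

for key in sorted(tickets):
    body = json.dumps({"update": {"labels": [{"add": "release-2013-06-20"}]}}).encode()
    req = urllib.request.Request(
        f"{JIRA}/rest/api/2/issue/{key}", data=body, method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {AUTH}"})
    urllib.request.urlopen(req)   # one PUT per ticket; error handling omitted
```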

Weekly Releases!

The process worked great through the summer. In the biweekly release communication and presentations, I had explained we’d be moving to weekly and then to continuous deployment as soon as we could make it happen. Well, the solr index distribution problem took a while – two reorgs kicked it around and it was an ambitious “use bittorrent to distribute the index to all the servers in our various DCs” pretty propellerhead kind of thing that had to happen. It took the summer to get that squared away. In the meantime I also conducted a project internally called “Neverland” to fix some of the most egregious technical debt in our TeamCity and Nexus setup and deployment scripts.

The real testament to the culture change that happened as part of the biweekly release project is that while that project was a “big deal” – I had stakeholders from all over the business, big all hands presentations, project plans out the yin-yang, the entire technical leadership team sweating the details – moving from biweekly to weekly releases was largely a non-event.

The QA team worked in the background leading up to it to push test automation levels up higher. Then we basically just said “Hey, you guys want to release faster don’t you?” “Well sure!” “OK. we’re going weekly in two weeks. Check out the updated process docs.” “All right.” And we did, starting the first release in September.  The Solr index got reindexed and redistributed (and man, it had been a while – it compacted down nicely) and deployment ran great. No change in error rate at all. We’ve been weekly since then, the only change  is when we don’t release during critical change freeze windows around Black Friday/Cyber Monday and other holiday prime times. We think our setup is robust enough that it’s safe to release even then, but, heck, no one’s perfect so it’s probably prudent to pause, and many of our clients are really adamant about holiday change freezes to us and to all their suppliers.

The one concern voiced by engineers about the overhead of the release process was addressed by automating it more and by educating.  For example, the go/no-go meeting was, at times, a little messy. Some of the other teams (especially ones not located in Austin) wouldn’t show up, or test signoffs wouldn’t be ready, and it would turn into delays and running around. The opportunity to do it more quickly actually helped a lot! Whereas the meeting had been 30 minutes if we were lucky when we started, now the meeting is taking 5 minutes, and only longer when someone screws around and doesn’t dial into the Webex on time.

“If it’s painful, do it more often” is a message that some folks still balk at when confronted with, but it is absolutely true.

Now, the path wasn’t easy and I was blessed with a very high caliber of people at Bazaarvoice – Dev, Ops, and QA. Everyone was always very focused on “how do we make this work” or “how do we improve this” with very little of the turf warring, blocking, and politics that I sadly have come to expect in a corporate environment. The mindset is very much “if we come up with a new way that’s better and we all agree on that, we will change to do that thing TOMORROW and not spend months dithering about it,” which is awesome and helped drive these changes through much faster than, honestly, I initially estimated it would take.

Releasing All The Time!

Continuous integration on “the monolith” was a distant myth initially, but now we’re seeing how we can get there and the benefits we’ll reap from doing so. Our main impediments remaining are:

1. CIT not passing.  We don’t have a rule where checkins are blocked if CIT is failing, mainly because there’s a bunch of old legacy tests that are flaky. This often results in release milestones being delayed because CIT isn’t passing and there’s 6 devs’ checkins in the last failing build. Step 1 is fix the flaky tests and step 2 is declare work stoppage when CIT is failing (a minimal commit-gate sketch along those lines follows this list). The senior developers see the wisdom in this so I expect it to go down without much friction. Again, the culture is very much about ruthlessly adopting an innovation if the key players agree it will be beneficial.

2. Builds, CIT, and deployment are slow as molasses in January. Build 1 hour, CIT 40 minutes, deploy 3 hours. Why? Various legacy reasons that give me a headache when I have to listen to them. Basically “that’s how it is now, and complete rewrite is potentially beyond any one person’s ability and definitely would take multiple man-months.” We’re analyzing what to do here. We also have a “staging” environment customers use for integration, and so currently we have to deploy to dev, test, deploy to QA, test, deploy to staging (hitting the downtime window), test, deploy to production (hitting the downtime window), test. So basically 2 days minimum. However, staging is really production and step one is release them at the same time.  There’s a couple “but I can only test this kind of change in staging” items left that basically just require telling someone “Figure out how to test it in QA now.” Going to “always release trunk” will remove the whole branch deployment and separate dev and QA environments. So that’s 2 of 4 deployments removed, but then it’s a matter of figuring out cost vs benefit of smashing down parts of that 4:40. I have one proposal in front of me for chucking all the current deploy infrastructure for a Jenkins-driven one, I need to figure out if it is complete enough…
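
A work stoppage rule is easy to sketch: a tiny gate that refuses checkins while CIT is red. The status URL and JSON shape below are made up; wire it to whatever your CI server (TeamCity, Jenkins, whatever) actually exposes and hang it off a pre-commit hook or your review tooling.

```python
# Sketch of a commit gate that enforces "work stoppage while CIT is failing".
# The endpoint and response format are hypothetical placeholders.
import json, sys, urllib.request

STATUS_URL = "https://ci.example.com/api/build-status/monolith-cit"

def cit_is_green():
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
            return json.load(resp).get("status") == "SUCCESS"
    except (OSError, ValueError):
        return False   # can't reach or parse CI status: fail closed

if __name__ == "__main__":
    if not cit_is_green():
        sys.exit("CIT is failing: work stoppage in effect, fix the build before committing.")
```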

Am I Doing It Wrong?

Chime in in the comments below with questions or if there’s some way I could have cut the Gordian knot better.  I think we’ve moved about as fast as you can given a lot of legacy code and technical debt (and having a lot of other stuff people need to be working on to keep a service up and get out new functionality).   The three step process I used that works, as it does so often, was:

  1. Communicate a clear vision
  2. Drive execution relentlessly
  3. Keep metrics and continually improve

Thanks for reading, and happy releasing!


Filed under DevOps

Velocity 2012 Day Two

After John Allspaw and Steve Souders caper about in fake muscles, we get started with the keynotes.

Building for a Billion Users (Facebook)

Jay Parikh, Facebook, spoke about building for a billion users.  Several folks yesterday warned with phrases like “if you have a billion users and hundreds of engineers then this advice may be on point…”

Today in 30 minutes, Facebook did…

  • 10 TB log data into hadoop
  • Scan 105 TB in Hive
  • 6M photos
  • 160m newsfeed stories
  • 5bn realtime messages
  • 10bn profile pics
  • 108bn mysql queries
  • 3.8 tn cache ops

Principle 1: Focus on Impact

Day one code deploys. They open sourced Phabricator for code review. 30 day coder boot camp – Ops does coder boot camp, then weeks of ops boot camp. Mentorship program.

Principle 2: Move Fast

  • Commits scale with people, but you have to be safe! Perflab does a performance test before every commit with log-replay. Also does checks for slow drift over time.
  • Gatekeeper is the feature flag tool, A/B testing – 500M checks/sec. We have one of these at Bazaarvoice and it’s super helpful.
  • Claspin is a high density heat map viewer for large services
  • fast deployment technique – used shared memory segment to connect cache, so they can swap out binaries on top of it, took weeks of back end deploys down to days/hours
  • Built lots of ops tools – people | tools | process

Random Bits:

  • Be bold
  • They use BGP in the data center
  • Did we mention how cool we are?
  • Capacity engineering isn’t just buying stuff, it’s APM to reduce usage
  • Massive fail from gatekeeper bug.  Had to pull dns to shut the site down
  • Fix more, whine less

Investigating Anomalies, Amazon

John Rauser, Amazon data scientist, on investigating anomalies.  He gave a well received talk on statistics for operations last year.
He used a very long “data in the time of cholera” example.  Watch the video for the long version.
You can’t just look at the summary, look at distributions, then look into the logs themselves.
Look at the extremes and you’ll find things that are broken.
Check your long tail, monitor percentiles long in the tail.

Building Resilient User Experiences

Mike Brittain, Etsy on Building Resilient User Experiences – here’s the video.

Don’t mess up a page just because one of the 14 lame back end services you compose into it is down. Distinguish critical back end service failures and just suppress other stuff when composing a product page. Consider blocking vs non-blocking ajax. Load critical stuff synchronously and noncritical async/not at all if it times out.
Google Apps has those nice “problem, retrying in N seconds” messages (use exponential backoff in your UI!)
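
Here is a minimal sketch of that pattern for a server-side page composer: a tight timeout on a noncritical backend, exponential backoff between retries, and suppression of the module instead of a broken page. The recommendations URL is a placeholder, not anything Etsy actually runs.

```python
# Graceful degradation sketch: short timeout, exponential backoff, and
# "no module" rather than "no page" when a noncritical backend is down.
import time, urllib.request

def fetch_recommendations(url="https://recs.example.com/api/v1/recs", attempts=3):
    delay = 0.5
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:   # tight timeout
                return resp.read()
        except OSError:
            if attempt == attempts - 1:
                return None        # give up; render the page without this module
            time.sleep(delay)      # back off between retries instead of hammering
            delay *= 2
    return None

html_fragment = fetch_recommendations()
if html_fragment is None:
    html_fragment = b""   # suppress the noncritical module; the page still works
```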

Planning for 100% availability is not enough; plan for failure.  Your UI should adapt to failure. That’s usually a joint dev+ops+product call.
Do operability reviews, postmortems. Watch your page views for your error template!  (use an error template)

Speeding up the web using prediction of user activity

Arvind Jain/Dominic Hamon from Google – see the video.

Why things would be so much faster if you just preload the next pages a user might go to.  Sure.  As long as you don’t mind accidentally triggering all kinds of shit.  It’s like cross browser request forgery as a feature!
<link rel=’prerender’> to provide guidance.

Annoyed by this talk, and since I’ve read How Complex Systems Fail already, I spent the rest of the morning plenary time wandering the vendor floor.

It’s All About Telemetry

From the famous and irascible Theo Schlossnagle (@postwait). Here’s the slides.
Monitor what matters! Most new big data problems are created by our own solutions in the first place, and are thus solvable despite their ROI. e.g. logs.

What’s the cost/benefit of your data?
Don’t erode granularity (as with RRD).  It controls your storage but your ability to, say, do YOY black friday compares sucks.
As you zoom in you see obscured major differences and patterns.

There’s a cost/benefit curve for monitoring that goes positive but eventually goes negative. Value = benefit - cost. So you don’t go to the end of the curve, you want the max difference!

Technique 1: Text
Just store changes- be careful not to have too many changes and store them

Technique 2: Numeric
store rollups over 1 minute: min/max/avg/stddev/covar/50/95/99%
store first order derivative and then derivative of that (jerkiness)
db replication – lag AND RATE OF LAG CHANGE
“It’s a lot easier to see change when you’re actually graphing change”
Project numbers out. Graph storage space going down!
With simple numeric data you can do prediction (Holt-Winters) even hacked into RRD
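
A toy version of Technique 2, just to show the shape: per-minute rollups plus a first derivative so you are graphing change rather than level. Pure illustration, not how reconnoiter or RRD actually store things.

```python
# Collapse one minute of raw samples into a rollup, and derive minute-over-minute change.
import statistics

def rollup(samples):
    """samples: numeric values collected during one minute."""
    ordered = sorted(samples)
    pct = lambda p: ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]
    return {
        "min": ordered[0], "max": ordered[-1],
        "avg": statistics.mean(ordered),
        "stddev": statistics.pstdev(ordered),
        "p50": pct(50), "p95": pct(95), "p99": pct(99),
    }

def first_derivative(rollups, key="avg"):
    """Minute-over-minute change of one field (e.g. replication lag -> rate of lag change)."""
    return [b[key] - a[key] for a, b in zip(rollups, rollups[1:])]
```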

Technique 3: Histograms
about 2k for a 1 minute histogram (5b for a single bucket)

Event correlation – change mgmt vs performance, is still human eye driven

Monitor everything.

  • the business – financials, marketing, support! problems, custsat, res time
  • operations! durr
  • system/db/yeah
  • middleware! messaging, apis, etc.

They use reconnoiter, statsd, d3, flot for graphing.

This was one of the best sessions of the day IMO.

RUM For Breakfast

Carlos from Facebook on routing users to the closest datacenter with Doppler. Normal DNS geo-routing depends on resolvers being near the user, etc.
They inject js into some browsers and log packet latency.  Map resolver IP to user IP, datacenter, latency. Then you can cluster users to nearest data centers. They use for planning and analysis – what countries have poor latency, where do we need peering agreements
Akamai said doing it was impractical, so they did it.
Amazon has “latency based routing” but no one knows how it works.
Google as part of their standard SOP has proposed a DNS extension that will never be adopted by enough people to work.
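
The core of that analysis is simple enough to sketch: bucket latency samples by client network and datacenter, then pick the datacenter with the lowest median latency per network. The sample data below is invented, and the real system keys on resolver/prefix with far more smoothing.

```python
# Toy Doppler-style analysis: choose the best datacenter per client network.
from collections import defaultdict
from statistics import median

samples = [
    # (client /24 prefix, datacenter, measured latency in ms) -- invented data
    ("203.0.113.0/24", "us-east", 180), ("203.0.113.0/24", "eu-west", 45),
    ("203.0.113.0/24", "eu-west", 55),  ("198.51.100.0/24", "us-east", 30),
]

by_prefix = defaultdict(lambda: defaultdict(list))
for prefix, dc, ms in samples:
    by_prefix[prefix][dc].append(ms)

for prefix, dcs in by_prefix.items():
    best = min(dcs, key=lambda dc: median(dcs[dc]))
    print(prefix, "->", best, f"(median {median(dcs[best])} ms)")
```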

Looking at log-normal perf data – look at the whole spread but that can be huge, so filter then analyze.
Margin of error – 1.96*sd/sqrt(num), need less than 5% error.
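
Reading the “less than 5% error” target as 5% of the mean, that formula turns into a quick sample-size check (my interpretation, not necessarily exactly what the speaker meant):

```python
# Margin of error = 1.96 * sd / sqrt(n); how many samples until it is under 5% of the mean?
import math

def margin_of_error(sd, n):
    return 1.96 * sd / math.sqrt(n)

def samples_needed(sd, mean, rel_error=0.05):
    return math.ceil((1.96 * sd / (rel_error * mean)) ** 2)

# e.g. page loads with mean 2.0s and sd 1.5s need roughly this many samples:
print(samples_needed(sd=1.5, mean=2.0))   # -> 865
```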

How does performance influence human behavior?
Very fast sessions had high bounce rates (probably errors)
Strong bounce rate to load time correlation, and esp front end speed
Toxicology has the idea of a “median lethal dose”-  our LD50 is where we pass 50% bounce rate
These are: Back end 1.7s, dom load 1.8s, dom interactive 2.75s, front end 3.5s, dom complete 4.75s, load event 5.5s

Rollback, the Impossible Dream

by James Turnbull! Here’s the slides.

Rollback is BS.  You are insane if you rely on it. It’s theoretically possible, if you apply sufficient capital, and all apps are idempotent, resources…

  • Solve for availability, not rollback. Do small iterative changes instead.
  • Accept that failure happens, and prevent it from happening again.
  • Nobody gives a crap whose fault it was.
  • Assumption is the mother of all fuckups.
  • This can’t be upgraded like that because…  challenge it.

Stability Patterns

by Michael Nygard (@mtnygard) – Slides are here.   For more see his pragmatic programmers book Release It!

Failures come in patterns. Here’s some major ones!

Integrations are the #1 risk to stability, from both “in spec” and “out of spec” errors. Integrations are a necessary evil.  To debug you have to peel back the layers of abstraction; you have to diagnose a couple levels lower than the level an error manifests. Larger systems fail faster than small ones.

Chain Reactions – #2!
Failure moves horizontally across tiers, search engines and app servers get overloaded, esp. with connection pools. Look for resource leaks.

Cascading Failure
Failure moves vertically cross tiers. Common in SOA and enterprise services. Contain the damage – decouple higher tiers with timeouts, circuit breakers.

Blocked Threads
All threads blocked = “crash”. Use java.util.concurrent or System.Threading (or ruby/php, don’t do it). Hung request handlers = less capacity and frustrated users. Scrutinize resource pools. Decompile that lame ass third party code to see how it works, if you’re using it you’re responsible for it.

Attacks of Self-Denial
Making your own DoS attack, often via mass unexpected promotions. Open lines of communication with marketers

Unbalanced Capacities
Your environment scaling ratios are different dev to qa to prod. Simulate back end failures or overloading during testing!

Unbounded result sets
Dev/test have smaller data volumes and unreal relationships (what’s the max in prod?). SOA with chatty providers – client can’t trust not being hurt. Don’t trust data producers, put limits in your APIs.

Stability patterns!

Circuit Breaker
Remote call wrapped with a retry loop (always 3!)
Immediate retries are very likely to fail again – TCP fixes packet drops for you man
Makes the user wait longer for their error, and floods the servers (cascading failure)
Count failures (leaky bucket) and stop calling the back end for a cool-off period
Can critical work be queued for later, or rejected, or what?
State of circuit breakers is a good dashboard!
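
A minimal circuit breaker sketch to make the idea concrete; a simple consecutive-failure counter stands in for the leaky bucket here, and real breakers (and the version in Release It!) add half-open probing, per-endpoint state, and metrics.

```python
# Circuit breaker sketch: trip after N failures, refuse calls during a cool-off.
import time

class CircuitBreaker:
    def __init__(self, threshold=5, cooloff=30.0):
        self.threshold, self.cooloff = threshold, cooloff
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooloff:
                raise RuntimeError("circuit open: backend call skipped")
            self.opened_at, self.failures = None, 0    # cool-off over, try again
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()           # trip the breaker
            raise
        self.failures = 0
        return result
```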

Bulkheads
Partition the system and allow partial failure without losing service.
Classes of customer, etc.
If foo and bar are coupled with baz, then hitting baz can bork both. Make baz pools or whatever.
Less efficient resource use but important with shared-service models
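
Sketch of the bulkhead idea with one worker pool per class of customer, so a flood from one class can’t starve the others (pool sizes and class names are made up):

```python
# Bulkheads: partition capacity so one class of traffic can't exhaust it all.
from concurrent.futures import ThreadPoolExecutor

POOLS = {
    "premium":  ThreadPoolExecutor(max_workers=20),
    "standard": ThreadPoolExecutor(max_workers=10),
    "batch":    ThreadPoolExecutor(max_workers=2),
}

def submit(customer_class, fn, *args):
    # Work queues only against that class's own pool (its bulkhead); a backlog
    # of "batch" requests never consumes "premium" workers.
    return POOLS[customer_class].submit(fn, *args)
```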

Test Harness
Real-world failures are hard to create in QA
Integration tests don’t find those out-of-spec errors unless you force them
Service acting like a back end but doing crazy shit – slow, endless, send a binary
Supplement testing methods
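
A toy version of that misbehaving fake backend in Python: it randomly hangs, dribbles bytes one per second, or returns binary garbage, which is exactly the kind of out-of-spec behavior integration tests never exercise.

```python
# Test harness sketch: a deliberately evil fake backend for resilience testing.
import os, random, socket, time

def evil_backend(port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        mode = random.choice(["hang", "slow", "garbage"])
        if mode == "hang":
            time.sleep(300)                      # accept, then say nothing
        elif mode == "slow":
            for byte in b"HTTP/1.1 200 OK\r\n":
                conn.send(bytes([byte]))         # one byte per second
                time.sleep(1)
        else:
            conn.send(os.urandom(4096))          # not even close to HTTP
        conn.close()

if __name__ == "__main__":
    evil_backend()
```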

The Colo or the Cloud?

Ken King/James Sheridan, Yammer

They started in a colo.
Dual network, racks have 4 2U quad and 10 1Us
EC2 provides you: network, cabling, simple load balancing, geo distribution, hardware repair, network and rack diagrams (?)
There are fixed and variable costs, assume 3 year depreciation
AWS gives you 20% off for $2M/year?
Three year commit, reserve instances
one rack one year – $384k colo, $378k ec2, $199k reserved
20 racks one year – $105k colo, $158k ec2 even with reserve and 20% discount
20 racks 3 years – $50k colo, $105k ec2

But – speed (agility) of ramping up…

To me this is like cars.  You build your own car, you know it.  But for arbitrary small numbers of cars, you don’t have time for that crap.  If you run a taxi fleet or something, then you start moving back towards specialized needs and spec/build your own.

Colo benefits – ownership and knowledge.  You know it and control it.

  • Load balancing (loadbalancer.com or zeus/riverbed only cloud options)
  • IDS etc. (appliances)
  • Choose connectivity
  • know your IO, get more RAM, more cores
  • Throw money at vertical scalability
  • Perception of control = security

Cloud benefits – instant. Scaling.

  • Forces good architecture
  • Immediate replacement, no sparing etc.
  • Unlimited storage, snapshots (1 TB volume limit)
  • No long term commitment
  • Provisioning APIs, autoscaling, EU storage, geodist

Hybrid.

  • crocodoc and encoder yammer partners are good to be close to
  • need to burst work
  • dev servers, windows dev, demo servers banished to AWS
  • moving cross connects to ec2 with vpc

Whew!  Man I hope someone’s reading this because it’s a lot of work.  Next, day three and bonus LSPE meeting!


Filed under Conferences, DevOps

Velocity 2012 Day One

Hello all! The Velocity cadre grows as the agile admins spread out.  I’m here with Chris, Larry, and Victor from Bazaarvoice and our new friends Kevin, Bob, and Morgan from Powerreviews which is now Bazaarvoice’s West Coast office; Peco is here with Charlie from Opnet, and James is here… with himself, from Mentor Graphics.  Our old friends from National Instruments Robert, Eric, and Matt are here too. We have quite a Groupme going!

Chris, Peco, James and I were on the same flight, all went well and we ended up at Kabul for a meaty dinner to fortify us for the many iffy breakfasts and lunches to come.  Sadly none of us got into the conference hotel so we were spread across the area.  I’m in the Quality Inn Santa Clara, which is just fine so far (alas, the breakfast is skippable, unlike that place Peco and I always used to stay).

I’m sharing my notes in mildly cleaned up fashion – sorry if it gets incoherent, but this is partially for me and partially for you.

Now it’s time for the first session!  Spoiler alert – it was really, really good and I strongly agree with large swaths of what he has to say.  In retrospect I think this was the best session of Velocity.  It combined high level guidance and tech tips with actionable guidelines. As a result I took an incredible number of notes.  Strap in!

Scaling Typekit: Infrastructure for Startups

by Paul Hammond (@ph) of Typekit, Slides are here: paulhammond.org/2012/startup-infrastructure

Typekit does Web fonts as a service; they were acquired by Adobe early this year. The characteristics of a modern startup are extreme uncertainty and limited money. So this is basically an exercise in effective debt management.

Rule #1 – Don’t run out of money.

Your burn rate is likely # of people on the team * $10k because the people cost is the hugely predominant factor.

Rule #2 – Your time is valuable, Don’t waste it.

He notes the three kinds of startups  – venture funded, bootstrapped, and big company internal.  Sadly he’s not going to talk about big company internal startups, but heck, we did that already at National Instruments so fair enough!  He does say in that case, leverage existing infrastructure unless it’s very bad, then spend effort on making it better instead of focusing on new product ideas.  “Instead of you building a tiny beautiful cloud castle in the corner that gets ignored.” Ouch! The ex-NI’ers look ruefully at each other. Then he discussed startup end states, including acquisition.  Most possible outcomes mean your startup infrastructure will go away at some point. So technical debt is OK, just like normal debt; it’s incurred for agility but like financial must be dealt with promptly.

Look for “excuses” to build the infrastructure you need (business and technical). He cites Small Batch Inc., which did a “How to start a company” conference first thing, forcing incorporation and bank accounts and liability insurance and all that, and then Wikirank, which was not “the product” but an excuse to get everyone working together and learn new tech and run a site as a throwaway before diving into a product. Typekit, in standard Lean Startup fashion, announced in a press release before there was anything to gauge interest, then a funding round, then 6 months later (of 4 people working full time) to get 1.0 out.  Launching a startup is very hard.  Do whatever you can to make it easier.

When they launched their stack was merb/datamapper/resque/mysql/redis/munin/pingdom/chef-solo/ubuntu/slicehost/dynect/edgecast/github/google apps/dropbox/campfire/skype/join.me/every project tracking tool ever.

Now about the tech stack and what worked/didn’t work.

  • Merb is a Web framework like Rails. It got effectively end-of-lifed and merged into Rails 3, and to this day they’re still struggling with the transition. Lesson: You will be stuck with your technology choices for a long time.  Choose wisely.
  • Datamapper – a Ruby ORM. Not as popular as ActiveRecord but still going.  Launched on v0.9.11!  Over the long term. many bugs. A 1.0 version came out but it has unknown changes, so they haven’t ported.  The code that stores your data, you need 100% confidence in.  Upgrading to Activerecord was easier because you could do both in parallel.   Lesson: Keep up with upgrades.  Once you’re a couple behind it’s over.
  • Resque – queueing system for Ruby. They love it. Gearman is also a great choice. Lesson: You need a queue – start with one. Retrofitting makes things much harder.
  • Data: MySQL/Redis (and Elasticsearch)
    • MySQL: You have to trust your database like nothing else. You want battle tested, well understood infrastructure here. And scaling mySQL is a solved problem, just read Cal Henderson’s book.
    • Redis: Redis doesn’t do much, which is why it’s awesome.
    • Elasticsearch: Our search needs are small, and elastic search is easy to use.
    • Lessons from their data tier: Choose your technology on what it does today, not promises of the future. They take a couple half hour downtimes a year for schema upgrades. You don’t need 99.999% availability yet as a startup.  Sure, the Facebook/Yahoo/Google presentations about that are so tempting, but you’re 4 guys, not them.
  • Monitoring
    • Munin – monitoring, graphing, alerting. Now collectd, nagios, and custom code, and they hate it.
    • Pingdom is awesome. It’s the service of last resort.
    • Pagerduty is also awesome. Makes sure you get woken up and you know who does.
    • Papertrail is hosted syslog. “It’s not splunk but it’s good enough for our needs.” “But a syslog server is easy to run.  Why use papertrail?” The tools around it are better than what they have time to build themselves.  Hosted services are usually better and cheaper than what you can do yourself.  If there’s one that does what you need, use it.  If it costs less than $70/month buy without thinking about it, because the AWS instance to run whatever open source thingy you were going to use instead costs that much.
    • #monitoringsucks shout-out!  “I don’t know anyone who’s happy with their monitoring that doesn’t have 3-4 full time engineers working on it.”  However, #monitoringsucks isn’t delivering. Every single little open source doohickey you use is something else to go wrong and something they all need to understand.  Nothing is meeting small startups’ needs.  A lot of the hosting ones too, they charge per metric or per host (or both) and that’s discouraging to a startup.  You want to be capturing and graphing as much as you can.
  • Chef – started with chef-solo and rsync; moved to Chef Hosted in 2011 and have been very happy with it.
  • Ubuntu LTS 10.04.  “I don’t think any startup has ever failed because they picked the wrong Linux distribution.”
  • Slicehost – loved it but then Rackspace shut it down, and the migration sucked – new IPs, hours of downtime. Migrated to Rackspace and EC2. Lots of people are going to bash cloud hosting later at the conference as a waste of money. Counterpoint – “Employees are the biggest cost to a startup.”
  • Start with EC2, period, unless you’re an infra company or totally need super bare metal performance.
  • But – credentials… use IAM to manage them. We use it at BV but it ends up causing a lot of problems too (“So you want your stuff in different IAM accounts to talk to each other like with VPC?  Oh, well, not really supported…”)  Never use the root credentials.
  • Databases in the cloud.  Ephemeral or EBS? Backups? They get a high memory instance, run everything in memory, and then stop worrying about disk IO.  Sha za!  Figure it out later.
  • DynECT – Invisible and fine.
  • Edgecast – cool. CDNs are not created equal, and they have different strengths in regions etc. If you don’t want to hassle with talking to someone on the phone, screw Akamai/Limelight/etc. If you’re not haggling you’re paying too much.  But as a startup, you want click to start, credit card signup. Amazon Cloudfront, Fastly. For Typekit they needed high uptime and high performance as a critical part of the service.  Story time, they had a massive issue with Edgecast as about.me was going live. See Designing for Disaster by Jeff Veen from Velocity Europe. Systems perform in unexpected ways as they grow.  Things have unexpected scaling behavior. Know your escape plan for every infrastructure provider.  That doesn’t have to be “immediate hot backup available,” just a plan.
  • Github – using organizations.
  • Google Apps – yay.  Using Google App Engine for their status page to put it on different infrastructure. They use Stashboard, which we used at NI!

“Buy or build?”

Buy, unless nothing meets your needs.  Then build.  Or if it’s your core business and you’re eating your own dog food.
If it costs more than your annual salary, build it.

A third party provider having an outage is still YOUR problem. Still need a “sorry!” Write your update without naming your service provider.  [You should take responsibility but that seems close to not being transparent to me. -Ed.]  Anyway, buy or build option is “neither” if it’s not needed for the minimum viable product.

You’re not Facebook or Etsy with 100 engineers yet. You don’t need a highly scalable data store.  A half hour outage is OK. You don’t need multi-vendor redundancy, you need a product someone cares about.

Rule #3 – Set up the infrastructure you need.

Rule #4 – Don’t set up infrastructure you don’t need.

Almost every performance problem has been on something they didn’t yet measure.  All their scaling pain points were unexpected.  You can’t plan for everything and the stuff you do plan for may be wasted.

Brain twister: He spent a week to write code to automatically bring up a front end Tomcat server in AWS if one of theirs crashes.  That has never happened in years.  Was that work worth while, does it really meet ROI?

Rule #5 – Don’t make future work for yourself.

There’s a difference between not doing something yet and deliberately setting yourself up for redo.  People talk about “technical debt” but just as in finance, there’s judicious debt and then there’s payday loans. Optimize for change. Every time you grow 10x you’ll need to rewrite. Just make it easy to change.

“You ain’t gonna need it”

Everyone’s startup story:

  1. Find biggest problem
  2. Fix biggest problem
  3. Repeat

The story never reads like:

  1. Up front, plan and build infrastructure based on other companies
  2. Total success!

Minimum Viable Infrastructure for a Startup:

  1. Source control
  2. Configuration management
  3. Servers
  4. Backups
  5. External availability monitoring

So you really could get started with github orgs, rsync/bash, EC2, s3cmd, pingdom, then start improving from there. Well, he’s not really serious you should start that way, he wouldn’t start with rsync again.  But he’s somewhat serious, in that you should really consider the minimum (but good) solution and not get too fancy before you ship.

Watch out for

  • Black swans
  • Vendor lockin
  • Unsupported products
  • Time wasting

Woot! This was a great session, everything from straight dope on specific techs, mistakes made and lessons learned, high level guidance with tangible rules of thumb.

Question and Answer Takeaways:
If you’re going to build, build and open source it to make the ecosystem better
Monitoring – none of them have a decent dashboard. Ganglia, nagios, munin UI sucks.

Intermission

Discussion with Mike Rembetsy and other Etsyans about why JIRA and Confluence are ubiquitously used but people don’t like talking about it.  His theory is that everyone has to hack them so bad that they don’t want to answer 100 questions about “how you made JIRA do that.”

Turning Operational Data Into Gold At Expedia

By Eddie Satterly, previously of Expedia and now with Splunk. This is starting off bad.  I was hoping with Expedia having top billing it was going to be more of a real use case but we’re getting stock splunk vendor pitch.

Eddie Satterly was sr. director of arch at Expedia, now with splunk.  They put 6 TB/day in splunk. Highlights:

  • They built a sdk for cassandra data stores  and archive specific splunks for long term retention to hadoop for batch analysis
  • The big data integration really ramped up the TB/day
  • They do external lookups – geo, ldap, etc.
  • Puppet deploy of the agents/SCCM and gold images
  • A lot of the tealeaf RUM/Omniture Web analytics stuff is being done in splunk now
  • Zenoss integration but moving more to splunk there too
  • Using the file integrity monitoring stuff
  • Custom jobs for unusual volumes and “new errors”

Session was high on generalities; sadly I didn’t really come away with any new insights on splunk from it. Without the sales pitch it could have been a lightning talk.

11 Ways To Hack Puppet For Fun and Productivity

by Luke Kanies. I got here late but all I missed was a puppet overview. Slides on Slideshare.

Examples:
github.com/lak/velocity_2012-Hacking_Puppet
github.com/puppetlabs/puppetlabs-stdlib

  1. Puppet as you.  It doesn’t have to run as root.
  2. Curl speaks.  You can pull catalogs etc. easily, decouple see facts/pull catalog/run catalog/run report.
  3. Data, and lots of it. Catalogs, facts, reports.
  4. Static compiler. Refer to files with checksum instead of URL. And it reduces requests for additional files.
  5. config_version. Find out who made changes in this version.
  6. report processor.
  7. Function
  8. Fact
  9. Types
  10. Providers
  11. Face

Someone’s working on a puppet IDE called geppetto (eclipse based).

I don’t know much puppet yet, so most of this went right by me.

Develop and Test Configuration Management Scripts With Vagrant

By Mitchell Hashimoto from Kiip (@mitchellh). Slides on Speakerdeck.

Sure, you can bring up an ec2 instance and run chef and whatnot, but that gets repetitive. This tempts you to not do incremental systems development, because it takes time and work. So you just “set things up once” and start gathering cruft.

Maybe you have a magic setup script that gets your Macbook all up and running your new killer app. But it’s unlikely, and then it’s not like production.  Requires maintenance, what about small changes… Bah. Or perhaps an uber-readme (read: Confluence wiki page). Naturally prone to intense user error. So, use Vagrant!

We’ll walk through the CLI, VM creation, provisioning, scripted config of vm, network, fs, and setup

Install Virtualbox and Vagrant – All that’s needed are vagrantfile and vagrant CLI
vagrantfile: Per project configuration, ruby DSL
CLI: vagrant <something> e.g “vagrant up”

vagrant box – set up base boxes.  It’s just a single file. “vagrant box add name url”.
Go to vagrantbox.es for more base boxes. They’re big (It’s a vm…)

Project context. “vagrant init <boxtype>” will dump you a file.

“vagrant up” makes a private copy, doesn’t corrupt base box

vagrant up, status, reload, suspend (freeze), halt (shutdown), destroy (delete)

Provides shared folders, NFS to share files host to guest
Shared folder performance degrades with # of files, go to NFS

Provisioning – scripted instal packages, etc.  It supports shell/puppet/chef and soon cfengine.
Use the same scripts as production. vagrant up runs provisioning as part of bringing the box up, but vagrant reload or vagrant provision re-runs it in isolation

Networking – port forwarding, host-only

port forwarding exposes hosts on the guest via ports on the host, even to the outside.
Simple, over 1024 and open
host only makes a private net of VMs and your host. set IPs or even DHCP it. Beware of IP collisions.
bridge – get IPs from a real router. makes them real boxes, though bad networks won’t do it.

multi vm.  Configure multiple VMs in one file and hook ’em up.  In multi mode you can specify a target on each command to not have it do on all

vagrant package “burns a new AMI” off the current system.
package up installed software, use provisioners for config and managing services

Great for developing and testing chef/puppet/etc scripts. Use prod-quality ops scripts to set up dev env’s, QA. It brings you a nice standard workflow.

Roadmap:

  • other virtualization, vmware, ec2, kvm
  • vagrant builder: ami creator
  • any guest OS

End, Day One!

And we’re done with “Tutorial” day!  The distinction between tutorials and other conference sessions is very weak and O’Reilly would do better to just do a three day conference and right-size people’s presentations – some, like the Typekit one, deserve to be this long.  Others should be a normal conference session and some should be a lightning talk.

Then we went to the Ignites and James and I did Ignite slide karaoke where you have to talk to random slides.  Check out the deck, I got slides 43-47 which were a bit of a tough row to hoe. I got to use my signature phrase “keep your pimp hand strong” however.


Filed under Conferences, DevOps

Puppet and Chef do only half the job

Our first guest post on theagileadmin is by Schlomo Schapiro, Systems Architect and Open Source Evangelist at ImmobilienScout24. I met Schlomo and his colleagues at DevOpsDays and they piqued my interest with their YADT deployment tool they’ve open sourced.  Welcome, Schlomo!

“How do you update your system with OS patches” was one of my favourite questions last week at the Velocity Web Performance and Operations Conference in Santa Clara and at the devopsdays in Mountain View. I tried to ask as many people as were willing to talk to me about their deployment solutions, most of whom were using either Puppet or Chef.

The answers I got ranged from “we build a new image” through “somebody else is doing that” to “I don’t know”. So apparently many people see the OS stack differently from their software stack. Personally, I find this very worrying because I strongly believe that one must see the entire stack (OS, software & configuration) as one and also validate it as one.

We see again and again that there is a strong influence from the OS level on the application level. Probably everybody has already had to suffer from NFS and autofs and has seen their share of blocked servers. Or seen how a change in threading behaviour suddenly makes a server behave completely differently under load. While some things like the recent leap second issue are really hard to test, most OS/application interactions can be tested quite well.

For such tests to be really trustworthy, the versions must be the same between test and production. Unfortunately even a very small difference in versions can be devastating. A specific example we recently suffered from is autofs in RHEL5 and RHEL6, which would die on restart up to the most recent patch. It took us a while to find out that the autofs in testing was just that little bit newer than the one in production, enough to actually matter.

If you are using images and not adding any OS patches between image updates, then you are probably on the safe side. If you add patches on top of the image, then you also run a chance that your versions will deviate.

So back to the initial question: If you are using chef, puppet or any other similar tool: How do you manage OS patches? How do you make sure that OS patches and upgrades are tested exactly the same as you test changes in your application and configuration? How do you make sure that you stage them the same? Or use the same rollout process?

For us at ImmobilienScout24 the answer is simple: We treat all changes to our servers exactly the same way without discrimination. The basis for that is that we package all software and configuration into RPM packages and roll them out via YUM channels. In that context it is of course easy to do the same with OS patches and upgrades, they just come from different YUM channels. But the actual change process is exactly the same: Put systems with outstanding RPM updates into maintenance mode, do yum upgrade, start services, run some tests, put systems back into production.
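
That loop is simple enough to sketch. Here is an illustrative Python driver for it; the maintenance-mode hooks, service name, host list, and smoke-test script are placeholders for whatever your environment provides (ImmobilienScout24’s real tooling, YADT, obviously does much more).

```python
# Sketch of the rollout loop described above: out of rotation, yum upgrade
# (OS patches and application/config RPMs alike), restart, test, back in.
import subprocess

def run(host, *cmd):
    subprocess.run(["ssh", host, *cmd], check=True)

def upgrade_host(host):
    run(host, "touch", "/etc/in-maintenance")      # placeholder: drop host from the LB
    run(host, "yum", "-y", "upgrade")              # all YUM channels: OS + app RPMs
    run(host, "service", "myapp", "restart")       # placeholder service name
    run(host, "/usr/local/bin/smoke-test")         # placeholder test script
    run(host, "rm", "-f", "/etc/in-maintenance")   # back into production

for host in ["web01", "web02", "web03"]:           # hypothetical host list
    upgrade_host(host)
```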

I am not saying that everybody should work that way. For us it works well and we think that it is a very simple way how to deal with the configuration management and deployment issue. What I would ask everybody is to ask yourself how you plan to treat all changes in a server the same way so that the same tests and validation can be applied. I believe that using the same tool to manage all changes makes it much simpler to treat all changes equal than using different tools for different layers of the stack. And if only because a single tool makes it much easier to correlate such changes. Maybe it should be explored more how to use puppet and chef to do the entire job and manage the “lower” layers of the system stack as well as the upper layers.

Are you “doing DevOps”? Then maybe you can look at it like this: if you manage all the stuff on a server the same way, it will help you get everybody onto the same page with regard to OS patches. No more “surprise updates” that catch the developers cold, because OS patches are just part of the same stream of updates everyone already sees.

Hopefully at the next Velocity somebody will give a talk about how to simplify operations by treating all changes equal.


Filed under Conferences, DevOps