Velocity 2012 Day Two

After John Allspaw and Steve Souders caper about in fake muscles, we get started with the keynotes.

Building for a Billion Users (Facebook)

Jay Parikh, Facebook, spoke about building for a billion users.  Several folks yesterday warned with phrases like “if you have a billion users and hundreds of engineers then this advice may be on point…”

Today in 30 minutes, Facebook did…

  • 10 TB log data into hadoop
  • Scan 105 TB in Hive
  • 6M photos
  • 160m newsfeed stories
  • 5bn realtime messages
  • 10bn profile pics
  • 108bn mysql queries
  • 3.8 tn cache ops

Principle 1: Focus on Impact

Day one code deploys. They open sourced Phabricator for code review. 30 day coder boot camp – Ops does coder boot camp, then weeks of ops boot camp. Mentorship program.

Principle 2: Move Fast

  • Commits scale with people, but you have to be safe! Perflab does a performance test before every commit with log-replay. Also does checks for slow drift over time.
  • Gatekeeper is the feature flag tool, A/B testing – 500M checks/sec. We have one of these at Bazaarvoice and it’s super helpful.
  • Claspin is a high density heat map viewer for large services
  • fast deployment technique – used shared memory segment to connect cache, so they can swap out binaries on top of it, took weeks of back end deploys down to days/hours
  • Built lots of ops tools – people | tools | process

Random Bits:

  • Be bold
  • They use BGP in the data center
  • Did we mention how cool we are?
  • Capacity engineering isn’t just buying stuff, it’s APM to reduce usage
  • Massive fail from gatekeeper bug.  Had to pull dns to shut the site down
  • Fix more, whine less

Investigating Anomalies, Amazon

John Rauser, Amazon data scientist, on investigating anomalies.  He gave a well received talk on statistics for operations last year.
He used a very long “data in the time of cholera” example.  Watch the video for the long version.
You can’t just look at the summary, look at distributions, then look into the logs themselves.
Look at the extremes and you’ll find things that are broken.
Check your long tail, monitor percentiles long in the tail.

Building Resilient User Experiences

Mike Brittain, Etsy on Building Resilient User Experiences – here’s the video.

Don’t mess up a page just because one of the 14 lame back end services you compose into it is down. Distinguish critical back end service failures and just suppress other stuff when composing a product page. Consider blocking vs non-blocking ajax. Load critical stuff synchronously and noncritical async/not at all if it times out.
Google Apps has those nice "having a problem, retrying in N seconds" messages (use exponential back off in your UI!)
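
The usual shape of that back off is a doubling delay with a cap and some jitter – this is a generic sketch, not anything shown in the talk:

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Exponential backoff with full jitter: the wait before retry n is a
    random value in [0, min(cap, base * 2**n)]. The jitter style is an
    assumption; the talk only says "exponential back off"."""
    for attempt in range(attempts):
        yield random.uniform(0.0, min(cap, base * 2 ** attempt))
```

The jitter matters: if every client retries on the same schedule, they all hit the recovering server at the same instant.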

Planning for 100% availability is not enough; plan for failure.  Your UI should adapt to failure. That’s usually a joint dev+ops+product call.
Do operability reviews, postmortems. Watch your page views for your error template!  (use an error template)

Speeding up the web using prediction of user activity

Arvind Jain/Dominic Hamon from Google – see the video.

Why things would be so much faster if you just preload the next pages a user might go to.  Sure.  As long as you don’t mind accidentally triggering all kinds of shit.  It’s like cross browser request forgery as a feature!
<link rel=’prerender’> to provide guidance.

Annoyed by this talk, and since I’ve read How Complex Systems Fail already, I spent the rest of the morning plenary time wandering the vendor floor.

It’s All About Telemetry

From the famous and irascible Theo Schlossnagle (@postwait). Here’s the slides.
Monitor what matters! Most new big data problems are created by our own solutions in the first place, and are thus solvable despite their ROI. e.g. logs.

What’s the cost/benefit of your data?
Don’t erode granularity (as with RRD).  It controls your storage but your ability to, say, do YOY black friday compares sucks.
As you zoom in you see major differences and patterns that were obscured.

There’s a cost/benefit curve for monitoring that goes positive but eventually goes negative. Value = benefit - cost. So you don’t go to the end of the curve, you want the max difference!

Technique 1: Text
Just store changes – but be careful with values that change so often that storing every change becomes its own data problem

Technique 2: Numeric
store rollups over 1 minute: min/max/avg/stddev/covar/50/95/99%
store first order derivative and then derivative of that (jerkiness)
db replication – lag AND RATE OF LAG CHANGE
“It’s a lot easier to see change when you’re actually graphing change”
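
Here’s roughly what those rollups plus first-derivative graphs look like in code – a sketch of the idea; the bucket math and field names are mine, not Theo’s:

```python
import statistics

def rollup(samples):
    """One-minute rollup of raw samples: keep summary stats, not raw points."""
    s = sorted(samples)
    def pct(p):
        # nearest-rank percentile on the sorted samples
        return s[min(len(s) - 1, p * len(s) // 100)]
    return {
        "min": s[0], "max": s[-1],
        "avg": statistics.mean(s),
        "stddev": statistics.pstdev(s),
        "p50": pct(50), "p95": pct(95), "p99": pct(99),
    }

def derivative(points):
    """First-order derivative between successive rollups: graph the change,
    not just the value (apply it twice for 'jerkiness')."""
    return [b - a for a, b in zip(points, points[1:])]
```

Run `derivative` over replication lag rollups and you get rate of lag change for free.
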
Project numbers out.   Graph storage space going down!
With simple numeric data you can do prediction (Holt-Winters) even hacked into RRD
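
A quick sketch of that kind of prediction – this is Holt’s double exponential smoothing, the non-seasonal core of Holt-Winters, with made-up smoothing parameters:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=5):
    """Double exponential (Holt) smoothing: track a level and a trend,
    then project them forward. Full Holt-Winters adds a seasonal term
    on top of this; alpha/beta here are illustrative defaults."""
    if len(series) < 2:
        raise ValueError("need at least two points to seed a trend")
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (i + 1) * trend for i in range(horizon)]
```

On a steadily-growing metric like disk usage this gives you a days-out estimate of when you hit the wall.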

Technique 3: Histograms
about 2k for a 1 minute histogram (5b for a single bucket)

Event correlation – change mgmt vs performance, is still human eye driven

Monitor everything.

  • the business – financials, marketing, support! problems, custsat, res time
  • operations! durr
  • system/db/yeah
  • middleware! messaging, apis, etc.

They use reconnoiter, statsd, d3, flot for graphing.

This was one of the best sessions of the day IMO.

RUM For Breakfast

Carlos from Facebook on routing users to the closest datacenter with Doppler. Normal DNS geo-routing depends on resolvers being near the user, etc.
They inject js into some browsers and log packet latency.  Map resolver IP to user IP, datacenter, latency. Then you can cluster users to nearest data centers. They use it for planning and analysis – what countries have poor latency, where do we need peering agreements
Akamai said doing it was impractical, so they did it.
Amazon has “latency based routing” but no one knows how it works.
Google as part of their standard SOP has proposed a DNS extension that will never be adopted by enough people to work.
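
The clustering step sounds something like this – a toy version, since Doppler’s internals weren’t shown; the function and field names are mine:

```python
from collections import defaultdict
from statistics import median

def nearest_datacenters(measurements):
    """measurements: (resolver_ip, datacenter, latency_ms) tuples collected
    from the injected JS beacons. Returns resolver -> datacenter with the
    lowest median observed latency, which is the mapping you'd hand to
    your geo-DNS layer. Illustrative shape, not Doppler's actual API."""
    by_key = defaultdict(list)
    for resolver, dc, ms in measurements:
        by_key[(resolver, dc)].append(ms)
    best = {}
    for (resolver, dc), samples in by_key.items():
        m = median(samples)
        if resolver not in best or m < best[resolver][1]:
            best[resolver] = (dc, m)
    return {resolver: dc for resolver, (dc, m) in best.items()}
```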

Looking at log-normal perf data – look at the whole spread but that can be huge, so filter then analyze.
Margin of error – 1.96*sd/sqrt(num), need less than 5% error.
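
That filter is a direct transcription of the formula – the `samples_needed` helper is my addition to show the same math inverted:

```python
import math

def margin_of_error(sd, n):
    """95% confidence half-width for a sample mean: 1.96 * sd / sqrt(n)."""
    return 1.96 * sd / math.sqrt(n)

def samples_needed(sd, mean, max_rel_error=0.05):
    """Smallest sample size whose margin of error stays under
    max_rel_error (5% per the talk) of the mean."""
    return math.ceil((1.96 * sd / (max_rel_error * mean)) ** 2)
```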

How does performance influence human behavior?
Very fast sessions had high bounce rates (probably errors)
Strong bounce rate to load time correlation, and esp front end speed
Toxicology has the idea of a “median lethal dose”-  our LD50 is where we pass 50% bounce rate
These are: Back end 1.7s, dom load 1.8s, dom interactive 2.75s, front end 3.5s, dom complete 4.75s, load event 5.5s

Rollback, the Impossible Dream

by James Turnbull! Here’s the slides.

Rollback is BS.  You are insane if you rely on it. It’s theoretically possible, if you apply sufficient capital, and all apps are idempotent, resources…

  • Solve for availability, not rollback. Do small iterative changes instead.
  • Accept that failure happens, and prevent it from happening again.
  • Nobody gives a crap whose fault it was.
  • Assumption is the mother of all fuckups.
  • This can’t be upgraded like that because…  challenge it.

Stability Patterns

by Michael Nygard (@mtnygard) – Slides are here.   For more see his pragmatic programmers book Release It!

Failures come in patterns. Here’s some major ones!

Integrations are the #1 risk to stability, from both “in spec” and “out of spec” errors. Integrations are a necessary evil.  To debug you have to peel back the layers of abstraction; you have to diagnose a couple levels lower than the level an error manifests. Larger systems fail faster than small ones.

Chain Reactions – #2!
Failure moves horizontally across tiers, search engines and app servers get overloaded, esp. with connection pools. Look for resource leaks.

Cascading Failure
Failure moves vertically cross tiers. Common in SOA and enterprise services. Contain the damage – decouple higher tiers with timeouts, circuit breakers.

Blocked Threads
All threads blocked = “crash”. Use java.util.concurrent or System.Threading (or ruby/php, don’t do it). Hung request handlers = less capacity and frustrated users. Scrutinize resource pools. Decompile that lame ass third party code to see how it works; if you’re using it you’re responsible for it.

Attacks of Self-Denial
Making your own DoS attack, often via mass unexpected promotions. Open lines of communication with marketers

Unbalanced Capacities
Your environment scaling ratios are different dev to qa to prod. Simulate back end failures or overloading during testing!

Unbounded result sets
Dev/test have smaller data volumes and unreal relationships (what’s the max in prod?). SOA with chatty providers – client can’t trust not being hurt. Don’t trust data producers, put limits in your APIs.

Stability patterns!

Circuit Breaker
Remote call wrapped with a retry loop (always 3!)
Immediate retries are very likely to fail again – TCP fixes packet drops for you man
Makes the user wait longer for their error, and floods the servers (cascading failure)
Count failures (leaky bucket) and stop calling the back end for a cool-off period
Can critical work be queued for later, or rejected, or what?
State of circuit breakers is a good dashboard!
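
The pattern boils down to a few lines – a sketch; the threshold, cool-off, and half-open behavior are illustrative choices, not from the talk:

```python
import time

class CircuitBreaker:
    """Count failures; past a threshold, fail fast for a cool-off period
    instead of hammering a struggling back end. After the cool-off, let
    one trial call through (half-open) to probe for recovery."""

    def __init__(self, threshold=5, cooloff=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooloff = cooloff
        self.clock = clock       # injectable for testing
        self.failures = 0
        self.opened_at = None    # None = closed; timestamp = open

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooloff:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip (or re-trip) the breaker
            raise
        self.failures = 0  # success closes the circuit
        return result
```

Expose `opened_at`/`failures` on each breaker and you have exactly the dashboard the talk suggests.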

Bulkheads
Partition the system and allow partial failure without losing service.
Classes of customer, etc.
If foo and bar are coupled with baz, then hitting baz can bork both. Make baz pools or whatever.
Less efficient resource use but important with shared-service models

Test Harness
Real-world failures are hard to create in QA
Integration tests don’t find those out-of-spec errors unless you force them
Service acting like a back end but doing crazy shit – slow, endless, send a binary
Supplement testing methods

The Colo or the Cloud?

Ken King/James Sheridan, Yammer

They started in a colo.
Dual network, racks have 4 2U quad and 10 1Us
EC2 provides you: network, cabling, simple load balancing, geo distribution, hardware repair, network and rack diagrams (?)
There are fixed and variable costs, assume 3 year depreciation
AWS gives you 20% off for 2M/year?
Three year commit, reserve instances
one rack one year – $384k colo, $378k ec2, $199k reserved
20 racks one year – $105k colo, $158k ec2 even with reserve and 20% discount
20 racks 3 years – $50k colo, $105k ec2

But – speed (agility) of ramping up…

To me this is like cars.  You build your own car, you know it.  But for arbitrary small numbers of cars, you don’t have time for that crap.  If you run a taxi fleet or something, then you start moving back towards specialized needs and spec/build your own.

Colo benefits – ownership and knowledge.  You know it and control it.

  • Load balancing (loadbalancer.com or zeus/riverbed only cloud options)
  • IDS etc. (appliances)
  • Choose connectivity
  • know your IO, get more RAM, more cores
  • Throw money at vertical scalability
  • Perception of control = security

Cloud benefits – instant. Scaling.

  • Forces good architecture
  • Immediate replacement, no sparing etc.
  • Unlimited storage, snapshots (1 TB volume limit)
  • No long term commitment
  • Provisioning APIs, autoscaling, EU storage, geodist

Hybrid.

  • crocodoc and encoder yammer partners are good to be close to
  • need to burst work
  • dev servers, windows dev, demo servers banished to AWS
  • moving cross connects to ec2 with vpc

Whew!  Man I hope someone’s reading this because it’s a lot of work.  Next, day three and bonus LSPE meeting!

Filed under Conferences, DevOps

Velocity 2012 Day One

Hello all! The Velocity cadre grows as the agile admins spread out.  I’m here with Chris, Larry, and Victor from Bazaarvoice and our new friends Kevin, Bob, and Morgan from Powerreviews which is now Bazaarvoice’s West Coast office; Peco is here with Charlie from Opnet, and James is here… with himself, from Mentor Graphics.  Our old friends from National Instruments Robert, Eric, and Matt are here too. We have quite a Groupme going!

Chris, Peco, James and I were on the same flight, all went well and we ended up at Kabul for a meaty dinner to fortify us for the many iffy breakfasts and lunches to come.  Sadly none of us got into the conference hotel so we were spread across the area.  I’m in the Quality Inn Santa Clara, which is just fine so far (alas, the breakfast is skippable, unlike that place Peco and I always used to stay).

I’m sharing my notes in mildly cleaned up fashion – sorry if it gets incoherent, but this is partially for me and partially for you.

Now it’s time for the first session!  Spoiler alert – it was really, really good and I strongly agree with large swaths of what he has to say.  In retrospect I think this was the best session of Velocity.  It combined high level guidance and tech tips with actionable guidelines. As a result I took an incredible number of notes.  Strap in!

Scaling Typekit: Infrastructure for Startups

by Paul Hammond (@ph) of Typekit, Slides are here: paulhammond.org/2012/startup-infrastructure

Typekit does Web fonts as a service; they were acquired by Adobe early this year. The characteristics of a modern startup are extreme uncertainty and limited money. So this is basically an exercise in effective debt management.

Rule #1 – Don’t run out of money.

Your burn rate is likely # of people on the team * $10k because the people cost is the hugely predominant factor.

Rule #2 – Your time is valuable, Don’t waste it.

He notes the three kinds of startups  – venture funded, bootstrapped, and big company internal.  Sadly he’s not going to talk about big company internal startups, but heck, we did that already at National Instruments so fair enough!  He does say in that case, leverage existing infrastructure unless it’s very bad, then spend effort on making it better instead of focusing on new product ideas.  “Instead of you building a tiny beautiful cloud castle in the corner that gets ignored.” Ouch! The ex-NI’ers look ruefully at each other. Then he discussed startup end states, including acquisition.  Most possible outcomes mean your startup infrastructure will go away at some point. So technical debt is OK, just like normal debt; it’s incurred for agility but like financial must be dealt with promptly.

Look for “excuses” to build the infrastructure you need (business and technical). He cites Small Batch Inc., which did a “How to start a company” conference first thing, forcing incorporation and bank accounts and liability insurance and all that, and then Wikirank, which was not “the product” but an excuse to get everyone working together and learn new tech and run a site as a throwaway before diving into a product. Typekit, in standard Lean Startup fashion, announced in a press release before there was anything to gauge interest, then a funding round, then 6 months later (of 4 people working full time) to get 1.0 out.  Launching a startup is very hard.  Do whatever you can to make it easier.

When they launched their stack was merb/datamapper/resque/mysql/redis/munin/pingdom/chef-solo/ubuntu/slicehost/dynect/edgecast/github/google apps/dropbox/campfire/skype/join.me/every project tracking tool ever.

Now about the tech stack and what worked/didn’t work.

  • Merb is a Web framework like Rails. It got effectively end of lifed and merged into Rails 3, and to this day they’re still struggling with the transition. Lesson: You will be stuck with your technology choices for a long time.  Choose wisely.
  • Datamapper – a Ruby ORM. Not as popular as ActiveRecord but still going.  Launched on v0.9.11!  Over the long term, many bugs. A 1.0 version came out but it has unknown changes, so they haven’t ported.  The code that stores your data, you need 100% confidence in.  Upgrading to ActiveRecord was easier because you could do both in parallel.   Lesson: Keep up with upgrades.  Once you’re a couple behind it’s over.
  • Resque – queueing system for Ruby. They love it. Gearman is also a great choice. Lesson: You need a queue – start with one. Retrofitting makes things much harder.
  • Data: MySQL/Redis (and Elasticsearch)
    • MySQL: You have to trust your database like nothing else. You want battle tested, well understood infrastructure here. And scaling mySQL is a solved problem, just read Cal Henderson’s book.
    • Redis: Redis doesn’t do much, which is why it’s awesome.
    • Elasticsearch: Our search needs are small, and Elasticsearch is easy to use.
    • Lessons from their data tier: Choose your technology on what it does today, not promises of the future. They take a couple half hour downtimes a year for schema upgrades. You don’t need 99.999% availability yet as a startup.  Sure, the Facebook/Yahoo/Google presentations about that are so tempting but you’re 4 guys, not them.
  • Monitoring
    • Munin – monitoring, graphing, alerting.  Now collectd, nagios, and custom code, and they hate it.
    • Pingdom is awesome. It’s the service of last resort.
    • Pagerduty is also awesome. Makes sure you get woken up and you know who does.
    • Papertrail is hosted syslog. “It’s not splunk but it’s good enough for our needs.” “But a syslog server is easy to run.  Why use papertrail?” The tools around it are better than what they have time to build themselves.  Hosted services are usually better and cheaper than what you can do yourself.  If there’s one that does what you need, use it.  If it costs less than $70/month buy without thinking about it, because the AWS instance to run whatever open source thingy you were going to use instead costs that much.
    • #monitoringsucks shout-out!  “I don’t know anyone who’s happy with their monitoring that doesn’t have 3-4 full time engineers working on it.”  However, #monitoringsucks isn’t delivering. Every single little open source doohickey you use is something else to go wrong and something they all need to understand.  Nothing is meeting small startups’ needs.  A lot of the hosting ones too, they charge per metric or per host (or both) and that’s discouraging to a startup.  You want to be capturing and graphing as much as you can.
  • Chef – started with chef-solo and rsync; moved to Chef Hosted in 2011 and have been very happy with it.
  • Ubuntu LTS 10.04.  “I don’t think any startup has ever failed because they picked the wrong Linux distribution.”
  • Slicehost – loved it but then Rackspace shut it down, and the migration sucked – new IPs, hours of downtime. Migrated to Rackspace and EC2. Lots of people are going to bash cloud hosting later at the conference as a waste of money. Counterpoint – “Employees are the biggest cost to a startup.”
  • Start with EC2, period, unless you’re an infra company or totally need super bare metal performance.
  • But – credentials… use IAM to manage them. We use it at BV but it ends up causing a lot of problems too (“So you want your stuff in different IAM accounts to talk to each other like with VPC?  Oh, well, not really supported…”)  Never use the root credentials.
  • Databases in the cloud.  Ephemeral or EBS? Backups? They get a high memory instance, run everything in memory, and then stop worrying about disk IO.  Sha za!  Figure it out later.
  • DynECT – Invisible and fine.
  • Edgecast – cool. CDNs are not created equal, and they have different strengths in regions etc. If you don’t want to hassle with talking to someone on the phone, screw Akamai/Limelight/etc. If you’re not haggling you’re paying too much.  But as a startup, you want click to start, credit card signup. Amazon Cloudfront, Fastly. For Typekit they needed high uptime and high performance as a critical part of the service.  Story time, they had a massive issue with Edgecast as about.me was going live. See Designing for Disaster by Jeff Veen from Velocity Europe. Systems perform in unexpected ways as they grow.  Things have unexpected scaling behavior. Know your escape plan for every infrastructure provider.  That doesn’t have to be “immediate hot backup available,” just a plan.
  • Github – using organizations.
  • Google Apps – yay.  Using Google App Engine for their status page to put it on different infrastructure. They use Stashboard, which we used at NI!

“Buy or build?”

Buy, unless nothing meets your needs.  Then build.  Or if it’s your core business and you’re eating your own dog food.
If it costs more than your annual salary, build it.

A third party provider having an outage is still YOUR problem. Still need a “sorry!” Write your update without naming your service provider.  [You should take responsibility but that seems close to not being transparent to me. -Ed.]  Anyway, buy or build option is “neither” if it’s not needed for the minimum viable product.

You’re not Facebook or Etsy with 100 engineers yet. You don’t need a highly scalable data store.  A half hour outage is OK. You don’t need multi-vendor redundancy, you need a product someone cares about.

Rule #3 – Set up the infrastructure you need.

Rule #4 – Don’t set up infrastructure you don’t need.

Almost every performance problem has been on something they didn’t yet measure.  All their scaling pain points were unexpected.  You can’t plan for everything and the stuff you do plan for may be wasted.

Brain twister: He spent a week writing code to automatically bring up a front end Tomcat server in AWS if one of theirs crashes.  That has never happened in years.  Was that work worthwhile? Does it really meet ROI?

Rule #5 – Don’t make future work for yourself.

There’s a difference between not doing something yet and deliberately setting yourself up for redo.  People talk about “technical debt” but just as in finance, there’s judicious debt and then there’s payday loans. Optimize for change. Every time you grow 10x you’ll need to rewrite. Just make it easy to change.

“You ain’t gonna need it”

Everyone’s startup story:

  1. Find biggest problem
  2. Fix biggest problem
  3. Repeat

The story never reads like:

  1. Up front, plan and build infrastructure based on other companies
  2. Total success!

Minimum Viable Infrastructure for a Startup:

  1. Source control
  2. Configuration management
  3. Servers
  4. Backups
  5. External availability monitoring

So you really could get started with github orgs, rsync/bash, EC2, s3cmd, pingdom, then start improving from there. Well, he’s not really serious you should start that way, he wouldn’t start with rsync again.  But he’s somewhat serious, in that you should really consider the minimum (but good) solution and not get too fancy before you ship.

Watch out for

  • Black swans
  • Vendor lockin
  • Unsupported products
  • Time wasting

Woot! This was a great session, everything from straight dope on specific techs, mistakes made and lessons learned, high level guidance with tangible rules of thumb.

Question and Answer Takeaways:
If you’re going to build, build and open source it to make the ecosystem better
Monitoring – none of them have a decent dashboard. Ganglia, nagios, munin UI sucks.

Intermission

Discussion with Mike Rembetsy and other Etsyans about why JIRA and Confluence are ubiquitously used but people don’t like talking about it.  His theory is that everyone has to hack them so bad that they don’t want to answer 100 questions about “how you made JIRA do that.”

Turning Operational Data Into Gold At Expedia

By Eddie Satterly, previously of Expedia and now with Splunk. This is starting off badly.  I was hoping with Expedia having top billing it was going to be more of a real use case, but we’re getting a stock splunk vendor pitch.

Eddie Satterly was sr. director of arch at Expedia, now with splunk.  They put 6 TB/day in splunk. Highlights:

  • They built a sdk for cassandra data stores  and archive specific splunks for long term retention to hadoop for batch analysis
  • The big data integration really ramped up the TB/day
  • They do external lookups – geo, ldap, etc.
  • Puppet deploy of the agents/SCCM and gold images
  • A lot of the tealeaf RUM/Omniture Web analytics stuff is being done in splunk now
  • Zenoss integration but moving more to splunk there too
  • Using the file integrity monitoring stuff
  • Custom jobs for unusual volumes and “new errors”

Session was high on generalities; sadly I didn’t really come away with any new insights on splunk from it. Without the sales pitch it could have been a lightning talk.

11 Ways To Hack Puppet For Fun and Productivity

by Luke Kanies. I got here late but all I missed was a puppet overview. Slides on Slideshare.

Examples:
github.com/lak/velocity_2012-Hacking_Puppet
github.com/puppetlabs/puppetlabs-stdlib

  1. Puppet as you.  It doesn’t have to run as root.
  2. Curl speaks.  You can pull catalogs etc. easily, decouple see facts/pull catalog/run catalog/run report.
  3. Data, and lots of it. Catalogs, facts, reports.
  4. Static compiler. Refer to files with checksum instead of URL. And it reduces requests for additional files.
  5. config_version. Find out who made changes in this version.
  6. report processor.
  7. Function
  8. Fact
  9. Types
  10. Providers
  11. Face

Someone’s working on a puppet IDE called geppetto (eclipse based).

I don’t know much puppet yet, so most of this went right by me.

Develop and Test Configuration Management Scripts With Vagrant

By Mitchell Hashimoto from Kiip (@mitchellh). Slides on Speakerdeck.

Sure, you can bring up an ec2 instance and run chef and whatnot, but that gets repetitive. This tempts you to not do incremental systems development, because it takes time and work. So you just “set things up once” and start gathering cruft.

Maybe you have a magic setup script that gets your Macbook all up and running your new killer app. But it’s unlikely, and then it’s not like production.  Requires maintenance, what about small changes… Bah. Or perhaps an uber-readme (read: Confluence wiki page). Naturally prone to intense user error. So, use Vagrant!

We’ll walk through the CLI, VM creation, provisioning, scripted config of vm, network, fs, and setup

Install Virtualbox and Vagrant – All that’s needed are vagrantfile and vagrant CLI
vagrantfile: Per project configuration, ruby DSL
CLI: vagrant <something> e.g “vagrant up”

vagrant box – set up base boxes.  It’s just a single file. “vagrant box add name url”.
Go to vagrantbox.es for more base boxes. They’re big (It’s a vm…)

Project context. “vagrant init <boxtype>” will dump you a file.

“vagrant up” makes a private copy, doesn’t corrupt base box

vagrant up, status, reload, suspend (freeze), halt (shutdown), destroy (delete)

Provides shared folders, NFS to share files host to guest
Shared folder performance degrades with # of files, go to NFS

Provisioning – scripted install of packages, etc.  It supports shell/puppet/chef and soon cfengine.
Use the same scripts as production. vagrant up provisions as part of bringing the box up, but vagrant reload or vagrant provision re-runs it in isolation

Networking – port forwarding, host-only

port forwarding exposes hosts on the guest via ports on the host, even to the outside.
Simple, over 1024 and open
host only makes a private net of VMs and your host. set IPs or even DHCP it. Beware of IP collisions.
bridge – get IPs from a real router. makes them real boxes, though bad networks won’t do it.
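
Putting the pieces above together, a Vagrantfile of that (2012, Vagrant 1.0-era) vintage looks something like this – box name, IP, and paths are placeholders:

```ruby
# Vagrantfile – per-project configuration, Ruby DSL (Vagrant 1.0-style syntax)
Vagrant::Config.run do |config|
  config.vm.box = "lucid64"                       # base box added with `vagrant box add`
  config.vm.forward_port 80, 8080                 # guest port 80 -> host port 8080
  config.vm.network :hostonly, "192.168.33.10"    # private host-only network
  config.vm.share_folder "app", "/srv/app", ".", :nfs => true  # NFS for many files

  # Provision with the same Puppet manifests production uses
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "default.pp"
  end
end
```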

multi vm.  Configure multiple VMs in one file and hook ’em up.  In multi mode you can specify a target on each command to not have it do on all

vagrant package “burns a new AMI” off the current system.
package up installed software, use provisioners for config and managing services

Great for developing and testing chef/puppet/etc scripts. Use prod-quality ops scripts to set up dev env’s, QA. It brings you a nice standard workflow.

Roadmap:

  • other virtualization, vmware, ec2, kvm
  • vagrant builder: ami creator
  • any guest OS

End, Day One!

And we’re done with “Tutorial” day!  The distinction between tutorials and other conference sessions is very weak and O’Reilly would do better to just do a three day conference and right-size people’s presentations – some, like the Typekit one, deserve to be this long.  Others should be a normal conference session and some should be a lightning talk.

Then we went to the Ignites and James and I did Ignite slide karaoke where you have to talk to random slides.  Check out the deck, I got slides 43-47 which were a bit of a tough row to hoe. I got to use my signature phrase “keep your pimp hand strong” however.

Filed under Conferences, DevOps

Puppet and Chef do only half the job

Our first guest post on theagileadmin is by Schlomo Schapiro, Systems Architect and Open Source Evangelist at ImmobilienScout24. I met Schlomo and his colleagues at DevOpsDays and they piqued my interest with their YADT deployment tool they’ve open sourced.  Welcome, Schlomo!

“How do you update your system with OS patches” was one of my favourite questions last week at the Velocity Web Performance and Operations Conference in Santa Clara and at the devopsdays in Mountain View. I tried to ask as many people as would be ready to talk to me about their deployment solutions, most of whom were using either Puppet or Chef.

The answers I got ranged from “we build a new image” through “somebody else is doing that” to “I don’t know”. So apparently many people see the OS stack as separate from their software stack. Personally, I find this very worrying because I strongly believe that one must see the entire stack (OS, software & configuration) as one and also validate it as one.

We see again and again that there is a strong influence from the OS level on the application level. Probably everybody has already had to suffer from NFS and autofs and seen their share of blocked servers. Or seen how a changed behaviour in threading suddenly makes a server behave completely differently under load. While some things like the recent leap second issue are really hard to test, most OS/application interactions can be tested quite well.

For such tests to be really trustworthy the versions must be the same between test and production. Unfortunately even a very small difference in versions can be devastating. A specific example we recently suffered from is autofs in RHEL5 and RHEL6, which would die on restart up to the most recent patch. It took us a while to find out that the autofs in testing was indeed just that little bit newer than in production, enough to actually matter.

If you are using images and not adding any OS patches between image updates, then you are probably on the safe side. If you add patches on top of the image, then you also run a chance that your versions will deviate.

So back to the initial question: If you are using chef, puppet or any other similar tool: How do you manage OS patches? How do you make sure that OS patches and upgrades are tested exactly the same as you test changes in your application and configuration? How do you make sure that you stage them the same? Or use the same rollout process?

For us at ImmobilienScout24 the answer is simple: We treat all changes to our servers exactly the same way without discrimination. The basis for that is that we package all software and configuration into RPM packages and roll them out via YUM channels. In that context it is of course easy to do the same with OS patches and upgrades, they just come from different YUM channels. But the actual change process is exactly the same: Put systems with outstanding RPM updates into maintenance mode, do yum upgrade, start services, run some tests, put systems back into production.

I am not saying that everybody should work that way. For us it works well and we think that it is a very simple way how to deal with the configuration management and deployment issue. What I would ask everybody is to ask yourself how you plan to treat all changes in a server the same way so that the same tests and validation can be applied. I believe that using the same tool to manage all changes makes it much simpler to treat all changes equal than using different tools for different layers of the stack. And if only because a single tool makes it much easier to correlate such changes. Maybe it should be explored more how to use puppet and chef to do the entire job and manage the “lower” layers of the system stack as well as the upper layers.

Are you “doing DevOps”? Then maybe you can look at it like this: If you manage all the stuff on a server the same way it will help you to get everybody onto the same page with regard to OS patches. No more “surprise updates” that catch the developers cold because they are part of all the updates.

Hopefully at the next Velocity somebody will give a talk about how to simplify operations by treating all changes equal.

4 Comments

Filed under Conferences, DevOps

Want To Write For The Agile Admin?

At Velocity and DevOpsDays this week I had a couple people ask about contributing articles to the blog – yes, we’d love to have anyone write on cloud, DevOps, etc. Many folks don’t want to maintain a whole blog themselves (or do and want to cross-promote it, I guess!) so we’re happy to have you here, just email me!

2 Comments

Filed under General

DevOps Illustrated

Naturally, Ops is the Predator and Dev is Ahnuld. We Ops folks are individually fearsome and have cool technology. But the Devs have the numbers on us, and despite speaking somewhat incoherently, win in the end.

1 Comment

Filed under DevOps

DevOps: It’s Not Chef And Puppet

There’s a discussion on the devops Google group about how people are increasingly defining DevOps as “chef and/or puppet users” and that all DevOps is, is using one of these tools.  This is both incorrect and ignorant.

Chef and puppet are individual tools that you can use to implement specific parts of an overall DevOps strategy if you want to use them – but that’s it.  They are fine tools, but do not “solve DevOps” for you, nor are they even the only correct thing to do within their provisioning niche. (One recent poster got raked over the coals for insisting that he wanted to do things a different way than use these tools…)

This confusion isn’t unexpected; people often don’t want to think too deeply about a new idea and just want the silver bullet. Let’s take a relevant analogy.

Agile.  I see a lot of people implement Scrum blindly with no real understanding of anything in the Agile Manifesto, and then religiously defend every one of Scrum’s default implementation details regardless of their suitability to their environment.  Though that’s better than those who half-ass it even more and just say “we’ve gone to our devs coding in sprints now, we must be agile, woot!” Of course they didn’t set up any of the guard rails, and use it as an excuse to eliminate architecture, design, and project planning, and are confused when colossal failure results.

DevOps.  “You must use chef or puppet, that is DevOps, now let’s spend the rest of our time fighting over which is better!” That’s really equivalent to the lowest level of sophistication there in Agile.  It’s human nature, there are people that can’t/don’t want to engage in higher order thought about problems, they want to grab and go.  I kinda wish we could come up with a little bit more of a playbook, like Scrum is for Agile, where at least someone who doesn’t like to think will have a little more guidance about what a bundle of best practices *might* look like, at least it gives hints about what to do outside the world of yum repos.  Maybe Gene Kim/John Willis/etc’s new DevOps Cookbook coming out soon(?) will help with that.

My own personal stab at “What is DevOps” tries to divide up principles, methods, and practices and uses agile as the analogy to show how you have to treat it.  Understand the principles, establish a method, choose practices.  If you start by grabbing a single practice, it might make something better – but it also might make something worse.

Back at NI, the UNIX group established cfengine management of our Web boxes. But they didn’t want us, the Web Ops group, to use it, on the grounds that it would be work and a hassle. But then if we installed software that needed, say, an init script (or really anything outside of /opt) they would freak out because their lovely configurations were out of sync and yell at us. Our response was of course “these servers are here to, you know, run software, not just happily hum along in silence.” Automation tools can make things much worse, not just better.

At this week’s Agile Austin DevOps SIG, we had ~30 folks doing openspaces, and I saw some of this.  “I have this problem.”  “You can do that in chef or puppet!” “Really?”  “Well… I mean, you could implement it yourself… Kinda using data bags or something…” “So when you say I can implement it in chef, you’re saying that in the same sense as ‘I could implement it in Java?'”  “Uh… yeah.” “Thanks kid, you’ve been a big help.”

If someone has a systems problem, and you say that the answer to that problem is “chef” or “puppet,” you understand neither the problem nor the tools. It’s “when you have a hammer, everything looks like a nail – and beyond that, every part of a construction job should be solved by nailing shit together.”

We also do need to circle back up and do a better job of defining DevOps.  We backed off that early on, and as a result we have people as notable as Adrian Cockroft saying “What’s DevOps?  I see a bunch of conflicting blog posts, whatever, I’ll coin my own *Ops term.” That’s on us for not getting our act together. I have yet to see a good concise DevOps definition that is unique if you remove the word DevOps and insert something else (“DevOps helps you bring value to software!  DevOps is about delivering a service to a customer!” s/DevOps/100 other things/).

At DevOpsDays, some folks contended that some folks “get” DevOps and others don’t and we should leave them to their shame and just do our work. But I feel like we have some responsibility to the industry in general, so it’s not just “the elite people doing the elite things.” But everyone agreed the tool focus is getting too much – John Willis even proposed a moratorium on chef/puppet/tooling talks at DevOpsDays Mountain View because people are deviating from the real point of DevOps in favor of the knickknacks too much.

7 Comments

Filed under DevOps

Monitorin’ Ain’t Easy

The DevOps space has been aglow with discussion about monitoring.  Monitoring, much like pimping, is not easy. Everyone does it, but few do it well.

Luckily, here on the agile admin we are known for keeping our pimp hands strong, especially when it comes to monitoring. So let’s talk about monitoring, and how to use it to keep your systems in line and giving you your money!

In November, I posted about why your monitoring is lying to you. It turns out that this is part of a DevOps-wide frenzy of attention to monitoring.

John Vincent (@lusis) started it with his Why Monitoring Sucks blog post, which has morphed into a monitoringsucks github project to catalog monitoring tools and needs. Mainstream monitoring hasn’t changed in oh like 10 years but the world of computing certainly has, so there’s a gap appearing. This refrain was quickly picked up by others (Monitoring Sucks. Do Something About It.) There was a Monitoring Sucks panel at SCALE last week and there’s even a #monitoringsucks hashtag.

Patrick Debois (@patrickdebois) has helped step into the gap with his series of “Monitoring Wonderland” articles where he’s rounding up all kinds of tools. Check them out…

However it just shows how fragmented and confusing the space is. It also focuses almost completely on the open source side – I love open source and all but sometimes you have to pay for something. Though the “big ol’ suite” approach from the HP/IBM/CA lot makes me scream and flee, there’s definitely options worth paying for.

Today we had a local DevOps meetup here in Austin where we discussed monitoring. It showed how fragmented the current state is.  And we met some other folks from a “real” engineering company like NI, and it brought to mind how crap IT-type monitoring is compared to engineering monitoring in terms of sophistication.  IT monitoring usually has better interfaces and alerting, but IT monitoring products are very proud when they have “line graphs!” or the holy grail, “histograms!” Engineering monitoring systems have algorithms that can figure out the difference between a real problem and a monitoring problem.  They apply advanced algorithms to incoming metrics (hint: signal processing).  When is anyone in the IT world who’s all delirious about how cool “metrics” are going to figure out some math above the community college level?
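For flavor, here is the kind of thing that goes one step beyond a line graph – a toy exponentially weighted moving average detector that only flags a point when it deviates far from the smoothed baseline. The weights and thresholds are made up for illustration; real engineering monitoring uses far more sophisticated signal processing than this sketch.

```python
def ewma_anomalies(series, alpha=0.3, threshold=3.0, floor=1.0):
    """Flag indexes whose value deviates from an exponentially weighted
    moving average by more than `threshold` times the smoothed deviation
    (with `floor` as a minimum deviation, so a flat baseline still works)."""
    anomalies = []
    mean = float(series[0])
    dev = 0.0
    for i, x in enumerate(series[1:], start=1):
        delta = abs(x - mean)
        if delta > threshold * max(dev, floor):
            anomalies.append(i)
        else:
            # Only fold well-behaved points into the baseline, so one
            # spike doesn't drag the average toward itself.
            mean = alpha * x + (1 - alpha) * mean
            dev = alpha * delta + (1 - alpha) * dev
    return anomalies
```

Even this toy distinguishes “the metric jumped relative to its own recent history” from “the absolute value crossed some hardcoded line,” which is more than many alerting setups do.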

To me, the biggest gap in the space, especially in cloud land – partially being addressed by New Relic and Boundary – is agent based real user monitoring.  I want to know each user and incoming/outgoing transaction, not at the “tcpdump” level but at the meaningful level.  And I don’t want to have to count on the app to log it – besides the fact that devs are notoriously shitful loggers, there are so many cases where something goes wrong; if tomcat’s down, it’s not logging, but requests are still coming in…  Synthetic monitoring and app metrics are good but they tend to not answer most of the really hard questions we get with cloud apps.

We did a big APM (application performance management) tool eval at NI, and got a good idea of the strengths and weaknesses of the many approaches. You end up wanting many/all of them really. Pulling box metrics via SNMP or agents, hitting URLs via synthetic monitors locally or across the Internet, passive network based real user monitoring, deep dive metric gathering (Opnet/AppDynamics/New Relic/etc.)…  We’ll post more about our thoughts on all these (especially Peco, who led that eval and is now working for an APM company!).

Your thoughts on monitoring?  Hit me!

Leave a comment

Filed under DevOps

Service Delivery Best Practices?

So… DevOps.  DevOps vs NoOps has been making the rounds lately. At Bazaarvoice we are spawning a bunch of decentralized teams not using that nasty centralized ops team, but wanting to do it all themselves.  This led me to contemplate how to express the things that Operations does in a way that turns them into developer requirements.

Because to be honest a lot of our communication in the Ops world is incestuous.  If one were to point a developer (or worse, a product manager) at the venerable infrastructures.org site, they’d immediately poop themselves and flee.  It’s a laundry list of crap for neckbeards, but spends no time on “why do you want this?” “What value will this bring to your customer?”

So for the “NoOps” crowd who want to do ops themselves – what’s an effective way to state these needs?  I took previous work – ideas from the pretty comprehensive but waterfally “Systems Development Framework” Peco and I did for NI, Chris from Bazaarvoice’s existing DevOps work – and here’s a cut.  I’d love feedback. The goal is to have a small discrete set of areas (5-7), with a small discrete set (5-7) of “most important” items – most importantly, stated in a way so that people understand that these aren’t “annoying things some ops guy wants me to do instead of writing 3l33t code” but instead stated as they should be, customer facing requirements like any other.  They could then be each broken into subsidiary more specific user stories (probably a whole lot of them, to be honest) but these could stand in as “epics” or “pillars” for making your Web thing work right.

Service Delivery Best Practices

Availability

  • I will make my service highly available, so that customers can use it constantly without issues
  • I will know about any problems with my service’s delivery to customers
  • I can restore my service quickly from any disaster or loss
  • I can make my data resistant to corruption from runtime issues
  • I will test my service under routine disruption to make it rugged and understand its failure modes

Performance

  • I know how all parts of the product perform and can measure performance against a customer SLA
  • I can run my application and its data globally, to serve our global customer base
  • I will test my application’s performance and I know my app’s limitations before it reaches them

Support

  • I will make my application supportable by the entire organization now and in the future
  • I know what every user hitting my service did
  • I know who has access to my code and data and release mechanism and what they did
  • I can account for all my servers, apps, users, and data
  • I can determine the root cause of issues with my service quickly
  • I can predict the costs and requirements of my service ahead of time enough that it is never a limiter
  • I will understand the communication needs of customers and the organization and will make them aware of all upgrades and downtime
  • I can handle incidents large and small with my services effectively

Security

  • I will make every service secure
  • I understand Web application vulnerabilities and will design and code my services to not be subject to them
  • I will test my services to be able to prove to customers that they are secure
  • I will make my data appropriately private and secure against theft and tampering
  • I will understand the requirements of security auditors and expectations of customers regarding the security of our services

Deployment

  • I can get code to production quickly and safely
  • I can deploy code in the middle of the day with no downtime or customer impact
  • I can pilot functionality to limited sets of users
  • I will make it easy for other teams to develop apps integrated with mine

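That “pilot functionality to limited sets of users” item is the same idea as Facebook’s Gatekeeper, mentioned in the keynote. A toy version (names and the bucketing scheme are illustrative, not any particular product’s implementation) is just deterministic user bucketing:

```python
import hashlib

def in_pilot(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place each user in a 0-99 bucket per feature,
    and enable the feature for the lowest `percent` buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

Because the bucket is a hash of user and feature, a given user gets a stable answer across requests and servers, so ramping a pilot from 1% to 10% to 100% only ever adds users, never flickers them in and out.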
Also, to capture that everyone starting out can’t and shouldn’t do all of this… we struggled over whether these were “Goals” or “Best Practices” or what. On the one hand you should only put in e.g. “as much security as you need,” but on the other hand there’s value in understanding the optimal condition.

2 Comments

Filed under DevOps

Breaking the Silence – Agile Admin Updates!

Well I know it’s been quiet around here lately.  Two of the agile admins, myself and Peco, have switched jobs and have been crazy busy.

Peco has moved to Opnet as a SE – he always loved their APM tool Panorama and we got a lot of mileage out of it at NI. Notably, this included getting some of our suppliers, like Vignette, to buy it and use it on their products to find the performance problems before we did… We were having Vignette V7 performance issues and used Panorama to identify exactly what they were, and when we fed it back to Vignette engineering they were like “how the hell did you do that…” Opnet’s never had great marketing, they’ve been doing what New Relic and AppDynamics have been doing for a long time but aren’t as much of a recognized name. It’s an interesting move for Peco, he went from long time Ops guy to a Java dev, working on system automation like PIE, and now an SE because he wants trigger time on customer interaction.  He’s a renaissance man!  And working on his own stealth startup on the side of course.

I have moved to Bazaarvoice, a SaaS startup in Austin (well, are you still a startup when you’re up to 400 people?) to be their Manager of Release Engineering.  They have been increasing their dev and ops forces by a very large margin (P.S. We’re hiring!) and wanted someone to run the “middle third” of their DevOps teams.  There’s one team of DevOps embedded into all the product teams, kind of a matrixed organization; then there’s my team to do the build and deploy automation and cloud and data center stuff; then there’s a support team to wrangle the pages and tickets.

I was there for a week doing new hire training when they said “So we have moved to agile doing two week sprints, and that’s going OK except we realized we can’t release every two weeks safely. Get that going.  We start biweekly releases in three weeks. Zero customer impact each time!” Therefore I’ve been in a little bit of a frenzy doing that. It was definitely on the fine line between “I like a challenge” and “Oh shit.” I lucked out and got a week of slack because we decided to IPO on the date that the first release was supposed to go, and we decided that doing both on the same day was just a wee bit too ambitious. The first biweekly release slipped from a Thursday till the next Tuesday and only had two minor issues that needed fixing after, and the next one is coming up!  We should have it tamped down to regular in a sprint or two and I hope to return to more regular blogging.

On the side of course I continue to help run the Austin Cloud User Group and the Agile Austin DevOps SIG and put together DevOpsDays Austin and do other random stuff, so sadly when the overcommittedness hits the blog has to give.

James is doing great, and just had a new baby!  So he’s been out of pocket some too. He’s still at NI, and now large and in charge of all operations for their SaaS products. They are fighting the brave fight to get PIE open sourced and out the door.

We are all going to SXSW Interactive this weekend and will regroup and decide how we’re going to bring you all the latest in hot cloud/DevOps/tech stuff going forward!

1 Comment

Filed under General

Monitoring Sucks but Alerting is Beautiful

I (@wickett) work as the Cloud Ops Team Lead at National Instruments where we have several Software as a Service products that we have built on different cloud providers (AWS, Azure, Google) and have implemented with a host of other supporting SaaS tools (cloudkick, ZenDesk, AlertSite and PagerDuty plus several others).  When building out our products we decided to eat our own dog food and use SaaS solutions as much as possible.  Great, but where am I going with all this?

No matter what tools you are using to monitor or log on your systems, you need a reliable way to get actionable events to your Ops Team.  If you have implemented several types of monitors, you generally are setting up an email address for them to send alerts to.  Then you write scripts to forward those to on call devices or forwarding rules to turn them into SMS–not exactly state of the art.  Try mixing in a global on call rotation and trying to configure all of your monitoring tools to account for that and it becomes a big problem.

Enter PagerDuty–by far this is the best SaaS product we use in our day-to-day Operations team.  PagerDuty is an alerting tool that is simple, easy-to-use and integrates into your other systems.   Why is PagerDuty so awesome?  Well, I am glad you asked.

  1. User defined escalation.  Once an alert gets sent into PagerDuty, it is processed through our escalation pathway.  It determines who is the first level of support and begins to alert that person.  Here is the cool part: I can choose to be alerted however I want and other ops team members can choose however they want.  For me, I get an email after 1 minute, SMS alerts at 3 minute intervals for the next 9 minutes, then phone calls every 5 minutes for the next 20 minutes.  Let’s say you are a hard sleeper; then you might want to skip the SMS and move straight to phone calls.  If I don’t acknowledge the alert in 30 minutes, it will get escalated to the next ops team member.
  2. Alert Acknowledgement.  From any one of the alert mechanisms above, there is an in-kind way to acknowledge and resolve the alert.  I can reply to the SMS message with an ACK code or when I get a call from the PagerDuty version of Siri I can select a response right on my keypad at that time.  No time is lost logging into PagerDuty to acknowledge the alert and the ops team can just get busy responding.
  3. Equality of alerts.  This is a subtle one.  We have a policy on our team that all alerts are equal and need to be handled with the same care and diligence.  Anything that makes it to PagerDuty is treated with the same level of importance and is escalated through the same channel–no “you can just ignore those alerts from that system over there” syndrome on my team.  All alerts are escalated and all must be handled.
  4. API integration.  PagerDuty lets you integrate with tools you already use (e.g. nagios, zenoss, cloudkick, splunk) and those tools can open and close alerts as they are detected and/or resolved.
  5. Email integration. Even if you have created some code or monitor that you want to alert from that doesn’t integrate with the API then all you need to be able to do is send an email.  Once that email is received, PagerDuty will treat it as an alert.
  6. Global 24/7 tools.  PagerDuty works in lots of countries and has scheduling that allows for follow-the-sun ops teams to thrive.  I am on call every day for 8 hours and after my shift ends one of my other global ops team members is on call (@einsamsoldat and @hafizramly) for the next 8 hours–at which point I am bumped to the second tier escalation path.  Most tools miss this and for our team this is a huge benefit.

I had initially thought I would just write a quick paragraph or two about why using PagerDuty for alerting is great for your devops team.  But, I am such a big fan that I couldn’t resist coming up with more reasons why we love PagerDuty on our team.  The biggest reason of all is that as an Ops team manager, I can sleep at night knowing that all alerts will get handled and I won’t be woken up with a phone call from a VP or marketing person telling me that our cloud products are down – well, at least not because of a failure to handle alerts.

I would encourage you to stop using subpar alerting mechanisms in monitoring and logging tools that don’t treat alerting as a first class citizen.  Those tools weren’t created to be awesome at alerting; they were meant to be detective in nature.  PagerDuty is made for alerting and definitely deserves a spot in your devops toolchain.


Leave a comment

Filed under DevOps