The Agile Admin’s very own Peco Karayanev (@bproverb) gave this talk at Velocity this year. Learn you some monitoring theory!
Well, it was my first Velocity (I’ve been to every one, 2008 to present; you can read the previous reports here on the blog) as a vendor! So that was different, and I split time between working the Copperegg booth and going to sessions. As a result I’m not going to do the extensive session-by-session notes I’ve done in the past. Two other Agile Admins, James and Karthik, were there; I’m hoping they do some writeups of sessions they attended too!
Being a vendor was interesting; though standing at the booth made my dogs bark after the day was over, it was great to be able to talk to so many people. There were a lot of monitoring providers at the show (Copperegg (us), Compuware, New Relic, Datadog, many more). Pingdom was right across from us, with a slate of guys shipped in from Sweden, but they were generally grumpy – jet lag or their recent acquisition, perhaps. A new log management SaaS provider was there, logentries.com, and that was interesting – Sumo is the only real player left in the space since Loggly and SplunkStorm borked it up, and they’ve been getting a little… “Enterprise-y?” By that I mean having sales reps call you 5x/day and wanting near-Splunk prices. So yay to the newcomers, competition is always good. Other than that, it was mostly the same slate of Velocity vendors as usual.
Well, let’s get it out of the way – there wasn’t all that much new this year. Karthik complained to me that “last year, Velocity was my favorite conference ever, and this year I didn’t get much out of it.” Not every year hosts a bunch of new techniques, sadly, but I thought there were some gems in there. Here are the four major new trends taking up speech-space:
Docker docker docker containers containers containers. Learn it now, because in a year everything will be in containers – no, seriously. It’s the largest splash in computing since Amazon AWS. The hype is a little overexcited at times, but there’s a lot of new development going on here. On the one hand, not everyone needs new-box spinup in 5 seconds instead of 5 minutes, and the efficiency gains are a tradeoff against security. But to be blunt, people stopped well short of exercising the elasticity and ephemerality of cloud/virtualization solutions, instead going for the more comfortable “let’s deploy a three-tier app manually like we did back in the day, but in the cloud.” Containers will be a disruption that pushes forward the concept of dynamic service orchestration and the like, which is good.
There is starting to be buzz around Internet of Things. Mark Burgess (CFEngine, author of “In Search Of Certainty”) did a presentation on IoT and a more distributed model of monitoring and computation. Worth looking at, and it’s becoming more a part of mainstream computing (“engineering” tech and “IT” tech split off from each other 15 years ago for whatever reason and are just now joining forces again). Since we Agile Admins all had worked at National Instruments and had tried to get them onto the IoT bandwagon like 5 years ago, we grumped among each other about this.
There’s also strong interest in software defined networking (OpenDaylight, Cumulus). John Willis (@botchagalupe) waxed poetic on the topic and it fit into the general push towards making everything programmable.
There was strong and sustained interest (presentations, etc.) on STEM education and specifically on women in tech/getting more women into tech.
Video of these should be publicly available so you can watch them.
Jeff Dean of Google did a very interesting talk on making large scale services low latency that I recommend everyone view (video is at the link). Shared environments increase utilization but also congestion, which is exacerbated by large fan-out systems – if each service has only a 1% chance of one-second latency, and you have to touch 100 services to finish your call, then 63% of calls take more than a second. Traditional latency reduction uses techniques like differentiated service classes, breaking up large requests, and managing background activity (rate limiting, waiting until low load). Tolerating faults is a lot like tolerating variability – extra resources make your system reliable – so do the same with variability, but on a much shorter timeframe. There are two ways to do that…
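To make the fan-out math concrete, here’s a quick back-of-the-envelope check (my own illustration, not code from the talk):

```python
# Probability that a fan-out call is slow, given per-service tail latency.
# Each backend independently has a 1% chance of taking over a second;
# the overall call is slow if *any* of the services it touches is slow.
p_slow_service = 0.01

for fanout in (1, 10, 100):
    p_slow_call = 1 - (1 - p_slow_service) ** fanout
    print(f"fan-out {fanout:3d}: {p_slow_call:.0%} of calls exceed one second")

# A fan-out of 100 gives ~63%, the number from the talk.
```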
And I did one! Just a 5 minute spot since Copperegg was a platinum sponsor; I talked about applying a Lean approach to implementing monitoring. It was called A 5 Minute Checklist For Application Monitoring and slides/video are at the link. I also wrote a white paper to expand on it that’s available for download here.
I went to a number of sessions that I enjoyed; here’s a quick breakdown of the ones I thought were winners. I’ll try to find slides and link them where they exist. O’Reilly charges for the videos though.
Vladimir Vuskan’s workshop on ganglia. People like the gathering of mass metrics. They did rake him over the coals a bit on the 15s time resolution and the relatively primitive RRDTool graphs. He had some interesting bits like a “check that a value is the same everywhere” alert for consistency. He also summed up “why we monitor” well – MTTD, MTTR, trending, learning.
Theo Schlossnagle’s presentation on Understanding Slowness. He recommended a system map as step 1 – high level box and line but low level with all versions, locations, and service connections. He also talked about going to histograms but less sophisticated users find those hard to understand, so displaying quantiles can be a happy medium. He sees three different tool spaces: observational, synthetic, and manipulation.
There was a good presentation by Dan Slimmon (video of the same talk from Monitorama) on the math around false alarms, using the “sensitivity” and “specificity” terms from medicine. Here’s a quick reference on those and how you calculate a positive predictive value. Undetected outages are embarrassing, so the response is to narrow the monitoring thresholds, but this just generates more false alerts, aka “pagerrhea.” This segued into a discussion of using better means to detect deviation – hysteresis, moving thresholds like Holt-Winters, cross-correlation of metrics, Fourier transforms. You should alert on whether work is getting done – not on CPU or swap but on HTTP response time and requests per second. He wants “something like nagios but that separates detection from diagnosis.”
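Because real outages are rare, even a very accurate check produces mostly false pages. Here’s the positive predictive value calculation with illustrative numbers of my own, not figures from the talk:

```python
# Positive predictive value of an alert: of the pages that fire, how many
# are real problems? Sensitivity/specificity/base rate are assumed values.
sensitivity = 0.99   # P(alert fires | real outage)
specificity = 0.95   # P(no alert    | no outage)
base_rate   = 0.001  # fraction of check intervals with a real outage

true_pos  = sensitivity * base_rate
false_pos = (1 - specificity) * (1 - base_rate)
ppv = true_pos / (true_pos + false_pos)

print(f"PPV: {ppv:.1%}")  # ~1.9% -- almost every page is a false alarm
```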
I also really appreciated the LinkedIn talk on technical debt. They admitted that several years ago, they were trying to keep up in the social world and just ground to a halt because of accumulated technical debt. They had to stop and take a bunch of time to fix it before they could move forward. Important takeaways included:
The last really good one was about confirmation bias and monitoring. When dealing with metrics there are a lot of cognitive illusions – the anchoring effect (whatever it was recently before it deviated must have been right), the validity effect (a couple of people told me that, so it must be true), illusory correlation (looks like those happened around the same time), attitude polarization (round up the usual suspects). The way to combat this is with analysis. Rethink your data flow, validate your stats. Use anomaly detection like the open-sourced Skyline and Oculus to really detect correlations and deviations.
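For flavor, here’s a toy moving-threshold deviation check – my own sketch, far simpler than what Skyline or Oculus actually do, but it shows the shape of the idea:

```python
# Flag points that stray more than k standard deviations from a rolling mean.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(series, window=30, k=3.0):
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > k * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: a steady request rate with one spike.
data = [100.0 + (i % 5) for i in range(60)] + [500.0] + [100.0] * 10
print(detect_anomalies(data))  # [(60, 500.0)]
```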
Though there weren’t as many breakthroughs this year, I appreciated the incremental uptick in wisdom about how to use what we have!
Much of the benefit of conferences isn’t the sessions; it’s the great people you meet and share experiences with. Once you’ve been going a couple of years, you get to see old friends. Sadly, none of our compatriots from Agile Admin alumni companies (National Instruments, Bazaarvoice, PowerReviews) were there, but we did get to see most of the “usual suspects” from these shows – we had the usual “hang out at the Hyatt bar” fiesta with Andrew Schafer, John Willis, Ben Rockwood, Cameron Haight and Jonah Kowall from Gartner, Gene Kim, and many more. Notable in his absence was Patrick Debois, who remained in Belgium; we all missed him.
If you went to Velocity this year, chime in below (especially if we met you there!).
All right, I’m here in sunny San Jose for Velocity, the three-day Web operations and performance conference. It’s my first time attending as a sponsor type, which is interesting. We have a whole cadre going; I flew in with Jenny and Lauren from Copperegg as part of the advance squad. Because I just got in on this gig recently, I am out at the Avatar while they’re at the Hilton nearby. On the cab ride, they got a bit agitated over a tweet claiming we’re being exclusionary with our “The Dude” promos; I guess I can see the misunderstanding, but it’s a Big Lebowski theme specifically cooked up by the women in our Marketing department.
Some IHOP breakfast, a long walk from the Avatar to the convention center, and then speaker check-in, where I got to chat with Mandy Walls, Vladimir Vuskan, and Andrew “Clay” Shafer. Apparently there’s a two-person limit on booth setup, so I don’t have to help with that. I’ll go report on Andrew’s talk, though I’ll have to duck out early for speaker orientation for my talk.
Remember, if you can’t make it they’ll be streaming the morning keynotes on Wed/Thurs. If you are here, grab me and say “Hi!”
Three of the four agile admins (James, Karthik, and myself) will be out at Velocity and DevOpsDays this week. Say hi if you see us!
James will be doing a workshop with Gareth Rushgrove on Tuesday 9-10:30 AM, “Battle-tested Code without the Battle – Security Testing and Continuous Integration.” Get hands on with gauntlt and other tools! [Conference site] [Lanyrd]
Ernest is doing a 5 minute sponsor keynote on Thursday, “A 5 Minute Checklist for Application Monitoring.” OK, so it’s during the USA vs Germany game – come see me anyway! I hate keynote sales pitches so I’m not doing one, I’ll be talking about a Lean approach to monitoring and stuff to cover in your MVP. There’s a free white paper too since what can you really say in 5 minutes? And so you know what to expect, the hashtag you’ll want to use is #getprobed! [Conference site] [Lanyrd]
Whew, we’re all finally back home from the conferencing. Fun was had by all.
Over the next week I’ll go back to the liveblog articles and put in links to slides/videos where I can find them (feel free and post ones you know in comments on the appropriate post!). We’ll also try to sum up the best takeaways into a Velocity 2013 and DevOpsDays Silicon Valley 2013 quick guide, for those without the patience to read the extended dance remix.
Srinivas Peri, Adobe and Alex Honor, SimplifyOPS/DTO
Adobe needed to move from desktop, packaged software to a cloud services model and needed a DevOps transformation as well.
Srini’s CoreTech Tools/Infrastructure group tries to transform wasted time to value time (enabling tools).
So they started talking SaaS and Srini went around talking to them about tooling.
Dan Neff came to Adobe from Facebook as an operations guru. He said “let’s stop talking about tools” and showed Srini the “10+ Deploys a Day” at Flickr preso. Time to go to Velocity! And there Srini met Alex and Damon of DTO and learned about loosely coupled toolchains.
They generated CDOT, a service delivery platform. Some teams started using it, then they bought Typekit and Paul Hammond thought it was just lovely.
And now all Adobe software is coming through the cloud. They are now the CoreTech Solution Engineering team – the group that makes enabling services.
Do something next week! And don’t reinvent the wheel.
First problem to solve. There are islands of tools – CM, package, build, orchestration, package repos, source repos. Different teams, different philosophies.
And actually, probably in each business unit, you have another instantiation of all of the above.
CDOT – their service delivery platform, the 30k foot view
Many different app architectures and many data center providers (cloud and trad). CDOT bridges the gap.
CDOT has a UI and API service atop an integration layer. It uses jenkins, rundeck, chef, zabbix, and splunk under the covers.
On the code side – what is that? App code, app config, and verification code. But also operations code! It is part of YOUR product. It’s an input to CDOT.
So build (CI): it takes code from perforce/github through pk/jenkins, puts artifacts into moddav/nexus, and for cloud stuff bakes an AMI, promoting packages to S3 and AMIs to an AMI repo.
For deploy (CD), jenkins calls rundeck and chef server. Rundeck instantiates the cloudformation or whatever and does high-level orchestration, the AMIs pull chef recipes and packages from S3, and chef does the local orchestration. Is it pull or push? Both/either. You can bake and you can fry.
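Here’s a toy, runnable sketch of that flow as I understood it – every name in it is a hypothetical placeholder, not a real CDOT or vendor API; the point is just the shape (Jenkins → Rundeck → CloudFormation → Chef):

```python
# Hypothetical illustration of the described deploy flow, not actual CDOT code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Build:
    ami_id: str                                        # image baked during CI
    run_list: List[str] = field(default_factory=list)  # chef recipes to "fry"

def rundeck_instantiate_stack(env: str) -> List[str]:
    """High-level orchestration: stand up the stack (e.g. via CloudFormation)."""
    print(f"[rundeck] creating stack for {env}")
    return [f"{env}-web-{i}" for i in range(2)]

def boot_from_ami(node: str, ami_id: str) -> None:
    """'Bake' side: the node boots from a pre-built AMI."""
    print(f"[ec2] {node} booting from {ami_id}")

def run_chef_client(node: str, run_list: List[str]) -> None:
    """'Fry' side: chef pulls recipes/packages and does local orchestration."""
    print(f"[chef] {node} converging run_list={run_list}")

def deploy(build: Build, env: str) -> None:
    for node in rundeck_instantiate_stack(env):
        boot_from_ami(node, build.ami_id)
        run_chef_client(node, build.run_list)

deploy(Build(ami_id="ami-1234abcd", run_list=["recipe[base]", "recipe[app]"]), "staging")
```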
So feature branches – some people don’t need to CD to prod, but they sure do to somewhere. So devs can mess with feature branches on dev boxes, but then all master checkins CD to a CD environment. You can choose how often to go to prod.
Have a cool “devops workbench” UI with the deployment pipeline and state. So everyone has one-click self service deployment with no manual steps, with high confidence.
Now, CDOT video! It’s not really for us, it’s their internal marketing video to get teams to uptake CDOT. Getting people on board is most of the effort!
What’s the value prop?
Bring testimonials, data, presentations, do events, videos! Sell it!
“Get out of your cube and go talk to people”
Think like a salesperson. Get users (devs/PMs) on board, then the buyers (managers/budget folks), partners and suppliers (other ops guys).
Got here late! By Jonathan Reichhold (@jreichhold) from Twitter.
“Facebook is for useless posts, Twitter is for making fun of celebrities, and Instagram is for young people.” -My 11 year old
Step 2: Set Expectations
Set expectations for times of failure – set communication methods, test your escalation tree.
Be realistic & ambitious. Prioritize what can be fixed and fix it in its due time
Postmortems – improvement has to be part of the process.
Teamwork – management has to support site reliability as a feature, or you’ll burn out your ops guys.
Distributed systems fail – you have to be robust against things that don’t happen “a lot” at small scale. A 1 in 1,000,000 issue is EVERY DAMN MINUTE at scale. Design to be more robust.
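The quick arithmetic, with my own illustrative traffic number (not a figure from the talk):

```python
# "One in a million is every minute at scale" -- illustrative numbers only.
requests_per_second = 20_000            # assume a service doing 20k req/s
failure_probability = 1 / 1_000_000     # the "one in a million" bug

failures_per_minute = requests_per_second * 60 * failure_probability
print(f"{failures_per_minute:.1f} hits per minute")  # ~1.2 -- roughly every minute
```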
Large systems take time to design, stabilize in prod.
Don’t assume. Be rigorous and vigilant.
Degrade gracefully, shed load
Don’t “learn bad lessons” from retrospectives like “never touch the X!”
Capacity planning – do it just in time but be realistic. Figure out real buffers. “Facebook with their huge custom datacenters is all nice but that’s not us.”
Hardware has lead time. [Ed: That’s why it’s for punks]
This is a marathon not a sprint. You have to keep yourself healthy or you’ll crash. Maintain your systems and yourself.