Operations Level Up Storify Notes from @wickett

These are some notes from the Operations Level Up talk at the Velocity 2013 Conference. The Agile Admin crew is out at Velocity Conference this year and live-blogging as we go.


Velocity 2013 Day 1 Liveblog – Monitoring and Observability

I’m in San Jose, California for this year’s Velocity Conference! James, Karthik and I flew in on the same flight last night.  I gave them a ride in my sweet rental minivan – a quick In-n-Out run, then to the hotel where we ended up drinking and chatting with Gene Kim, James Turnbull, Marcus, Rembetsy, and some other Etsyers, and even someone from our client Nordstrom’s.

Check out our coverage of previous Velocity events – Peco and I have been to every single one.

I always take notes but then don’t have time to go back and clean them up and post them all – so this time I’m just going to liveblog and you get what you get!

Theo Schlossnagle of OmniTI, getting back to his roots by rocking a psycho hillbilly hairstyle, kicked off the first workshop of the day on Monitoring and Observability. The slides are on Slideshare.

Theo Schlossnagle

The talk starts with a bunch of basic term definitions.

  • Observability is about measuring “things” or state changes without altering them too much while observing them.
  • A measurement is a single value from a point in time that you can perform operations upon.

“JSON makes all this worse, being the worst encoding format ever.” JSON lets you describe, for example, arbitrarily large numbers, but the implementations that read/write it are inconsistent.

  • A metric is the thing you are measuring.  Version, cost, # executed, # bugs, whatever.

Basic engineering rule – Never store the “rate” of something.  Collect a measurement/timestamp for a given metric and calculate a rate over time.  Direct measurement of rates generates data loss and ignorance.
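
To make the tenet concrete, here’s a minimal sketch (mine, not Theo’s) of deriving a rate from raw measurements after the fact:

def rate(samples):
    """samples: (unix_timestamp, counter_value) tuples, oldest first.
    Returns (timestamp, per-second rate) for each consecutive pair."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t1 > t0:
            rates.append((t1, (v1 - v0) / (t1 - t0)))
    return rates

# The raw measurements survive; the rate can be recomputed later at
# whatever resolution you need.
print(rate([(0, 100), (60, 160), (120, 400)]))  # [(60, 1.0), (120, 4.0)]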

  • Measurement velocity is the rate at which new measurements are taken
  • Perspective is from where you’re taking the measurement
  • Trending means understanding the direction/pattern of your measurements on a metric
  • Alerting, durr
  • Anomaly detection is determining that a specific measurement is not within reason

All this is monitoring.  Management is different; we’re just going to talk about observation. Most people suck at monitoring – they monitor the wrong things and miss the really important ones.

Prefer high level telemetry of business KPIs, team KPIs, staff KPIs… Not to say don’t measure the CPU, but it’s more important to measure “was I at work on time?”, not “what’s my engine tach?” That’s not “someone else’s job.”

He wrote Reconnoiter (open source) and runs Circonus (a service) to try to fix these deficiencies.

“Push vs pull” is a dumb question, both have their uses. [Ed. In monitoring, most “X vs Y” debates are stupid because both give you different valid intel.]

Why pull?

  • Synthesized observations are desirable (e.g. “URL monitor”)
  • Observable activity infrequent
  • Alterations in observation/frequency are useful

Why push?

  • Direct observation is desirable
  • Discrete observed actions are useful (e.g. real user monitoring)
  • Discrete observed actions are frequent

“Polling doesn’t scale” – false. This is the age where Google scrapes every Web site in the world, you can poll 10,000 servers from a small VM just fine.

So many protocols to use…

  • SNMP can push (trap) and pull (query)
  • collectd v4/v5 push only
  • statsd push only
  • JMX, etc etc etc.

Do it RESTy. Use JSON now.  XML is better, but people stop listening to you when you say “XML” – they may be dolts, but I got tired of swimming upstream. PUT/POST for push and GET for pull.
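
A minimal sketch of that pattern in Python – the endpoint URLs and payload shape here are invented for illustration, not from the talk:

import json
import urllib.request

measurement = {"metric": "orders_completed", "value": 42, "ts": 1371500000}

# Push: POST the measurement as JSON to a collector
req = urllib.request.Request(
    "http://collector.example.com/metrics",
    data=json.dumps(measurement).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)

# Pull: GET the agent's current telemetry as JSON
with urllib.request.urlopen("http://agent.example.com/") as resp:
    telemetry = json.loads(resp.read())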

nad – Node Agent Daemon, a new open source widget Theo wrote; use this if you’re trying to escape from the SNMP hellhole.  Runs scripts, hands back JSON. Can push or pull. Does SSL. Tiny.

But that’s not methodology, it’s technology. Just wanted to get “but how?” out of the way. The more interesting question is “what should I be monitoring?”  You should ask yourself this before, during, and after implementing your software. If you could only monitor one thing, what would it be?  Hint: probably not CPU. Sure, “monitor all the things” but you need to understand what your company does and what you really need to watch.

So let’s take an example of an ecomm site.  You could monitor if customers can buy stuff from your site (probably synthetic) or if they are buying stuff from your site (probably RUM). No one right answer, has to do with velocity.  1 sale/day for $600k per order – synthetic, want to know capability. 10 sales/minute with smooth trends – RUM, want to know velocity.

We have this whole new field of “data science” because most of us don’t do math well.

Tenet: Always synthesize, and additionally observe real data when possible.

Synthesizing a GET with curl gets you all kinds of stuff – code, timings (first byte, full…), SSL info, etc.
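
Here’s a rough Python equivalent of that synthesis (curl itself exposes these via --write-out; this sketch just shows the idea):

import time
import urllib.request

def synthetic_get(url):
    start = time.time()
    resp = urllib.request.urlopen(url)
    first = resp.read(1)               # forces time-to-first-byte
    ttfb = time.time() - start
    body = first + resp.read()         # drain the rest of the response
    total = time.time() - start
    return {"code": resp.status, "ttfb_s": ttfb,
            "total_s": total, "bytes": len(body)}

print(synthetic_get("http://example.com/"))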

You can curl but you could also use a browser – so try phantomjs. It’s more representative, you see things that block users that curl doesn’t interpret.

Demo of nad to phantomjs running a local check with start and end of load timings.

Passive… Google Analytics, Omniture.  Statsd and Metrics are a mediocre approach here when you have lots of observable data – an aggregate like the average of N over the last X time is not useful. NO RATES I TOLD YOU DON’T MAKE ME STICK YOU! At least add stddev, cardinality, min/max/95th/99th… But these things don’t follow standard distributions, so e.g. stddev is deceptive.  If you take 60k API hits and boil them down to 8 metrics you lose a lot.

How do you get more richness out of that data? We use statsd to store all the data and show histograms. Oh look, it’s a 3-mode distribution, who knew.

A heat map of histograms doesn’t take any more space than a line graph of averages and is a billion times more useful.  Can use some tools, or build in R.
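
A quick way to convince yourself of this – a sketch summarizing the same bimodal latency data both ways (the numbers are invented):

import random
import statistics

# Bimodal population: most requests fast, a chunk slow
latencies = [random.gauss(20, 3) for _ in range(50000)] + \
            [random.gauss(200, 20) for _ in range(10000)]

# The summary stats describe a population that doesn't exist...
print("mean=%.1f stddev=%.1f" % (statistics.mean(latencies),
                                 statistics.stdev(latencies)))

# ...while a crude text histogram makes the second mode obvious
for lo in range(0, 260, 20):
    n = sum(lo <= x < lo + 20 for x in latencies)
    print("%3d-%-3dms %s" % (lo, lo + 20, "#" * (n // 1000)))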

Now we’ll talk about dtrace… Stop having to “wonder” if X is true about your software in production right now. “Is that queue backed up? Is my btree imbalanced?” Instrument your software. It’s easy with DTrace but only a bit more work otherwise.

Use case – they wrote a metrics db called Sauna. They can just hit it and get a big JSON telemetry exposure with all the current info, rollups, etc.
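
In the same spirit, a toy sketch of an app exposing its own counters as JSON over HTTP (the metric names and port are invented):

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = {"queue_depth": 0, "requests_total": 0}  # updated by the app
LOCK = threading.Lock()

class TelemetryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with LOCK:
            body = json.dumps(METRICS).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Run alongside the real work; then: curl http://localhost:8080/
HTTPServer(("", 8080), TelemetryHandler).serve_forever()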

Monitoring everything is good but make sure you get the good stuff first and then don’t alert on things without specific remediation requirements.

Collect once and then split streams – if you collect and alert in Zabbix but graph in graphite it’s just confusing and crappy.

Tenet: Never make an alert without a failure condition in plain English, the business impact of the failure condition, a concise and repeatable remediation procedure, and an escalation path. That doesn’t have to all be “in the alert” but linking to a wiki or whatever is good.
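
For example, an alert that passes this bar might read (invented example):

Failure condition: checkout API error rate over 1% for 5 minutes.
Business impact: customers can’t complete purchases; direct revenue loss.
Remediation: follow the “checkout errors” runbook on the wiki – restart the order service, confirm the queue drains.
Escalation: page the payments on-call; after 15 minutes, their manager.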

How to get there? Do alerting postmortems. Understand why it alerted, what was done to fix, bring in stakeholders, have the stakeholder speak to the business impact. [Ed. We have super awful alerting right now and this is a good playbook to get started!]

Q: How do you handle alerts/oncall?  Well, the person oncall is on call during the day too, so they handle 24×7. [Ed. We do that too…]

Q: How does your monitoring system identify the root cause of an issue?  That’s BS, it can’t without AI.  Human mind is required for causation.  A monitoring system can show you highly correlated behavior to guide that determination. Statistical data around a window.

Q: How to set thresholds?  We use lots. Some stock, some Holt-Winters, starting into some Markov… Humans train it on which algorithms are “less crappy.”

Q: Metrics db? We use a commercial one called Snowth that is cool, but others use cassandra successfully.

Q: How much system performance compromise is OK to get the data? I hate sampling because you lose stuff, and dropping 12 bytes into UDP never hurt anyone… Log to the network, transmit everything, then decide later how to store/sample.

Don’t forget to check out his conference, SURGE.


Welcome to Velocity 2013!

All the agile admins – James (@wickett), Peco (@bproverb), and Ernest (@ernestmueller) are reunited at the Web performance and operations conference Velocity again this year!  And with us we have newer cool guys, dev extraordinaire Karthik (@iteration1) from Mentor Graphics and operations muscleman Bryan (@bryguy1211) from Bazaarvoice, and some of our old NI colleagues, Eric and Matt.  Then some of us are staying over for DevOpsDays Mountain View.

So buckle in and experience one of the handiest Web/DevOps conferences by proxy! We’re encouraging everyone to liveblog along here on the agile admin.  I always take notes but always run out of time to prettify and post them after so I’m trying liveblogging in hopes of staying caught up. Comment if you are getting value out of it to encourage us to keep it up!

I hear the Velocity marketing stuff is using one of my quotes from the blog, which is cool; it credits the old defunct webadminblog.com, but we’ve moved here now!


DevOpsDays Austin 2013 Is Upon Us

Well, I hope you got a ticket for next week’s event because we’re sold out, sponsor slots sold out, all filled up with volunteers, the train has left the station.  It’s going to be a sweet ride.  Check out the program – Patrick Debois, John Willis, Gene Kim, Nick Galbreath, and many more will be speaking.

We have a sweet venue, the Marchesa; and on the first evening, Tuesday, you should free up your night.  We’ve got a happy hour from Dell, the band Lord Buffalo is playing, and then the Austin Film Society will be doing a private screening of Office Space for us! Then at 10 if you’re still rarin’ to go we can hook you up. We’re providing breakfast and lunch both days; expect breakfast tacos, barbecue, all the Texas standbys.

Of the agile admins, James and I have been working (along with the many other great volunteers who put many hours into putting together the event) to make it a fun and informative time for everyone who’s signed up!  Come early, leave late!


Chef your haproxy load balancer and add encryption

As of last September, HAProxy supports SSL, so you no longer have to put stud/stunnel/nginx in front of it; it can also connect over SSL to backend servers, so traffic can be encrypted the whole way to the app server.  Most people decrypt on the load balancer and then pass traffic to their app servers unencrypted, but I am not a big fan of that architecture.  This post shows you how to set up HAProxy with Chef, with SSL all the way to the app servers. Big thanks to @jtimberman for his post on encrypted data bags, which helped me figure this out.

Set up your Chef encrypted data bag to store your SSL cert

The first step is to create a secret key for your data bag. This will be used to encrypt the data bag, and later by Chef nodes to decrypt it so they can read from it. Do not store encrypted_data_bag_secret in source control as-is; instead, you can put it into a KeePass database and store that in source control if you want to.

openssl rand -base64 512 > ~/.chef/encrypted_data_bag_secret

Next you have to create the data bag, which we have aptly called secrets:

knife data bag create secrets

Now we can store our wildcard cert in the secrets data bag. This command will open an editor where you can copy and paste your cert and key. The contents go to the Chef server, not to local disk. I set three fields: id, cert, and key.

knife data bag create secrets wildcard --secret-file ~/.chef/encrypted_data_bag_secret

The last step uploaded your wildcard cert to the Chef server and encrypted it.   The next step saves off a JSON export of the encrypted wildcard cert, which we can check into source control and version.  Later, if we get in a bind, we can tell Chef to import the data bag from this JSON export.

mkdir data_bags/secrets
knife data bag show secrets wildcard -Fj > data_bags/secrets/wildcard.json

This next step is just a sanity check to make sure the data bag export looks good.  It should look like this:

cat data_bags/secrets/wildcard.json
{
  "id": "wildcard",
  "cert": "encrypted string here",
  "key": "encrypted string here"
}

Create your own wrapper cookbook

Now your Chef cookbook can access this wildcard cert. The next step requires you to write your own wrapper cookbook, which doesn’t do much other than set default attributes, pull the wildcard cert from the data bag, write it to a file, and then call the haproxy cookbook to do the install.  (My cookbook for this is in a private GitHub repo because we do some custom steps and set some settings that don’t apply to everyone, but if you create a new cookbook and follow these steps, you should be set.)
Create a cookbook

knife cookbook create my-loadbalancer

Next, change the recipes/default.rb to look like this:

# Pull the cert and key from the encrypted data bag
wildcard = Chef::EncryptedDataBagItem.load("secrets", "wildcard")
my_cert = wildcard['cert'].chomp # you may not need this chomp, but I did
my_key = wildcard['key'].chomp   # you may not need this chomp, but I did

# Feed the cert and key into the Chef template
template "/etc/ssl/private/haproxy.pem" do
  source "haproxy_pem.erb"
  owner "root"
  group "root"
  mode 0400
  variables(:wildcard_key => my_key,
            :wildcard_crt => my_cert)
end

# Install haproxy - we use a forked version of the haproxy cookbook
# to install 1.5-dev17 from source and add SSL
include_recipe "haproxy::app_lb"

Add this template to your cookbook. The template for haproxy.pem is pretty basic. Here are the contents of templates/default/haproxy_pem.erb:

<%= @wildcard_crt %>
<%= @wildcard_key %>

The line include_recipe “haproxy::app_lb” actually installs our forked version of the haproxy cookbook, which adds the line below to templates/default/haproxy-app_lb.cfg.erb to set up the SSL binding.

bind 0.0.0.0: ssl crt /etc/ssl/private/haproxy.pem

You can check out our fork of the chef-haproxy cookbook to see how we install from source, what default attributes you can set, and how our haproxy.cfg template uses the SSL certs.

To recap: we uploaded our cert to an encrypted data bag, added a recipe to pull it out and put it in a file (haproxy.pem), and changed the haproxy cookbook to use that file for SSL. Hope this helps, and if you run into any problems let me know.


Must Read: The Phoenix Project

Have you read the famous systems-management novel The Goal?  No, I know you haven’t, don’t feel bad, I only got to it this year myself.

Well, Gene Kim – entrepreneur, consultant, founder of Tripwire, and general insatiable tweeter – has written a sequel of sorts with his Visible Ops coauthors Kevin Behr and George Spafford.  Bearing the tongue-twisting title The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win, it’s like The Goal in that it’s written as a novel about the people in a large IT shop who are faced with all the usual soul-crushing BS that we all get faced with, but use Lean and DevOps and pluck and courage to overcome it.

I got to be a pre-reader on large chunks of the book and I like it; it definitely has characters and situations directly torn from your IT department. (Man, the security guy… Just like every security guy…) And I’ve seen these techniques work, so I know it’s not just a wish-fulfillment novel.

If you’re wondering how DevOps can help you because “you live in the real world, man,” this is a good read that’ll give you some ideas along those lines! I’ve seen Gene speak everywhere from AppSec USA to South by Southwest Interactive to DevOpsDays to Velocity… Go read the testimonials from everyone from Cockcroft to Humble to even yours truly, and then buy the book!


Awesome Austin Events!

Besides the “big one,” DevOpsDays Austin 2013, there’s a bunch of great events going on in Austin for techies.

The Agile Austin DevOps SIG meets every last Wednesday over lunch at Bazaarvoice; lunch is provided.  This month’s meeting on January 30 is Breaking the Barriers.

The Austin Cloud User Group meets every third Tuesday in the evening at Pervasive; dinner is provided. This month’s meeting is January 15, sponsored by Canonical, and there is a talk on Openstack Quantum, the network virtualization platform.

South by Southwest Interactive is here of course on March 8-12.

Hip security event BSides Austin is on March 21-22.

Data Day Austin is on January 29.

Texas Linux Fest will be May 31 – June 1.

It’s never been a better time to be a techie in Austin!


DevOpsDays Austin 2013 Is Coming!

It’s been quiet here on the blog but we’ve been busy… And one of those items of busy-ness is setting up DevOpsDays Austin 2013!

Many of you came to DevOpsDays Austin 2012, the largest super-awesome Austin DevOps event ever!  Well, it’s even bigger this year.  Registration is open for DevOpsDays Austin 2013! Sign up quick to be assured a spot.

It’ll be April 30th and May 1st at the Marchesa in the middle of Austin. Because we had to pay for a venue this time to let more people attend, and to prevent no-shows from taking up the limited slots, we’ve instituted a $120 early bird fee for the event, which covers the food/venue/shirts for both days.

Proposals are open too, as are opportunities for companies to sponsor – the more sponsorships, the more cool activities and goodies for everyone involved!

I had a blast at DevOpsDays Austin in 2012 and this year stands to be huge.  Come on out and share expert tips with other elite DevOps practitioners from around the world! Patrick Debois and a growing list of other respected DevOps ninjas will be in attendance.

Email organizers-austin-2013@devopsdays.org with questions!


AppSec USA 2012 Is Here (in Austin)!

AppSec USA 2012, the big OWASP security convention, is here in Austin this year!  And the agile admin’s own @wickett is coordinating it.

“Why do I care if I’m not a security wonk,” you ask? Well, guess what, the security world is waking up and smelling the coffee – this isn’t like security conventions were even just a couple years ago.  There’s a Cloud track and a Rugged DevOps track.

We have like 20 people going from Bazaarvoice. It’s two days, Thursday and Friday (yes, tomorrow – I don’t know why James didn’t post this earlier, sorry) and just $500. So it’s cheap and low impact.

And who’s speaking?  Well, how about Douglas Crockford, inventor of JSON?  And Gene Kim, author of Visible Ops?  That’s not the usual infosec crowd, is it?  Also Michael Howard from Microsoft, Josh Corman from Akamai, a trio of Twitter engineers, Nick Galbreath (formerly of Etsy), Jason Chan from Netflix, Brendan Eich from Mozilla… This is a star-studded techie event that you want to be at!

I’ll be there and will report in…


Speeding Up Releases

Hi all!  My new job’s been affording me few opportunities for blogging, but I’m getting into the groove, so you should see more of me now.

Releasing All The Time!

Continuous integration is the bomb.  We can all generally agree on that.  But my life has become one of halfway steps that I think will be familiar to many of you, and I don’t believe in hiding the real world, which isn’t all case-study perfect.  So rather than give you the standard theory-list of “what you should do for nice futuristic DevOps releases,” let me tell you of our march from a 10 week to 2 week to 1 week release tempo at Bazaarvoice.

Biweekly Releases!

I started with BV at the start of February of this year. They said, “Our new release manager!  We’ve been waiting for you!  We adopted agile and then tried to move from our big-bang 10 week release cycle to 2 weeks and it blew up like you wouldn’t believe.  Get us to two week releases.  You’ve got a month. Go!”  The product management team really needed us to be able to roll out features more quickly, do piloting and A/B testing, and generally be way more agile in delivery to the customer and not just in dev-land.

Background – our primary application is for the collection and display of user generated content – for example, ratings and reviews – and a lot of the biggest Internet retailers use our solution for that purpose. The codebase started seven years ago and grew monolithically for much of that time. (“The monolith” was the semi-affectionate code name for the stack when I started, as in “is your app’s code on the monolith?”) The app runs across multiple physical and cloud based datacenters and pushes out billions of hits a day, so there’s a low tolerance window for errors – our end user facing display apps have to have zero downtime for releases, though we can take up to two hours of downtime in a 3-5 AM window for customer administrative systems. The stack is Java, Linux, MySQL, Solr, et al. Extremely complex, just like any app added on to for years.

There had been a SWAT team formed after the semi-disastrous 2 week release that identified the main problems.  Just like everywhere else, the main impediments were:

  • Lack of automation in testing
  • Poor SCM code discipline

Our CTO was very invested in solving the problem, so he supported the solution to #1 – the QA team hired up and got some automation folks in, and the product teams were told they had to stop feature development and do several sprints of writing and automating tests until they could sustain the biweekly cadence.

The solution to #2 had two parts.  One was a feature flagging system so we could launch code “dark.” We had a crack team of devs crank this one out. I won’t belabor it because Facebook etc. love to do DevOps presentations on the approach and its benefits, but it’s true.  Now we release all code dark first, and can enable it for certain clients or other segments.
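
A minimal sketch of the flag check itself (our real system is internal; the names and structure here are invented):

# Flags can be globally on, or enabled per client for piloting/A-B tests
FLAGS = {
    "new_review_widget": {"enabled": False, "client_allowlist": {"acme"}},
}

def flag_on(name, client_id):
    flag = FLAGS.get(name)
    if flag is None:
        return False
    return flag["enabled"] or client_id in flag["client_allowlist"]

def render_old_widget(client_id):
    return "<old widget for %s>" % client_id

def render_new_widget(client_id):
    return "<new widget for %s>" % client_id

def render_reviews(client_id):
    # The dark-launched code path is in production but off by default
    if flag_on("new_review_widget", client_id):
        return render_new_widget(client_id)
    return render_old_widget(client_id)

print(render_reviews("acme"))      # gets the new path
print(render_reviews("other_co"))  # gets the old path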

Two was process – a new branching process where a single release branch comes off trunk every two weeks, several days before release; changes aren’t allowed on it except to fix issues found in QA, and those are approved and labeled into discrete release candidates. The dev environment gets trunk twice a day; the QA environment gets the branch every time a new release candidate is labeled. Full product CIT must be passing to cut a release candidate. As always, process steps like this sound like common sense, but when you need 100 developers in 10 teams to adopt them immediately, the little issues come out and play.

There were a couple issues we couldn’t fix in the time allotted.  One was that our Solr indexes are Godawful huge.  Like 20 GB huge.  JVM GC tuning is a popular hobby with us. To make changes, reindex, and distribute the indexes in time to perform a zero-downtime deployment, with replication lag nipping at our heels, was a bigger deal.  The other was that our build and deploy pipeline was pretty bad.  All the keywords you want to hear are there – Puppet, TeamCity, Rundeck, svn, noah, maven/Nexus, yum…  But they are inconsistently implemented, embedded in a huge crufty bash script framework and parts have gone largely untended.

The timeframe was extremely aggressive.  I project managed the hell out of it, all the teams were very on board and helpful, and management was very supportive.  I actually got a slight delay, and was grateful for it, because our IPO came up on the same day we were supposed to start biweekly releases, and even the extremely ambitious were taken aback by the risk of cocking up the service on that day. We did our first biweekly release on March 6th and then every two weeks thereafter.  We had a couple rough patches, but they were good learning experiences.

For example, as our first biweekly release day approached, tests just weren’t passing. I brought all the dev managers to the go/no-go meeting (another new institution) and asked them, “are we go?” (The release manager role had been set up by upper management as more prescriptive, with the thought I’d be sitting there yelling “It’s no-go” at them, but that’s really not an effective long term strategy.)  They all kinda shuffled, and hemmed, and hawed – there was a lot of pressure from internal stakeholders to get this release out NOW – but then one manager said “No, we’re no go.  It’s just not safe.” Once she said that, everyone else got over the initial taboo of saying “no go” and concurred that some of their areas were no go.  The release went out 5 calendar days late but a lot more smoothly than the last release did (44 major issues then, 5 this time).

The next release, though, was the real make-or-break.  On the one hand, everyone had had a first real pass through the process, so some of the “but I didn’t know I needed to have testing signoff by that day and time” breaking-in static was gone; on the other hand, they’d had 2 months between the previous two releases to test and plan, and this one allowed only two weeks.  It went off with no delay and only 1 issue.

Of course, we had deliberately sandbagged that a little because it coincided with a “test development only” sprint.  But anyone who thinks a complex release in a large scale environment will go smoothly just because you’re deploying code with no functional changes has clearly never been closer than a 10-foot pole to real world Web operations. As we ramped back up on feature development, the process was also becoming more ingrained and testing better, so it went well.

We had one release go bad in May, and when we looked at it we realized a lot of changes weren’t being sufficiently QA’ed.  So we simply added a set of fields to all JIRA tickets for the team to specify who tested the change, and we wrote a script to parse our Subversion commit comments and label JIRA tickets with the appropriate release (trying to get people to actually fill out tickets correctly is a pain and usually doomed to failure, so we made an end run with automation).  Then, as a release came up, a wiki page listed all the tickets in the release and who tested them and how (automatic, manual, did not test). We actually did this for two releases with paper printouts and physical signoffs to develop the process before we automated it.  This corrected the issue and we ran from then on with very low problem rates. As advertised, releasing fewer changes more frequently allows us to get both a higher throughput of changes and, paradoxically, higher quality with them.
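
A sketch of what that automation looks like – the JIRA URL, auth, label format, and rev number below are invented, not our actual script:

import json
import re
import subprocess
import urllib.request

def jira_keys_since(rev):
    """Collect JIRA issue keys mentioned in svn commit messages since rev."""
    log = subprocess.check_output(
        ["svn", "log", "-r", "%d:HEAD" % rev, "^/trunk"], text=True)
    return set(re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", log))

def label_issue(key, label, auth_header):
    """Add a release label to a JIRA ticket via the REST API."""
    payload = {"update": {"labels": [{"add": label}]}}
    req = urllib.request.Request(
        "https://jira.example.com/rest/api/2/issue/" + key,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
        method="PUT",
    )
    urllib.request.urlopen(req)

for key in jira_keys_since(12345):
    label_issue(key, "release-2013-06-18", "Basic <base64 credentials>")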

Weekly Releases!

The process worked great through the summer. In the biweekly release communications and presentations, I had explained we’d be moving to weekly and then to continuous deployment as soon as we could make it happen. Well, the Solr index distribution problem took a while – two reorgs kicked it around, and it was an ambitious, pretty propellerhead “use BitTorrent to distribute the index to all the servers in our various DCs” kind of thing that had to happen. It took the summer to get that squared away. In the meantime I also conducted an internal project called “Neverland” to fix some of the most egregious technical debt in our TeamCity and Nexus setup and deployment scripts.

The real testament to the culture change that happened as part of the biweekly release project is that while that project was a “big deal” – I had stakeholders from all over the business, big all hands presentations, project plans out the yin-yang, the entire technical leadership team sweating the details – moving from biweekly to weekly releases was largely a non-event.

The QA team worked in the background leading up to it to push test automation levels higher. Then we basically just said “Hey, you guys want to release faster don’t you?” “Well sure!” “OK, we’re going weekly in two weeks. Check out the updated process docs.” “All right.” And we did, starting with the first release in September.  The Solr index got reindexed and redistributed (and man, it had been a while – it compacted down nicely) and deployment ran great. No change in error rate at all. We’ve been weekly since then; the only change is that we don’t release during critical change freeze windows around Black Friday/Cyber Monday and other holiday prime times. We think our setup is robust enough that it’s safe to release even then but, heck, no one’s perfect, so it’s probably prudent to pause, and many of our clients are really adamant about holiday change freezes to us and to all their suppliers.

The one concern voiced by engineers about the overhead of the release process was addressed by automating it more and by educating.  For example, the go/no-go meeting was, at times, a little messy. Some of the other teams (especially ones not located in Austin) wouldn’t show up, or test signoffs wouldn’t be ready, and it would turn into delays and running around. The opportunity to do it more quickly actually helped a lot! Whereas the meeting had been 30 minutes if we were lucky when we started, now the meeting is taking 5 minutes, and only longer when someone screws around and doesn’t dial into the Webex on time.

“If it’s painful, do it more often” is a message some folks still balk at, but it is absolutely true.

Now, the path wasn’t easy and I was blessed with a very high caliber of people at Bazaarvoice – Dev, Ops, and QA. Everyone was always very focused on “how do we make this work” or “how do we improve this” with very little of the turf warring, blocking, and politics that I sadly have come to expect in a corporate environment. The mindset is very much “if we come up with a new way that’s better and we all agree on that, we will change to do that thing TOMORROW and not spend months dithering about it,” which is awesome and helped drive these changes through much faster than, honestly, I initially estimated it would take.

Releasing All The Time!

Continuous deployment on “the monolith” was a distant myth initially, but now we’re seeing how we can get there and the benefits we’ll reap from doing so. Our main remaining impediments are:

1. CIT not passing.  We don’t have a rule where checkins are blocked if CIT is failing, mainly because a bunch of old legacy tests are flaky. This often results in release milestones being delayed because CIT isn’t passing and there are 6 devs’ checkins in the last failing build. Step 1 is to fix the flaky tests and step 2 is to declare a work stoppage whenever CIT is failing. The senior developers see the wisdom in this, so I expect it to go down without much friction. Again, the culture is very much about ruthlessly adopting an innovation if the key players agree it will be beneficial.

2. Builds, CIT, and deployment are slow as molasses in January. Build 1 hour, CIT 40 minutes, deploy 3 hours. Why? Various legacy reasons that give me a headache when I have to listen to them. Basically “that’s how it is now, and a complete rewrite is potentially beyond any one person’s ability and definitely would take multiple man-months.” We’re analyzing what to do here. We also have a “staging” environment customers use for integration, so currently we have to deploy to dev, test, deploy to QA, test, deploy to staging (hitting the downtime window), test, deploy to production (hitting the downtime window), test. So basically 2 days minimum. However, staging is really production, and step one is to release them at the same time.  There are a couple “but I can only test this kind of change in staging” items left that basically just require telling someone “figure out how to test it in QA now.” Going to “always release trunk” will remove the whole branch deployment and the separate dev and QA environments. So that’s 2 of 4 deployments removed, but then it’s a matter of figuring out the cost vs. benefit of smashing down parts of that 4:40 of build, CIT, and deploy time. I have one proposal in front of me for chucking all the current deploy infrastructure for a Jenkins-driven one; I need to figure out if it is complete enough…

Am I Doing It Wrong?

Chime in in the comments below with questions or if there’s some way I could have cut the Gordian knot better.  I think we’ve moved about as fast as you can given a lot of legacy code and technical debt (and a lot of other stuff people need to be working on to keep a service up and get out new functionality).   The three-step process I used, which works as it so often does, was:

  1. Communicate a clear vision
  2. Drive execution relentlessly
  3. Keep metrics and continually improve

Thanks for reading, and happy releasing!
