Meet The Agile Admins At Velocity/DevOpsDays Silicon Valley!

Three of the four agile admins (James, Karthik, and I) will be out at Velocity and DevOpsDays this week. Say hi if you see us!

James will be doing a workshop with Gareth Rushgrove on Tuesday 9-10:30 AM, “Battle-tested Code without the Battle – Security Testing and Continuous Integration.” Get hands on with gauntlt and other tools! [Conference site] [Lanyrd]

Ernest is doing a 5 minute sponsor keynote on Thursday, “A 5 Minute Checklist for Application Monitoring.” OK, so it’s during the USA vs Germany game – come see me anyway! I hate keynote sales pitches, so I’m not doing one; I’ll be talking about a Lean approach to monitoring and what to cover in your monitoring MVP. There’s a free white paper too, since what can you really say in 5 minutes? And so you know what to expect, the hashtag you’ll want to use is #getprobed! [Conference site] [Lanyrd]


Filed under Conferences

Monitoring and the State of DevOps

If you haven’t read the new  2014 State of DevOps Report from Puppet Labs and other luminaries, check it out now!

I also pulled out some of their findings on monitoring to inspire a post for the Copperegg blog, Monitoring and the State of DevOps, which I thought I’d mention here too.


Filed under DevOps, Monitoring

Filtering Your Datadog Event Stream

At both NI and Bazaarvoice I was a Datadog user; I wrote a piece for them on filtering the event stream that has just been published on the Datadog blog.  Check it out!


Filed under DevOps

Agile Organization: Separate Teams By Discipline

This is the first in a series of deeper-dive articles that are part of Agile Organization Incorporating Various Disciplines. It’s very easy to keep reorganizing and trying different models without actually learning from the process. I’ve worked with all of these models, so I’m trying to condense the pros and cons to help people understand the implications of the type of organizational model they choose.

The Separate Team By Discipline Model

Separate teams striated by discipline is the traditional method of organizing technical teams – segmented horizontally by technical skill.  You have one or more development teams, one or more operations teams, one or more QA teams.  In larger shops you have even more horizontal subdivisions – like in an enterprise IT shop, under the banner of Infrastructure you might have a data center team, a UNIX admin team, a SAN team, a Windows admin team, a networking team, a DBA team, a telecom team, applications administration team(s), and so on. It’s more unusual to have the dev side specifically segmented horizontally by tech as well (“Java programmers,” “COBOL programmers,” “Javascript programmers”) but not unheard of; it is more commonly seen as “UX team, services team, backend team…”

In this setup in its purest form, each team takes on tasks for a given product or project inside their team, works on them, and either returns them to the requester or passes them on to yet another team. Team members are not dedicated to that product or effort in any way except for the hours they spend working on its to-do(s). Usually this manifests as a waterfall approach, as a product or feature is conceived, handed to developers to develop, handed to QA to test, and finally handed to Operations to deploy and maintain.

This model dates back to the mainframe days, where it works pretty well – you’re not innovating on the infrastructure side, the building’s been built, you’re moving your apps into the pre-built apartment units. It also works OK when you have heavy regulation requirements or are constrained to extensively documenting requirements, then design, etc. (government contracts, for example).

It works a lot less well when you need to move quickly or need any kind of change or innovation from the other teams to achieve your goal. Linking up prioritization across teams is always hard, but that’s the least of the issues. Teams all have their own goals, their own cadences, and even their own cultures and languages. The oft-repeated warning that “devs are motivated to make changes and ops is motivated by system stability” is a trivial example of this mismatch of goals. If the shared teams are supporting a limited number of products it can work. When there are competing priorities, I’ve seen it be extremely painful. I worked in a shop where the multiple separate dev teams were vertical (organized by line of business) but the operations teams were horizontal (organized by technical specialty) – and frankly, trying to deliver results across the impedance mismatch generated by that setup was the nightmare that sent me down the Agile and DevOps path in the first place.

Benefits of Disciplinary Teams

The primary benefit of this approach is that you tend to get stable teams of like individuals, which allows them to bond with colleagues of similar skills and to experience organizational stability and esprit de corps. Providing this sense of comfort ends up being the key challenge of the other organizational approaches.

The second benefit is that it provides a good degree of standardization across products – if one ops team is creating the infrastructure for various applications, then there will be some efficiencies there. This is almost always at least partially, and sometimes more than entirely, counteracted by the fact that not all apps need the same thing and that centralized teams bottleneck delivery and reduce velocity. I remember the UNIX team that would only provide my Web team expensive servers, even though we were highly horizontally scaled and told them we’d rather have twice as many $3,500 servers than half as many $7,000 servers, as it would serve uptime, performance, etc. much better. But progress on our product was offered up upon the altar of nominal cost savings from homogeneity.

The third benefit is that if the horizontal teams are correctly cross-trained, it is easier to avoid having single points of failure; by collecting the workers skilled in something into one group, losses are more easily picked up by others in the group. I have to say, though, that in my experience this benefit is often more honored in the breach – teams tend to naturally divide up until there’s one expert on each thing, and managers who actively maintain a skills portfolio and drive cross-training are sadly rare.

Drawbacks of Disciplinary Teams

Conway’s Law is usually invoked to worry about vertical divisions in a product – one part of the UI written by one team, another by another, such that it looks like a Frankenstein’s monster of a product to the end user. However, the principle applies to horizontal divisions as well – these produce more of a Human Centipede, with the issue of one phase becoming the input of the next. The front end may not show any clear sign of division, but the seams in quality, reliability, and agility of the system can grow like a cancer underneath, which users certainly discover over time.

This approach promotes a host of bad behaviors. Pushing work to other people is always tempting, as is taking shortcuts when the consequences of those shortcuts fall on someone else’s shoulders. With no end-to-end ownership of a product, you get finger-pointing and no one taking responsibility for driving the excellence of the service – and without an overall systems-thinking perspective, attempts by one of the teams in that value chain to drive an improvement in their domain often have unintended effects on the other teams in that chain that may or may not result in overall improvement. If engineers don’t eat their own dog food, but pass it on to someone else, then chronic quality problems often result. I personally spent years trying to build process and/or relationships to mitigate the dev->QA->ops passing of issues downstream, with only mixed success.

Another way of stating this is that shared services teams always provide a route to the tragedy of the commons. Competing demands from multiple customers and the need for “nonfunctional” requirements (performance, availability, etc.) could all potentially be reconciled in priority by a strong product organization – but in my experience this is uncommon; product orgs tend not to care about prioritization of back-end concerns and are more feature driven. Most product orgs I have dealt with have been more or less resistant to taking on platform teams, managing nonfunctional requirements, and otherwise interacting with that half of the demands on the product. Without consistent prioritization, shared teams become the focus of a lot of lobbying by all their internal customers trying to get resources. These teams are frequently understaffed and thus a bottleneck to overall velocity.

Ironically, in some cases this can be beneficial – focusing on cost efficiency over delivering new value is generally a losing game, but some organizations are self-unaware enough that they have teams continuing to churn out “stuff” without any real ROI attached (our team exists, therefore we must make more), in which case a bottleneck is actually helpful.

Mitigations for the weaknesses of this approach (abdication of responsibility and bottlenecking constraints) include:

  1. Very strong process guidance. “If only every process interface is 100% defined, then this will work”, the theory goes, just as it works on a manufacturing line.  Most software creation, however, is not similar to piecing components together to make an iPod. In one shop we worked for years on making a system development process that was up to this task, but it was an elusive goal. This is how, for example, Microsoft makes the various Office products look the same in partial defiance of Conway’s Law – books and books of standards.
  2. Individuals on shared teams with functional team affinities. Though not going as far as embedding into a product team, you can have people in the shared teams who are the designated reps for various client teams. Again, this works better when there is a few-to-one instead of a many-to-one relationship.  I had an ops team try this, but it was a many-to-one environment and each individual engineer ended up with three different ownership areas, which was overwhelming. In addition, you have to be careful not to simply dedicate all of one sort of work to one person, as you then create many single points of failure.
  3. Org variation: Add additional crossfunctional teams that try to bridge the gap. At one place I worked, the organization had accepted that trying to have the systems needs of their Web site fulfilled by six separate infrastructure teams was not working well, so they created a “Web systems” team designed to sit astride those teams, take primary responsibility, and broker needs to the other infrastructure teams. This was an improvement, and led to the addition of a parallel team responsible for internal apps, but it never really got to the level of being highly effective. In addition, those were extremely high-stress roles, as they bore responsibility for, but not control of, all the results.

Conclusion

Though this is historically the most typical organization of technology teams, that history comes from a place much different from many of the situations we find ourselves in today. The rapid collaboration approach that Agile has brought us, and the additional understanding that Lean has given us in the software space, tell us that though this approach has its merits, it is much overused, and other approaches may be more effective, especially for product development.

Next, we’ll look at embedded crossfunctional service teams!


Filed under Agile, DevOps

Use Gauntlt to test for Heartbleed

Heartbleed is making headlines and everyone is making a mad dash to patch and rebuild. Good, you should. This is definitely a nightmare scenario but instead of using more superlatives to scare you, I thought it would be good to provide a pragmatic approach to test and detect the issue.

@FiloSottile wrote a tool in Go to check for the Heartbleed vulnerability. It is provided as a website in addition to a command-line tool, but when I tried to use the site, it seemed to be over capacity – probably because we are all rushing to find out if our systems are vulnerable. To get around this, you can build the tool locally from source using the install instructions on the repo. You need Go installed and the GOPATH environment variable set.
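
If you don’t already have a Go workspace set up, something like this does the trick first (the workspace path here is just an example):

export GOPATH=$HOME/go             # any writable directory works as a Go workspace
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin      # puts the built Heartbleed binary on your path

Then build and install the tool: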

go get github.com/FiloSottile/Heartbleed
go install github.com/FiloSottile/Heartbleed

Once it is installed, you can easily check to see if your site is vulnerable.
Heartbleed example.com:443

Cool! But let’s do one better and implement this as a gauntlt attack, so that we can guard against regressions and automate this a bit further. Gauntlt is a rugged testing framework that I helped create. The main goal of gauntlt is to facilitate security testing early in the development lifecycle. It does so by wrapping security tools with sane defaults and using Gherkin (Given, When, Then) syntax so it is easily understood by dev, security, and ops groups.

In the latest version of gauntlt (gauntlt 1.0.9) there is support for Heartbleed – it should be noted that gauntlt doesn’t install tools, so you will still have to follow the steps above if you want the gauntlt attacks to work. Let’s check for Heartbleed using gauntlt.

gem install gauntlt
gauntlt --version

You should see 1.0.9. Now let’s write a gauntlt attack. Create a text file called heartbleed.attack and add the following contents:

@slow
Feature: Test for the Heartbleed vulnerability

Scenario: Test my website for the Heartbleed vulnerability (see heartbleed.com for more info)

Given "Heartbleed" is installed
And the following profile:
| name | value |
| domain | example.com |
When I launch a "Heartbleed" attack with:
"""
Heartbleed <domain>:443
"""
Then the output should contain "SAFE"

You now have a working gauntlt attack that can be hooked into your CI/CD pipeline that will test for Heartbleed. To see this example attack file on github, go to https://github.com/gauntlt/gauntlt/blob/master/examples/heartbleed/heartbleed.attack.

To run the attack:

$ gauntlt ./heartbleed.attack

You should see output like this:
$ gauntlt ./examples/heartbleed/heartbleed.attack
Using the default profile...
@slow
Feature: Test for the Heartbleed vulnerability

Scenario: Test my website for the Heartbleed vulnerability (see heartbleed.com for more info) # ./examples/heartbleed/heartbleed.attack:4
Given "Heartbleed" is installed # lib/gauntlt/attack_adapters/heartbleed.rb:4
And the following profile: # lib/gauntlt/attack_adapters/gauntlt.rb:9
| name | value |
| domain | example.com |
When I launch a "Heartbleed" attack with: # lib/gauntlt/attack_adapters/heartbleed.rb:1
"""
Heartbleed <domain>:443
"""
Then the output should contain "SAFE" # aruba-0.5.4/lib/aruba/cucumber.rb:131

1 scenario (1 passed)
4 steps (4 passed)
0m3.223s
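
As mentioned above, this drops right into a CI/CD pipeline: a build step just needs to run the attack and fail the build on a non-zero exit code. Here is a minimal sketch of such a step, assuming Ruby and the Heartbleed binary are already present on the build agent:

#!/bin/sh
set -e                          # any failing command fails the build
gem install gauntlt             # or manage the gem with Bundler
gauntlt ./heartbleed.attack     # exits non-zero if any scenario fails, which breaks the build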

Good luck! Let me (@wickett) know if you have any problems.


Filed under DevOps, Security

Amazon Cuts Prices Too

Well, if nothing else I’m happy to have Google Cloud around to provide some competition to push Amazon Web Services. Immediately after Google announced dramatic price drops, Amazon responded by doing the same!

Now if Google can also shame them into dropping their whole crazy reserve instance scheme and going to progressive discounts like Google just did, the world will be a better place.



Google Cloud Update

We had a little get-together here in Austin today, sponsored by MomentumSI and hosted at Capital Factory (thanks to both!), to view the Google Cloud Platform newest product announcement webcast. About 24 local engineers showed up to watch.

You can view the whole thing yourself here, or just read my notes from the event.

Cloud Is Hard

Their thesis statement was that cloud, while cool, is still too hard for many people, hindering adoption or slowing innovation. So they’ve worked on making it easier.

Cost

Cost calculation is super complex (reserved, on-demand, etc.). They talk about “other industry standard clouds,” by which they mean Amazon Web Services. They note the drawbacks of reserved instances, which I totally agree with (see my earlier article Why Amazon Reserve Instances Torment Me for more on that). Specifically, they note that reservations constrain your design choices – and design flexibility is one of the great reasons to go to the cloud in the first place. Amen, brother!

Though cloud prices have been dropping 6-8% a year, hardware prices have been dropping 20-30%. Why is Moore’s Law not translating into more sweet green in our pockets? It should, they contend. Thus, they are announcing on-demand price drops:

  • GCE: 32% price drop
  • Storage: now $0.026/GB for any use
  • $0.02/GB for Durable Reduced Availability (DRA) storage
  • BigQuery: 85% price reduction
  • You can now purchase predictable throughput
They’re introducing sustained-use discounts – no pre-planning or reserving ahead of time; instead, prices automatically drop once a VM has been in use for more than 25% of the month, and progressively from there. Running 100% of the month works out to a 53% discount off the old prices (remember, that includes the new 32% reduction, so another 21 percentage points off the old price for full-month use). Combined with linear machine cost scaling, this makes it simple(r) to predict and calculate your costs.
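
To make that concrete with illustrative numbers: an instance that cost $1.00/hour before the announcement is now $0.68/hour on demand, and running it for the entire month brings the effective rate down to roughly $0.47/hour, about 53% off the original price.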

Other Tradeoffs

The current cloud (hint: AWS) forces other tradeoffs: time to market vs. scalability, flexibility (IaaS) vs. automatic management (PaaS), and big data vs. realtime data analysis.

But first, we interrupt our messaging to talk about other random new features based on customer feedback. To wit:

  • SuSE/Red Hat support
  • Windows Server 2008 R2 (preview) support
  • Cloud DNS service, accessible via API and console

The features are nice but even nicer was that they implemented these based on customer feedback, which means they consider this a real product with real customers and not just a fun tech thing for their own ends (which to be fair 80% of Google’s offerings are, and it can be hard to tell the difference).

Time to Market vs Scalability

So on scaling… you need deployment! Troubleshooting! Use tools you know!

  • They have a new “gcloud” command-line tool.
  • “gcloud init” pulls down the app via git; you can just edit, git commit, git push (a sketch of this flow follows below).
  • They have a build service integrated – it spins up a Jenkins/Maven build, deploys, and you can see release status in the console.
  • There’s also a new unified logs viewer with basic searching – like Splunk junior, with one cool dev feature: click on the code in a stack trace and you’re put directly into the code in the console’s source view. Fix and commit, it auto-builds, bam, you’re fixed.
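
Here is a rough sketch of that gcloud edit-and-deploy loop as described in the keynote. I have not verified the exact preview-era command syntax, and the project name is made up:

gcloud init my-sample-project      # pulls the project source down via git (project name is illustrative)
cd my-sample-project
# ... edit some code ...
git commit -am "fix the thing"
git push                           # the push kicks off the integrated build and deploy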

IaaS vs PaaS

A new halfway state – “managed VMs.” It’s the normal PaaS, but in the config you can tell it things to apt-get install onto the instances, so you can have more third-party software than the PaaS previously allowed. Also, you can “enable debugging” on an instance and then log in to it interactively.

Big Data vs Realtime Data Analysis

They’ve upped BigQuery to 100k rows/sec ingest. Example demo: smart monitoring of 60 events/hour from 400k Glen Canyon power meters (17bn events/month), with about 128k records. They showed a visualization updating in near real time with all those meters geolocated; you can click on one to get realtime data. He also showed a complex BigQuery “big join” to filter by meter lat/long from a separate table and then by quartile across the whole population. “Doing this in NoSQL would be impossible or very slow.”
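
Just for flavor, here is the general shape of that kind of join in 2014-era BigQuery SQL run through the bq command-line tool. The dataset, table, and column names are entirely invented, and this only sketches the join-and-filter part, not the full quartile analysis from the demo:

bq query "
SELECT m.meter_id, AVG(r.kwh) AS avg_kwh
FROM [metering.readings] r
JOIN EACH [metering.meters] m ON r.meter_id = m.meter_id
WHERE m.lat BETWEEN 36.9 AND 37.1 AND m.lon BETWEEN -111.6 AND -111.4
GROUP EACH BY m.meter_id"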

They will be doing a Google Cloud roadshow soon – see cloud.google.com/roadshow – it looks like Austin will be on the list of cities!

Analysis

The good thing about getting a bunch of techies together to view this was the discussion afterwards.  The general sentiment was that:

1. The cost drops are nice and the approach to reserve/sustained use instances is much better. The reserve instance scheme is one of the worst things about AWS and if this drives them to adopt the same model, hooray!

2. The other new features (managed VMs, gcloud) are definitely nice. They are focusing on dev friendliness in their discussion, but it’s a lot less clear how to operate this. If you’re really trying to stitch together a bunch of micro-services, there’s not a lot of great support for that. They talk about using their PaaS and say “of course, if you use our PaaS you don’t need to carry a pager! You’d only need to do that if you’re doing IaaS and maintaining your own OSes.” That is dangerously naive and really made the whole group skittish. Most people there have done “play” things in Google’s cloud but are hesitant to put mission-critical items there, and this section of the presentation didn’t do a lot to improve that.

3. The BigQuery/realtime demo was impressive and multiple people would like to kick the tires on it.

Overall – it was a little light, but it was a keynote; the new features/pricing are all good; this shows more Google commitment to their cloud as a product but actual concerns still linger about maturity and suitability for realistically complex revenue-generating production applications.

 


Filed under Cloud