Busting the Myths of Agile Development: What People are Really Doing

I just watched a good Webcast from an IBM agile expert about the state of Agile in the industry, and it had some interesting bits that touch upon agile operations.

Webcast – (registration required, sadly)

It’s by Scott Ambler, IBM’s practice leader for agile development.  They surveyed programmers using Dr. Dobb’s “IT State of the Union” survey, which has wide reach across geographies and types of programmers; the data in this presentation comes from that and other surveys.  All their surveys and results, even the detailed response data (they’ve done a lot over time), are online for your perusal.

In the webcast, he talks some about “core” agile development extending to “disciplined” agile development, which addresses the full system lifecycle, is both risk and value driven, and has governance and standards.  “Core” raises the questions “where do requirements and architecture come from?” and “how do I get into production?”  Some agile folks who consider themselves purists say these aren’t needed and you should just start coding; he calls this view “phenomenally naive.”

He does mention cases where things like enterprise architecture were slow and introduced huge delays into the process because reviews and signoffs took months.  He calls this “dysfunctional,” but really, isn’t that the way things are unless there’s a pattern to change it?  I think project management and enterprise architecture are suffering from the same problem we operations folks are, which is that we’re just now figuring out “what does it mean to incorporate agile concepts into what we do?”

The meat of the preso covers agile team metrics and busts myths like “it’s only for small colocated teams…”  Here are my summary notes.

  • Agile teams have a higher success rate than traditional teams.  Agile and iterative approaches roughly tie, and both come in much higher (2-4x) on quality, functionality, cost efficiency, and timeliness than traditional or ad hoc processes.
  • Agile isn’t just for coding; it covers project selection/initiation, transition, ops, and maintenance.  More and more folks are doing that, but some get stuck doing agile only in the coding and not in the surrounding parts of the lifecycle.
  • Agile is being used by fully co-located teams only 45% of the time; all the rest are distributed in some manner.  But distribution does affect success some – “far located” teams have 20% lower success than fully co-located ones.  Don’t distribute if you don’t have to.
  • Most orgs are using agile with small teams – the vast majority are size 10 or less – but they see success with even the very large teams.
  • We’re seeing people succeed with agile under compliance frameworks like Sarbox, and governance frameworks like ISO – even ITIL is mentioned.
  • Agile isn’t just for simple projects – the real mix in the wild is actually weighted toward medium to very complex projects.
  • Though agile is great for greenfield projects, a very large percentage of teams are using it in legacy environments.  COTS development is the rarest.
  • 32% of successful agile teams are working with enterprise architecture and operations teams.  Should be more, but that’s a significant inroad.  He says those teams are also most successful when behaving agile (or at least lean).
  • Biggest problems with agile adoption are a waterfall culture (especially one where the overall governance everyone has to plug into is tuned to waterfall) and stakeholder involvement.  Testers say “We need a detailed spec before we can start testing…”  DBAs say “Developers can’t code until we have a complete data model…”  Management resistance is actually the lowest obstacle (14% of respondents)!

A lot of nice stats.  The two biggest takeaways are “agile isn’t just for certain kinds of projects – it’s being used more broadly than that and is successful in many different areas” and “agile is for the entire lifecycle, not just coding.”  As an advocate of agile systems administration, I think it’s a good sign that the larger agile community is wandering our way as we build up our conception of DevOps and wander their way ourselves!


Our First DevOps Implementation

Although we’re currently engaged in a more radical agile infrastructure implementation, I thought I’d share our previous evolutionary DevOps implementation here (way before the term was coined, but in retrospect I think it hits a lot of the same notes) and what we learned along the way.

Here at NI we did what I’ll call a larval DevOps implementation starting about seven years ago, when I came in and took over our Web Systems team – essentially an application administration/operations team for our Web site and other Web-related technologies.  There was zero automation, and the model was very much “some developers show up with code, and we have to put it in production and somehow deal with the many crashes per day that result.”  We would get 100-200 on-call pages a week from things going wrong in production.  We had especially entertaining weeks where Belgian hackers would replace pages on our site with French translations of the Hacker’s Manifesto.  You know, standard Wild West stuff.  You’ve been there.

Step One: Partner With The Business

The first thing I did (remember, this was 2002) was partner with the business leaders to get a “seat at the table” along with the development managers.  It turned out that our director of Web marketing was very open to the message of performance, availability, and security and gave us a lot of support.

This is an area where I think we’re still ahead of even a lot of the DevOps message.  Agile development carries a huge tenet about developers partnering side-by-side with “the business” (end users, domain experts, and whatnot).  DevOps is now talking about ops partnering with developers, but in reality that’s a first stab at the overall more successful model of “biz, dev, and ops all working together at once.”


dev2ops Interview

Want to hear me spout off more about DevOps?  Well, here’s your chance; I did an interview with Damon Edwards of DTO and they’ve posted it on the dev2ops blog!

Killer quote:

“I say this as somebody who about 15 years ago chose system administration over development.  But system administration and system administrators have allowed themselves to lag in maturity behind what the state of the art is. These new technologies are finally causing us to be held to account to modernize the way we do things.  And I think that’s a welcome and healthy challenge.”


Before DevOps, Don’t You Need OpsOps?

From the “sad but true” files comes an extremely insightful point apparently discussed over beer by the UK devops crew recently – that we are talking about dev and ops collaboration but the current state of collaboration among ops teams is pretty crappy.

This resonates deeply with me.  I’ve seen that problem in spades.  I think a lot of the discussion about the agile ops space is too simplistic, in that it seems tuned to organizations of “five guys, three of whom are coders and two of whom are operations” with no differentiation.  In real life, there are often larger orgs and a lot of differentiation that causes various collaboration challenges.  Some people refer to this as Web vs Enterprise, but I don’t think that’s strictly true; once your Web shop grows from 5 guys to 200 it runs afoul of this too – it’s a simple scalability and organizational engineering problem.

As an aside, I don’t even like the “Ops” term – a sysadmin team can split into subgroups that do systems engineering, release management, and operational support…  Just saying “Ops” seems to me to create implications of not being a partner in the initial design and development of the overall system/app/service/site/whatever you want to call it.

Ops Verticals

Here, we have a large Infrastructure department.  Originally it was completely siloed by technology verticals, and there were a lot of subgroups: Network, UNIX, Windows, DBA, Lotus Notes, Telecom, Storage, Data Center…  Some ten-plus years ago, when the company launched its Web site in earnest, they quickly realized that wasn’t going to work out.  You had the buck-passing behavior described in the blog posts above that made issues impossible to solve in a timely fashion, plus it made collaboration with devs/business nearly impossible.  Not only did you need like 8 admins to come involve themselves in your project, but they did not speak similar enough languages – you’d have some crusty UNIX admin yelling “WHAT ABOUT THE INODES” until the business analyst started to cry.

Dev Silos

But are our developers here better off?  They are siloed by business unit.  Just among the Web developers there are the eCommerce developers, eCRM, Product Advisors, Community, Support, Content Management…  On the one hand, they are able to be very agile in creating solutions inside their specific niches.  On the other hand, they are all working within the same system environment, and they don’t always stay on the same page in terms of what technologies they are using.  “Well, I’m sure THAT team bought a lovely million dollar CMS, but we’re going to buy our own different million dollar CMS.  No, you don’t get more admin resource.”  Over time, they created architecture groups and other cross-team initiatives to try to rein in the craziness, with mixed but overall positive results.

Plugging the Dike

What we did was create a Web Administration group (Web Ops, whatever you want to call it) that was holistically responsible for Web site uptime, performance, and security.  Running that team was my previous gig; I did it for five years.  That group was more horizontally focused and served as an interface to the various technology verticals; it worked closely with developers on system design during development, coordinated the release process, and involved devs in troubleshooting during the production phase.

BizOps?

In fact, we didn’t just partner with the developers – we partnered with the business owners of our Web site too, instead of tolerating the old model of “Business collaborates with the developers, who then come and tell ops what to do.”  This was a remarkably easy sell really.  The company lost money every minute the Web site was down, and it was clear that the dev silos weren’t going to be able to fix that any more than the ops silos were.  So we quickly got a seat at the same table.

Results

This was a huge success.  To this day, our director of Web Marketing is one of the biggest advocates of the Web operations team.  Since then, other application administration (our word for this cross-disciplinary ops) teams have formed along the same model.  The DevOps collaboration has been good overall – with certain stresses coming from the Web Ops team’s role as gatekeeper and process enforcer.  Ironically, the biggest issues and worst relationships were within Infrastructure, between the ops teams!

OpsOps – The Fly In The Ointment

The ops team silos haven’t gone down quietly.  To this day the head DBA still says “I don’t see a good reason for you guys [WebOps] to exist.”  I think there’s a common “a thing is just the sum of its parts” mindset among admins for whatever reason.  There are also turf wars arising from the technology silo division and the blurring of technology lines by modern tech.  I tried again and again to pitch “collaborative system administration.”  But the default sysadmin behavior is to say “these systems are mine and I have root on them.  Those are your systems and you have root on them.  Stay on your side of the line and I’ll stay on mine.”

Fun specific Catch-22 situations we found ourselves in:

  • Buying a monitoring tool that correlates events across all the different tiers to help root-cause production problems – but the DBAs refusing to allow it on “their” databases.
  • Buying a hardware load balancer – we were going to manage it, not the network team, and it wasn’t a UNIX or Windows server, so we couldn’t get anyone to rack and jack it (and of course we weren’t allowed to because “Why would a webops person need server room access, that’s what the other teams are for”).

Some of the problem is just attitude, pure and simple.  We had problems even with collaboration inside the various ops teams!  We’d work with one DBA to design a system and then later need to get support from another DBA, who would gripe that “no one told/consulted them!”  Part of the value of the agile principles that “DevOps” tries to distill is just a generic “get it into your damn head you need to be communicating and working together, and that needs to be your default mode of operation.”  I think it’s great to harp on that message because it’s little understood among ops.  For every dev group that deliberately ostracizes their ops team, there are two ops teams who don’t think they need to talk to the devs – in the end, it’s mostly our fault.

Part of the problem is organizational.  I also believe (and ITIL, I think, agrees with me) that the technology-silo model has outlived its usefulness.  I’d like to see admin teams organized by service area with integral DBAs, OS admins, etc.  But people are scared of this for a couple of reasons.  One is that those admins might do things differently from area to area (the same problem we have with our devs) – this could be mitigated by “same tech” cross-org standards and discussions.  The other is that this model is not the cheapest.  You can squeeze every last penny out if you have only 4 Windows admins shared by 8 functional areas.  Of course, you are cutting off your nose to spite your face, because you lose lots more in forgone agility, but frankly corporate finance rules (minimize G&A spending) are a powerful driver here.

If nothing else, there’s not “one right organization” – I’d be tempted to reorg everyone from verticals into horizontals, let that run for 5 years, and then reorg back the other way, just to keep the stratification from setting in.

Specialist vs Generalist

One other issue.  The Web Ops team we created required us to hire generalists – but generalists that knew their stuff in a lot of different areas.  It became very hard to hire for that position and training took months before someone was at all effective.  Being a generalist doesn’t scale well.  Specialization is inevitable and, indeed, desirable (as I think pretty much anything in the history of anything demonstrates).  You can mitigate that with some cross-training and having people be generalists in some areas, but in the end, once you get past that “three devs, two ops, that’s the company” model, specialization is needed.

That’s why I think one of the common definitions of DevOps – all ops folks learning to be developers or vice versa – is fundamentally flawed.  It’s not sustainable.  You either need to hire all expensive superstars that can be good at both, or you hire people that suck at both.

What you do is have people with varying mixes.  In my current team we have a continuum of pure ops people, ops folks doing light dev, devs doing light ops, and pure devs.  It’s good to have some folks who are generalizing and some who are specializing.  It’s not specializing that is bad, it’s specialists who don’t collaborate that are bad.

Conclusion

So I’ve shared a lot of experiences and opinions above but I’m not sure I have a brilliant solution to the problem.  I do think we need to recognize that Ops/Ops collaboration is an issue that arises with scale and one potentially even harder to overcome than Dev/Ops collaboration.  I do think stressing collaboration as a value and trying to break down organizational silos may help.  I’d be happy to hear other folks’ experiences and thoughts!


Defining Agile Operations and DevOps

I recently read a great blog post by Scott Wilson that was talking about the definitions of Agile Operations, DevOps, and related terms.  (Read the comments too, there’s some good discussion.)  From what I’ve heard so far, there are a bunch of semi-related terms people are using around this whole “new thing of ours.”

The first is DevOps, which has two totally different, frequently used definitions.

1.  Developers and Ops working closely together – the “hugs and collaboration” definition

2.  Operations folks adopting development best practices and writing code for system automation

The second is Agile Operations, which also has different meanings.

1.  Same as DevOps, whichever definition of that I’m using

2.  Using agile principles to run operations – process techniques, like iterative development or even kanban/TPS kinds of process stuff.  Often with a goal of “faster!”

3.  Using automation – version control, automatic provisioning/control/monitoring.  Sometimes called “Infrastructure Automation” or similar.

This leads to some confusion, as most of these specific elements can be implemented in isolation.  For example, I think the discussion at OpsCamp about “Is DevOps an antipattern” was predicated on an assumption that DevOps meant only DevOps definition #2, “ops guys trying to be developers,” and made the discussion somewhat odd to people with other assumed definitions.

I have a proposed set of definitions.  To explain it, let’s look at Agile Development and see how it’s defined.

Agile development, according to Wikipedia and the Agile Manifesto, consists of a couple of different “levels” of things.  To sum up the Wikipedia breakdown:

  • Agile Principles – like “business/users and developers working together.”  These are the core values that inform agile, like collaboration, people over process, software over documentation, and responding to change over planning.
  • Agile Methods – specific process types.  Iterations, Lean, XP, Scrum.  “As opposed to waterfall.”
  • Agile Practices – techniques often found in conjunction with agile development, not linked to a given method flavor, like test driven development, continuous integration, etc.

I believe the different parts of Agile Operations that people are talking about map directly to these three levels.

  • Agile Operations Principles includes things like dev/ops collaboration (DevOps definition #1 above); things like James Turnbull’s 4-part model seem to be spot-on examples of trying to define this arena.
  • Agile Operations Methods includes the processes you use to conduct operations – iterations, kanban, stuff you’d read in Visible Ops; Agile Operations definition #2 above.
  • Agile Operations Practices includes specific techniques like automated build/provisioning and monitoring – anything you’d have a “toolchain” for.  This contains DevOps definition #2 and Agile Operations definition #3 above.
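To make the “practices” bucket concrete, here’s a minimal sketch of the kind of toolchain I mean – a Fabric script (a Python remote-execution library) that pushes a version-controlled config change to a set of Web servers and bounces the service.  This is an illustrative sketch, not anyone’s actual tooling; the hostnames, paths, and service name are invented.

    # Minimal sketch of toolchain-style automation; hostnames, paths, and
    # the service name are invented for illustration.
    from fabric.api import env, run, sudo

    env.hosts = ['web1.example.com', 'web2.example.com']

    def deploy_config():
        # Pull the version-controlled config onto each box...
        run('cd /etc/ourapp && git pull origin master')
        # ...then bounce the service to pick up the change.
        sudo('/etc/init.d/httpd reload')

Run “fab deploy_config” and the same change lands on every host – the point being that the change lives in version control and a tool applies it, rather than an admin ssh’ing around by hand.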

I think it’s helpful to break them up along the same lines as agile development, though, because in the end some of those levels should merge once developers understand that ops is part of system development too…  There shouldn’t be a separate “user/dev collaboration” and “dev/ops collaboration”; in a properly mature model it should become a “user/dev/ops collaboration,” for example.

I think the dev2ops guys’ “People over Process over Tools” diagram mirrors this about exactly – the people being one of the important agile principles, process being a large part of the methods, and tools being used to empower the practices.

What I like about that diagram, and why I want to bring this all back to the Agile Manifesto discussion, is that having various sub-definitions increases the risk that people will implement the processes or tools without the principles in mind, which is definitely an antipattern.  The Agile guys would tell you that iterations without collaboration are not likely to work out real well.

And it happens in agile development too – there are some teams here at my company that have adopted the methods and/or tools of agile but not its principles, and the results are suboptimal.

Therefore I propose that “Agile Operations” is an umbrella term for all these things, and we keep in mind the principles/methods/practices differentiation.

If we want to call the principles “devops” for short and some of the practices “infrastructure automation” for short, I think that would be fine…  Although dev/ops collaboration is ONE of the important principles, it’s probably not the entirety; and infrastructure automation is one of the important practices, but there are probably others.


A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article “Golden Image or Foil Ball?”, I was thinking pretty hard about the use of images in our new automated infrastructure.  He’s pretty against them.  After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed – Starting a prebuilt image is faster than reinstalling everything on an empty one.  In the world of dynamic scaling, there’s a meaningful difference between a “couple minute spinup” and a “fifteen minute spinup.”  (There’s a sketch of the two spinup paths after this list.)
  2. Reliability – The more work you are doing at runtime, the more there is to go wrong.  I bet I’m not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of ’em.  And the more stuff you’re pulling to build your image, the more failure points you have.
  3. Flexibility – Dynamically building from a stem cell kinda makes sense if you’re using 100% free open source and have everything automated.  What if, however, you have something to install that just hasn’t been scripted – or is very hard to script?  Like an install of some half-baked Windows software that doesn’t have a command line installer, and you don’t have a tool that can do it?  In that case, you really need to do the manual install in non-realtime as part of an image build.  And of course many suppliers are providing software as images themselves nowadays.
  4. Traceability – What happens if you need to replicate a past environment?  Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons.  “I keep a bunch of old software repo versions so I can mostly build a machine like it” – somewhat less so.
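To put some flesh on the speed and reliability points, here’s a minimal sketch of the two spinup paths using boto (the Python EC2 library); the AMI IDs and key name are placeholders, not real values.

    # Sketch of the two spinup paths; AMI IDs and key name are placeholders.
    import boto

    conn = boto.connect_ec2()  # credentials come from the environment

    # Path 1: boot a prebaked image; everything is already installed,
    # so the box is useful a couple minutes after it comes up.
    conn.run_instances('ami-prebaked', instance_type='m1.small',
                       key_name='ops-key')

    # Path 2: boot a bare OS image and configure it at runtime via
    # user-data; every package fetch and install along the way is one
    # more failure point (and more minutes of spinup).
    bootstrap = "#!/bin/sh\npuppet agent --test\n"
    conn.run_instances('ami-bare-os', instance_type='m1.small',
                       key_name='ops-key', user_data=bootstrap)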

In the end, it’s a question of using intermediate deliverables.  Do you recompile all the code and every third party package every time you build a server?  No, you often use binaries – it’s faster and more reliable.  Binaries are the app guys’ equivalent of “images.”

To address Luke’s three concerns from his article specifically:

  1. Image sprawl – if you use images, you eventually have a large library of images you have to manage.  This is very true – but you have to manage a lot of artifacts all up and down the chain anyway.  Given the “manual install” and “vendor supplied image” scenarios noted above, if you can’t manage images as part of your CM system then it’s just not a complete CM system.
  2. Updating your images – Here, I think Luke makes some not entirely valid assumptions.  He notes that once you’re done building your images, you’re still going to have to make changes in the operational environment (“bootstrapping”).  True.  But he thinks you’re not going to use the same tool to do it.  I’m not sure why not – our approach is to use automated tooling to build the images – you don’t *want* to do it manually, for sure – and Puppet/Chef/etc. work just fine for that.  So if you have to update something at the OS level, you do that and let your CM system blow everything on top – and then burn the image (see the sketch after this list).  Image creation and automated CM aren’t mutually exclusive – the only reason people don’t use automation to build their images is the same reason they don’t always use automation on their live servers: “it takes work.”  But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it’s a good conservation of work to use the same package for both.  (Besides bootstrapping, there’s other stuff, like moving content, that shouldn’t go on images.)
  3. Image state vs running state – This one puzzles me.  With images, you do need to do restarts to pull in image-based changes.  But with virtually all software and app changes you have to as well – maybe not a “reboot,” but a “service restart,” which is virtually as disruptive.  Whether you “reboot your database server” or “stop and start your database server, which still takes a couple minutes,” you are planning for downtime or have redundancy in place.  And in general you need to orchestrate the changes (rolling restarts, etc.) in a manner that “oh, pull that change whenever you want to, Mr. Application Server” doesn’t really work for.
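Here’s a minimal sketch of the bake flow from point 2, assuming the script runs on a “builder” instance with the CM code checked out and boto credentials available; the manifest path, instance ID, and image name are hypothetical.

    # Sketch of "let CM blow everything on top, then burn the image."
    # The manifest path and naming are hypothetical.
    import subprocess
    import boto

    def bake_image(builder_instance_id, image_name):
        # Apply the same Puppet code the live servers use (no hand edits).
        subprocess.check_call(['puppet', 'apply', 'manifests/site.pp'])
        # Then snapshot the configured builder box into a reusable image.
        conn = boto.connect_ec2()
        return conn.create_image(builder_instance_id, image_name,
                                 description='baked by CM, not by hand')

The design point is that the image is an output of your CM run, so rebuilding it after an OS-level change is one function call, not an afternoon of hand-installs.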

In closing, I think images are useful.  You shouldn’t treat them as a replacement for automated CM – they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they’re more like Russian nesting dolls, and have advantages over starting from scratch with every box.


Agile Operations

It’s funny.  When we recently started working on an upgrade of our Intranet social media platform and were trying to figure out how to meld the infrastructure-change-heavy operation with the need for devs, designers, and testers to be able to start working on the system before “three months from now,” we broached the idea of “maybe we should do that in iterations!”  First, get the new wiki up and working.  Then worry about tuning, switching the back end database, etc.  Very basic, but it got me thinking about the problem in terms of “hey, Infrastructure still operates in terms of waterfall, don’t we.”

Then, when Peco and I moved over to NI R&D and started working on cloud-based systems, we quickly realized the need for our infrastructure to be completely programmable – that is, not manually tweaked and controlled, but run in a completely automated fashion.  Also, since we were two systems guys embedded in a large development org that’s using agile, we were heavily pressured to work in iterations along with them.  This was initially a shock – my default project plan has, in traditional fashion, months’ worth of evaluating, installing, and configuring various technology components before anything’s up and running.  But as we began to execute that way, I started to see that no, really, agile is possible for infrastructure work – at least “mostly.”  Technologies like cloud computing help, and there’s still a little more up-front work required than with programming, but you can get most of the way to an agile methodology (and mindset!).

Then at OpsCamp last month, we discovered that there’s been this whole Agile Operations/Automated Infrastructure/devops movement thing already in progress we hadn’t heard about.  I don’t keep in touch with The Blogosphere ™ enough I guess.  Anyway, turns out a bunch of other folks have suddenly come to the exact same conclusion and there’s exciting work going on re: how to make operations agile, automate infrastructure, and meld development and ops work.

So if you also hadn’t been up on this, here’s a roundup of some good related core thoughts on these topics for your reading pleasure!


Enterprise Systems vs. Agility

I was recently reading a good Cameron Purdy post where he talks about his eight theses regarding why startups or students can pull stuff off that large enterprise IT shops can’t.

My summary/trenchant restatement of his points:

  1. Changing existing systems is harder than making a custom-built new one (version 2 is harder)
  2. IT veterans overcomplicate new systems
  3. The complexity of a system increases exponentially the work needed to change it (versions 3 and 4 are way way harder)
  4. Students/startups do fail a lot, you just don’t see those
  5. Risk management steps add friction
  6. Organizational overhead (paperwork/meetings) adds friction
  7. Only overconservative goons work in enterprise IT anyway
  8. The larger the org, the more conflict

Though I suspect #1 and #3 are the same, #2 and #5 are the same, and #6 and #8 are the same, really.

I’ve been thinking about this lately with my change from our enterprise IT Web site to a new greenfield cloud-hosted SaaS product in our R&D organization.  It’s definitely a huge breath of fresh air to be able to move fast.  My observations:

Complexity

The problem of systems complexity (theses #1 and #3) is a very real one.  I used to describe our Web site as having reached “system gridlock.”  There were hundreds of apps running dozens to a server with poorly documented dependencies on all kinds of stuff.  You would go in and find something that looked “wrong” – an Apache config, script, load balancer rule, whatever – but if you touched it some house of cards somewhere would come tumbling down.  Since every app developer was allowed to design their own app in its own tightly coupled way, we had to implement draconian change control and release processes in an attempt to stem the tide of people lining up to crash the Web site.

We have a new system design philosophy for our new gig, which I refer to as “sharing is the devil.”  All components are separated and loosely coupled.  Using cloud computing for hardware and open source for software makes it easy and affordable to have a box that does “only one thing.”  In traditional compute environments there’s pressure to “use up all that CPU before you add more,” which results in a penny-wise, pound-foolish strategy of consolidation.  More and more apps and functions get crunched closer together, and when you go back to pull them out you discover that all kinds of new connections and dependencies have formed unbidden.
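As a sketch of what “sharing is the devil” looks like when it drives provisioning, here’s the shape of a role map where every role gets its own box(es); the roles and the provision() helper are hypothetical stand-ins for real cloud/CM tooling.

    # Sketch only: each role gets its own instance(s); nothing doubles up.
    ROLES = {
        'app-server': 2,   # stateless, so scale it horizontally
        'database':   1,
        'search':     1,
        'cache':      1,
    }

    def provision(role, count):
        # Stand-in for whatever cloud/CM tooling actually launches boxes.
        for n in range(1, count + 1):
            print('launch %s-%02d on its own instance' % (role, n))

    for role, count in sorted(ROLES.items()):
        provision(role, count)

When a role needs to be pulled out or scaled, it’s a one-line change to the map instead of an archaeology project on a shared server.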

Complication

Overcomplicating systems (#2 and #5) can be somewhat overcome by using agile principles.  We’ve been delving heavily into doing not just our apps but also our infrastructure according to an agile methodology.  It surfaces your requirements – frankly, systems people often get away with implementing whatever they want, without having a spec let alone one open to review.  Also, it makes you prioritize.  “Whatever you can get done in this two week iteration, that’s what you’ll have done, and it should be working.”  It forces focus on what is required to get things to work and delays more complex niceties till later as there’s time.

Conservatism

Both small and large organizations can suffer from #6 and #8.  That’s mostly a mindset issue.  I like to tell the story about how we were working on a high level joint IT/business vision for our Web site.  We identified a number of “pillars” of the strategy we were developing – performance, availability, TCO, etc.  I had identified agility as one, but one of the application directors just wasn’t buying into it.  “Agility, that’s weird, how do we measure that, we should just forget about it.”  I finally had to take all the things we had to the business head of the Web and say “of these, which would you say is the single most important one?”  “Agility, of course,” he said, as I knew he would.  I made it a point to train my staff that “getting it done” was the most important thing, more important than risk mitigation or crossing all the t’s and dotting all the i’s.  That can be difficult if the larger organization doesn’t reward risk and achievement over conservatism, but you can work on it.


OpsCamp Debrief

I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focused on the cloud, and it was a great time!  Here’s my after-action report.

The event invite said it was in the Spider House, a cool local coffee bar/normal bar.  I hadn’t been there before, but other people who had been there said, “That’s insane!  They’ll never fit that many people!  There’s outside seating, but it’s freezing out!”  That gave me some degree of trepidation, but I still racked out in time to get downtown by 8 AM on a Saturday (sigh!).  Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free!  There were a lot of people there; we weren’t overfilling the place, but it was definitely at capacity, with nearly 100 people in attendance.

I had just heard of OpsCamp through word of mouth and figured it was just going to be a gathering of local Austin Web ops types.  Which would be entertaining enough, certainly.  But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows – CEOs and other high-ranking guys from various Web ops related tool companies.  Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies from Reductive Labs (creator of Puppet), Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud.  Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).

You can read all the tweets about the event if you swing that way.

OpsCamp kinda grew out of an earlier thing, BarCampESM, also in Austin two years ago.  I never heard about that, wish I had.

How It Went

I had never been to an “unconference” before.  Basically there’s no set agenda; it’s self-emergent.  It worked pretty well.  I’ll describe the process a bit for other noobs.

First, there was a round of lightning talks.  Brett from Rackspace noted that “size matters,” Bill from Zenoss said “monitoring is important,” and Luke from Reductive claimed that “in 2-4 years ‘cloud’ won’t be a big deal, it’ll just be how people are doing things – unless you’re a jackass.”

Then it was time for sessions.  People got up, wrote a proposed session name on a piece of paper, and pitched it in front of the group; a hand-count of “how many people find this interesting” was taken.

Candidates included:

  • service level to resolution
  • physical access to your cloud assets
  • autodiscovery of systems
  • decompose monitoring into tool chain
  • tool chain for automatic provisioning
  • monitoring from the cloud
  • monitoring in the cloud – widely dispersed components
  • agent based monitoring evolution
  • devops is the debil – change to the role of sysadmins
  • And more

We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions.  They were:

  • monitoring in the cloud
  • config mgmt in the cloud

This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.

Sadly, the whole-group discussions, especially the monitoring one, were unfruitful.  For a long-ass time people threw out brilliant quips about “Why would you bother monitoring a server anyway” and other such high-theory wonkery.  I got zero value out of these, which was sad because the topics were crucially interesting – they were just too unfocused; you had people coming at the problem 100 different ways in sound bites.  The only note I bothered to write down was that “monitoring porn” (too many metrics) makes it hard to do correlation.  We had that problem here, and invested in a (horrors) non-open-source tool, Opnet Panorama, that has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.
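As a toy illustration of why “monitoring porn” defeats eyeball correlation, here’s a sketch that computes pairwise correlation across metric series and surfaces only the pairs that move together; the metric names and values are invented, and a real engine like Panorama obviously does far more than this.

    # Toy sketch: find metric pairs that move together. Names and values
    # are invented for illustration.
    from itertools import combinations
    from math import sqrt

    def pearson(xs, ys):
        n = float(len(xs))
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    metrics = {
        'web.response_ms': [120, 130, 400, 125, 118],
        'db.cpu_pct':      [30, 33, 95, 31, 29],
        'mail.queue_len':  [5, 4, 6, 5, 5],
    }

    # With tens of thousands of series you can't eyeball this; a
    # correlation pass narrows the haystack to pairs worth graphing.
    for (a, xs), (b, ys) in combinations(metrics.items(), 2):
        r = pearson(xs, ys)
        if abs(r) > 0.9:
            print('%s <-> %s  r=%.2f' % (a, b, r))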

Sessions

There were three sessions.  I didn’t take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛
