They’re Taking Our Jobs!

Gartner has released a report with its 2011 IT predictions, and one of the things they say is that all this DevOps-style automation (they don’t use the word) will certainly lead to job cuts: “By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services.”

That’s the same old “any technical innovation will take all our jobs” argument. Outside of factory line work, it hasn’t held true – despite no end of technical innovations over the last 30 years, demand for IT labor has done nothing but increase hugely over time.

Heck, one of the real impediments to DevOps is that most larger shops are so massively underinvested in ops that there’s no way for ops teams to meaningfully collaborate with devs on projects. With 100 devs and 50 live projects sharing a five-person ops team, how can ops bring value at anything but a very generic level? I see automation as a necessary step to minimize busywork, so that ops can engage more successfully with the dev teams and bring their expertise to individual efforts.

They act like there are shops out there employing 100 mostly unskilled guys who just wander around moving files all day, and who, were it not for the need to kickstart Linux boxes, would be selling oranges on the roadside. That’s not the case anywhere I’ve ever been.

Did we need fewer programmers as we moved from assembly to C to Java because of a resulting reduction in labor hours? Hell no. Maybe one day, decades from now, IT will be a zero-growth industry and we’ll have to worry about efficiency innovations cutting jobs. But that time certainly isn’t now, and I’d generally expect Gartner to be in touch with the industry enough to understand that.


Automated Testing Basics

Everyone thinks they know how to test… most people don’t. And most people think they know what automated testing is – it’s just that they never have time for it.

Into this gap steps an interesting white paper, The Automated Testing Handbook, from Software Test Professionals, an online SW test community. You have to register for a free community account to download it, but I did, and it’s a great 100-page conceptual introduction to automated testing. We’re trying to figure out testing right now – it’s for our first SaaS product, so that’s a challenge, and because of the devops angle we’re also trying to figure out what “unit test” even means for an OS build or a Tomcat server before any apps get deployed to it. So I was interested in reading a basic theory paper to give me some grounding; I like doing that when engaging with a new field rather than just hopping from specific to specific.
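As a concrete (and entirely hypothetical) sketch of what a “unit test” for an OS build or a bare Tomcat server could look like, here’s the kind of thing I have in mind, using the pytest-testinfra plugin – the hostname, package, and service names (web01, tomcat9) are placeholders of mine, not anything from the paper or our environment:

```python
# test_tomcat_node.py -- run with: pytest --hosts=ssh://web01 test_tomcat_node.py
# Requires the pytest-testinfra plugin, which injects the `host` fixture.

def test_tomcat_package_installed(host):
    # The OS build should have pulled in the app server package.
    assert host.package("tomcat9").is_installed

def test_tomcat_service_running(host):
    svc = host.service("tomcat9")
    assert svc.is_running
    assert svc.is_enabled

def test_tomcat_listening(host):
    # No apps deployed yet -- we only assert the platform itself is healthy.
    assert host.socket("tcp://0.0.0.0:8080").is_listening
```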

Some takeaways from the paper:

  • Automated testing isn’t just capture/replay, that’s unmaintainable.
  • Automated testing isn’t writing a program to test a program, that’s unscalable.
  • What you want is a framework to automate the process around the testing. Some specific tests can and should be automated; others can’t or shouldn’t. A framework helps you with both, and lets you build more reusable tests.

Your tests should be version controlled, change controlled, etc., just like software (testware?).

Great quote from p.47:

A thorough test exercises more than just the application itself: it ultimately tests the entire environment, including all of the supporting hardware and surrounding software.

We are having issues with that right now in our larval SaaS implementation – our traditional software R&D folks have historically been able to run their own narrow-scope tests and say “we’re done.” Now that we’re running this as a big system in the cloud, we need to run integration tests that include the real target environment, and we’re trying to convince them that’s something worth spending effort on.
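For contrast with a narrow-scope unit test, an integration smoke test can be as simple as hitting the deployed stack over the network, so one assertion exercises DNS, the load balancer, the app server, and everything behind it at once. A minimal sketch – the URL and the contents of the health page are placeholder assumptions, not our actual setup:

```python
# smoke_test.py -- runs against the real deployed environment, not a mock.
import urllib.request

def test_health_endpoint():
    with urllib.request.urlopen("https://staging.example.com/health",
                                timeout=10) as resp:
        assert resp.status == 200
        # Assumes the app exposes a simple health page that reports "ok".
        assert "ok" in resp.read().decode().lower()
```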

They go on to cover the major test automation approaches – capture/replay, data-driven, and table-driven – with their pros and cons and even implementation tips.

  1. Capture/replay is basically recording your manual test so you can run it again a lot, which makes it helpful for mature, stable apps and also for load testing.
  2. Data-driven just means capture/replay with varied inputs. It’s a little more complicated, but much better than plain capture/replay, where you either don’t adequately test the various input scenarios or you end up with a billion recorded scripts.
  3. Table-driven is where the actions themselves are defined as data too – one step short of hardcoding a test app – but (assuming your tool can work on multiple apps) portable across applications; see the sketch below.

I guess writing pure code to test is an unspoken #4 – unspoken because the author doesn’t think it’s a very good option.
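To make the table-driven idea concrete, here’s a minimal sketch of my own (not from the handbook): the test script is pure data – it could just as well live in a CSV maintained by non-programmers – and a tiny interpreter maps each action keyword onto code. The toy key/value “app” is just a stand-in:

```python
# A table-driven test runner in miniature.
ACTIONS = {}

def action(name):
    """Register a function as the implementation of a table keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

store = {}  # the toy "application under test"

@action("set")
def do_set(key, value):
    store[key] = value

@action("check")
def do_check(key, expected):
    actual = store.get(key)
    assert actual == expected, f"{key}: got {actual!r}, wanted {expected!r}"

# The test itself is just rows of (action, arg1, arg2).
TEST_TABLE = [
    ("set",   "user",  "alice"),
    ("check", "user",  "alice"),
    ("set",   "quota", "10"),
    ("check", "quota", "10"),
]

def run_table(table):
    for step, (act, a, b) in enumerate(table, 1):
        ACTIONS[act](a, b)
        print(f"step {step}: {act} {a} {b} -> ok")

if __name__ == "__main__":
    run_table(TEST_TABLE)
```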

There’s a bunch of other stuff in there too; it’s a great introduction to the hows, whys, and gotchas of test automation. Give it a read!


A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article “Golden Image or Foil Ball?”, I was thinking pretty hard about the use of images in our new automated infrastructure. He’s pretty against them. After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed – Starting a prebuilt image is faster than reinstalling everything on an empty one. In the world of dynamic scaling, there’s a meaningful difference between a couple-minute spinup and a fifteen-minute one.
  2. Reliability – The more work you do at runtime, the more there is to go wrong. I bet I’m not the only person who has run the same compile-and-install on three allegedly identical Linux boxen and had it go wrong somehow on one of ’em. And the more stuff you have to pull in at spinup time to build the box, the more failure points you have.
  3. Flexibility – Dynamically building from a stem cell kinda makes sense if you’re using 100% free open source and have everything automated. But what if you need to install something that just hasn’t been scripted – or is very hard to script? Like some half-baked Windows software that doesn’t have a command-line installer, and you don’t have a tool that can drive it? In that case, you really need to do the manual install in non-realtime, as part of an image build. And of course many suppliers are providing their software as images themselves nowadays.
  4. Traceability – What happens if you need to replicate a past environment? Having the image is a 100% effective solution to that, and likely sufficient even for legal purposes. “I keep a bunch of old software repo versions so I can mostly build a machine like it” – somewhat less so.

In the end, it’s a question of using intermediate deliverables. Do you recompile all the code and every third-party package every time you build a server? No, you usually use binaries – it’s faster and more reliable. Binaries are the app guys’ equivalent of images.

To address Luke’s three concerns from his article specifically:

  1. Image sprawl – If you use images, you eventually have a large library of them to manage. Very true – but you have to manage a lot of artifacts all up and down the chain anyway. Given the “manual install” and “vendor-supplied image” scenarios noted above, if you can’t manage images as part of your CM system, then it’s just not a complete CM system.
  2. Updating your images – Here I think Luke makes some not-entirely-valid assumptions. He notes that once you’re done building your images, you’re still going to have to make changes in the operational environment (“bootstrapping”). True. But he assumes you’re not going to use the same tool to do it, and I’m not sure why not – our approach is to use automated tooling to build the images (you certainly don’t *want* to do it manually), and Puppet/Chef/etc. work just fine for that. So if you have to update something at the OS level, you make the change, let your CM system lay everything on top, and then burn the image. Image creation and automated CM aren’t mutually exclusive – the only reason people don’t use automation to build their images is the same reason they don’t always use automation on their live servers: it takes work. But since you DO need some amount of dynamic CM for the runtime bootstrap anyway, it’s a good conservation of work to use the same package for both; see the sketch after this list. (Besides bootstrapping, there’s other stuff, like moving content, that shouldn’t go into images.)
  3. Image state vs. running state – This one puzzles me. With images, you do need restarts to pull in image-based changes. But that’s true of virtually all software and app changes anyway – maybe not a “reboot,” but a “service restart,” which is virtually as disruptive. Whether you “reboot your database server” or “stop and start your database server, which still takes a couple minutes,” you are planning for downtime or have redundancy in place. And in general you need to orchestrate changes (rolling restarts, etc.) in a manner that “oh, pull that change whenever you want to, Mr. Application Server” doesn’t really support.
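As a sketch of the “same tool builds the image” workflow from point 2: boot a throwaway instance from the stem-cell image, run your CM on it, burn the result, and discard the builder. This is a hypothetical outline using boto3 against EC2 – the AMI ID, instance type, and the run_cm_over_ssh helper are all placeholders of mine, not a description of any particular tool we use:

```python
# bake_image.py -- bake a configured image instead of configuring at spinup.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def run_cm_over_ssh(instance_id, role):
    # Hypothetical hook: ssh in and run your CM tool (Puppet, Chef, ...)
    # against the builder instance -- site-specific, so elided here.
    pass

def bake_image(base_ami, role):
    # 1. Boot a throwaway builder instance from the stem-cell AMI.
    inst = ec2.run_instances(ImageId=base_ami, InstanceType="t3.micro",
                             MinCount=1, MaxCount=1)["Instances"][0]
    iid = inst["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[iid])

    # 2. Apply the exact same automated CM you use at runtime.
    run_cm_over_ssh(iid, role)

    # 3. Burn the configured instance into a new, versioned image.
    image = ec2.create_image(InstanceId=iid,
                             Name=f"{role}-{time.strftime('%Y%m%d%H%M')}")
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # 4. The image is the intermediate deliverable; discard the builder.
    ec2.terminate_instances(InstanceIds=[iid])
    return image["ImageId"]
```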

In closing, I think images are useful.  You shouldn’t treat them as a replacement for automated CM – they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they’re more like Russian nesting dolls, and have advantages over starting from scratch with every box.


Agile Operations

It’s funny. When we recently started working on an upgrade of our Intranet social media platform, trying to figure out how to meld the infrastructure-change-heavy work with the need for devs, designers, and testers to start working on the system before “three months from now,” we broached the idea of “maybe we should do this in iterations!” First, get the new wiki up and working; then worry about tuning, switching the back-end database, etc. Very basic, but it got me thinking about the problem in terms of “hey, we in Infrastructure still operate in terms of waterfall, don’t we.”

Then, when Peco and I moved over to NI R&D and started working on cloud-based systems, we quickly realized our infrastructure needed to be completely programmable – that is, not manually tweaked and controlled, but run in a completely automated fashion. Also, since we were two systems guys embedded in a large development org that uses agile, we were heavily pressured to work in iterations along with them. This was initially a shock – my default project plan has, in traditional fashion, months’ worth of evaluating, installing, and configuring various technology components before anything’s up and running. But as we began to execute that way, I started to see that, no, really, agile is possible for infrastructure work – at least mostly. Technologies like cloud computing help, and while there’s still a little more up-front work required than with programming, you can get most of the way to an agile methodology (and mindset!).

Then at OpsCamp last month, we discovered there’s been this whole Agile Operations/Automated Infrastructure/devops movement already in progress that we hadn’t heard about. I don’t keep in touch with The Blogosphere™ enough, I guess. Anyway, it turns out a bunch of other folks have come to the exact same conclusion, and there’s exciting work going on around how to make operations agile, automate infrastructure, and meld development and ops work.

So if you also hadn’t been up on this, here’s a roundup of some good core thoughts on these topics, for your reading pleasure!
