Author Archives: Ernest Mueller

Ernest Mueller's avatar

About Ernest Mueller

Ernest is the VP of Engineering at the cloud and DevOps consulting firm Nextira in Austin, TX. More...

Specific DevOps Methodologies?

So given the problem of DevOps being somewhat vague on prescriptions at the moment, let’s look at some options.

One is to simply fit operations into an existing agile process used by the development team.  I read a couple things lately about DevOps using XP (eXtreme Programming) – @mmarschall tweeted this example of someone doing that back in 2001, and recently I saw a great webcast by @LordCope on a large XP/DevOps implementation at a British government agency.

To a degree, that’s what we’ve done here – our shop doesn’t use XP or really even Scrum, more of a custom (and sometimes loosely defined) agile process.  We decided ops would use the same processes and tools as dev – follow their iterations, GreenHopper for agile task management, the same bug tracker (HP-based), the same spec and testing formats and databases.  We do still have vestiges of our old “Systems Development Framework,” a more waterfall-style approach to collaboration and to incorporating systems/ops requirements into development projects.  And should DevOps be tied to agile only, or should it also address what collaboration and automation are possible in a traditional/waterfall shop?  Or do we let those folks eat ITIL?

Others have made a blended process, often Scrum + Lean/kanban hybrids with the Lean/kanban part being brought in from the ops side and merged with more of a love for Scrum from the dev side. Though some folks say Scrum and kanban are more in opposition, others posit a combined “Scrumban.”

Or, there’s “create a brand new approach,” but that doesn’t seem like a happy approach to me.

What have people done that works?  What should be one of the first proposed roadmaps for people who want to do DevOps but want more of something to follow than handwaving about hugs?

7 Comments

Filed under DevOps

They’re Taking Our Jobs!

Gartner has released a report with 2011 IT predictions, and one of the things they say is that all this DevOps (they don’t use the word) automation stuff will certainly lead to job cuts – “By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services.”

That seems like the general “oh, any technical innovation will take all our jobs” argument.  Outside of factory line work, that hasn’t been the case – despite no end of technical innovations over the last 30 years, demand for IT has done nothing but increase hugely over time.

Heck, one of the real impediments to DevOps is that most larger shops are so massively underinvested in ops that there’s no way for ops teams to meaningfully collaborate on projects with devs – 100 devs with 50 live projects working with a 5 person ops team, how can you bring value besides at a very generic level?  I see automation as a necessary step to minimize busywork to allow ops to more successfully engage with the dev teams and bring their expertise to actual individual efforts.

They act like there’s a bunch of shops out there that employ 100 mostly unskilled guys that just wander around and move files around all day, and were it not for the need to kickstart Linux would be selling oranges on the roadside. That’s not the case anywhere I’ve ever been.

Did we need fewer programmers as we moved from assembly to C to Java because of a resulting reduction in labor hours?  Hell no. Maybe one day, decades from now, IT will be a zero growth industry and we’ll have to worry about efficiency innovations cutting jobs.  But that time certainly isn’t now, and generally I would expect Gartner to be in touch with the industry enough to understand that.

4 Comments

Filed under DevOps

salesforce.com – Time To Start Caring?

So this week, salesforce.com (the world’s #4 fastest growing company, says Fortune Magazine) bought the Ruby PaaS shop Heroku, announced Database.com as a cloud database solution, and announced RemedyForce, IT service management from BMC.

That’s quite the hat trick. salesforce.com has been the 900 lb gorilla in the closet for a while now; they’ve been hugely successful and have put a lot of good innovation on their systems but so far force.com, their PaaS solution, has been militantly “for existing CRM customers.” This seems like an indication of preparation to move into the general PaaS market and if they do I think they’ll be a force to reckon with – experience, money, and a proven track record of innovation. NI doesn’t use salesforce.com (“Too expensive” I’m told) so I’ve kept them on the back burner in terms of my attention but I’m guessing in 2011 they will come into the pretty meager PaaS space and really kick some ass.

Because for PaaS – what do we have really? Google App Engine, and in traditional Google fashion they pooped it out, put some minimum functionality on it, and wandered off. (We tried it once, determined that it disallowed half the libraries we needed to use, and gave up.) Microsoft Azure, which is really a hybrid IaaS/PaaS – you don’t get to ignore the virtual server aspect, and you have to do infrastructure work like monitoring and scaling instances yourself. And of course Heroku. And VMware’s Cloud Foundry thing for Java, but VMware is having a bizarrely hard time doing anything right in the cloud – talk about parlaying leadership in a nearby sector unsuccessfully. I have no idea why they’re executing so slowly, but they are.  Even there, Salesforce seems to be doing it better, with VMforce (a Salesforce + VMware collaboration on Java PaaS).

In the end, most of us want PaaS – managing the plumbing is uninteresting, as long as it can manage itself well – but it’s hard, and no one has it nailed yet.  I hope salesforce.com does move into the general-use space; I’d hate for them to be buying up good players and only using them to advance their traditional business.


Leave a comment

Filed under Cloud

My Take On The DevOps State of the Union

DevOps has been a great success in that all the core people engaged with the problem really ‘get’ it and are mainly on the same page (it’s a culture shift toward agile collaboration between Ops and Dev), and they have gotten the word out pretty well.

But there’s one main problem that I’ve come to understand recently – it’s still too high level for the average person to really get into.

At the last Austin Cloud User Group meeting, Chris Hilton from Thoughtworks delivered a good presentation on DevOps. Afterwards I polled the room – “Who has heard about DevOps before today?” 80% of the 30-40 people there raised their hands.  “Nice!” I thought.  “OK, how many of you are practicing DevOps?”  No hands went up – in fact, even the people I brought with me from our clearly DevOps team at NI were hesitant.

Why is that?  Well, in discussing with the group, they all see the value of DevOps, and kinda get it.  They’re not against DevOps, and they want to get there. But they’re not sure *how* to do it because of how vague it is.  When am I doing it?  If I go have lunch with my sysadmin, am I suddenly DevOps?  The problem is, IMO, that we need to push past the top level definition and get to some specific methodologies people can hang their hats on.

Agile development had this same problem.  You can tell from the early complaints about agile back when it was just in the manifesto stage: “Well, that’s just doing what we’ve always done, if you’re doing it right!”

But agile dev pressed past that quickly.  They put out their twelve principles, which served as marching orders people could actually implement. Then they developed more specific methodologies like Scrum that gave people a more comprehensive plan for what to do to be agile.  Yes, the success of those depends on buy-in at that larger, conceptual, cultural level – but just making culture statements is simply projecting wishful thinking.

To get DevOps to spread past the people that have enough experience that they instinctively “get it,” to move it from the architects to the line workers, we need more prescription.  Starting with a twelve principles kind of thing and moving into specific methodologies.  Yes yes, “people over process over tools” – but managers have been telling people “you should collaborate more!” for twenty years.  What is needed is guidance on how to get there.

Agile itself has gotten sufficient definition that if you ask a crowd of developers if they are doing agile, they know if they are or not (or if they’re doing it kinda half-assed and so do the slow half-raise of the hand).  We need to get DevOps to that same place.  I find very few people (and even fewer who are at all informed) that disagree with the goals or results of DevOps – it’s more about confusion about what it is and how someone gets there that causes Joe Dev or Joe Operator to fret.  Aimlessness begets inertia.

You can’t say “we need culture change” and then just kick back and keep saying it and expect people to change their culture.  That’s not the way the world works.  You need specific prescriptive steps. 90% of people are followers, and they’ll do DevOps as happily as they’ve done agile, but they need a map to follow to get there.

3 Comments

Filed under DevOps

DevOps State of the Union

I hope anyone reading this blog is following the great series of essays on Agile Web Operations about DevOps from all the luminaries in the field.

Spliffs and Submarines: The Two Cultures and the State of Devops by John Arundel talks about the collaborative heart of DevOps and how to bridge the different cultures of the dev and ops worlds.

Is DevOps Being Hijacked by Technologists? by Lindsay Holmwood also says it’s about communication and collaboration, and wonders if it’s not getting turned into tool showcase hour.

DevOps: State of the Nation by Chris Read talks about the history of DevOps and reiterates what the other posts are saying – here’s a great quote:

DevOps is at best vaguely-defined and at worst simply a sales placeholder for pitching operations-related products and services. The reason for this is that DevOps is primarily a cultural and organizational shift, rather than a set of practices, tools or techniques.

My working definition is: DevOps is the integration of Agile principles with Operations practices.

What DevOps Is Not by R.I. Pienaar goes after some of the myths, like “it’s what we’ve always done” and “it means getting rid of sysadmins right?”

And there’s more coming!

The thing I think is the most striking about all the essays is how similar they are.  All the main guys are looking at the problem and saying “You know… It’s culture.  It’s about collaboration and communication.  Effectively it’s bringing Ops into the agile team. Automation tools are nice but let’s not get distracted by them.”

Leave a comment

Filed under DevOps

Austin Cloud User Group Nov 17 Meeting Notes

This month’s ACUG meeting was cooler than usual – instead of having one speaker talk on a cloud-related topic, we had multiple group members do short presentations on what they’re actually doing in the cloud.  I love talks like that; they’re where you get real rubber-meets-the-road takeaways.

I thought I’d share my notes on the presentations.  I’ll write up the one we did separately, but I got a lot out of these:

  1. OData to the Cloud, by Craig Vyal from Pervasive Software
  2. Moving your SaaS from Colo to Cloud, by Josh Arnold from Arnold Marziani (previously of PeopleAdmin)
  3. DevOps and the Cloud, by Chris Hilton from Thoughtworks
  4. Moving Software from On Premise to SaaS, by John Mikula from Pervasive Software
  5. The Programmable Infrastructure Environment, by Peco Karayanev and Ernest Mueller from National Instruments (see next post!)

My editorial comments are in italics.  Slides are linked into the headers where available.

OData to the Cloud

OData was started by Microsoft (“but don’t hold that against it”) under the Open Specification Promise.  Craig did an implementation of it at Pervasive.

It’s a RESTful protocol for CRUD-style access to data via GET/POST/PUT/DELETE.  It uses AtomPub-based feeds and returns XML or JSON, and you get both the schema and the data in the result.

You can create an OData producer of a data source, consume OData from places that support it, and view it via stuff like iPhone/Android apps.

Current producers – Sharepoint, SQL Azure, Netflix, eBay, twitpic, Open Gov’t Data Initiative, Stack Overflow

Current consumers – PowerPivot in Excel, Sesame, Tableau.  Libraries for Java (OData4J), .NET 4.0/Silverlight 4, OData SDK for PHP

It is easier for a “business user” to consume than SOAP or plain REST.  Craig used OData4J to create a producer for the Pervasive product.
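
To make that a bit more concrete, here’s a minimal consumer sketch in Python – the feed URL is hypothetical, but $format, $top, and $skip are standard OData query options, and the JSON envelope differs by version (“d”/“results” in OData v2, “value” in later versions), so the sketch checks for both:

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical OData producer endpoint; substitute a real feed URL.
    FEED = "https://example.com/odata.svc/Products"

    def fetch_page(skip, top=10):
        """Fetch one page of an OData collection as JSON using $skip/$top."""
        query = urllib.parse.urlencode({"$format": "json", "$skip": skip, "$top": top})
        with urllib.request.urlopen(FEED + "?" + query) as resp:
            return json.load(resp)

    def fetch_all(page_size=10):
        """Walk the whole collection one page at a time."""
        skip = 0
        while True:
            payload = fetch_page(skip, page_size)
            # OData v2 JSON wraps results in a "d" envelope; later versions use "value".
            rows = payload.get("d", {}).get("results") or payload.get("value") or []
            if not rows:
                break
            yield from rows
            skip += page_size

    if __name__ == "__main__":
        for row in fetch_all():
            print(row)

The paging loop is also relevant to the compression/caching question below – the protocol’s answer to long result lists is to hand them back a section at a time.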

Questions from the crowd:

Compression/caching?  Nothing built in.  Though normal HTTP level compression would work I’d think. It does “page” long lists of results and can send a section of n results at a time.

Auth? Your problem.  Some people use OAuth.  He wrote a custom GlassFish basic HTTP auth portal.

Competition?  Gdata is kinda like this.

Seems to me it’s one part REST, one part “making you have a DTD for your XML”.  Which is good!  We’re very interested in OData for our data centric services coming up.

Moving your SaaS from Colo to Cloud

Josh Arnold used to be at PeopleAdmin and is now a tech recruiter, but he can speak to what they did before he left.  PeopleAdmin ran in a SunGard-type colo setup and had a “rotting” out-of-country DR site.

They were rewriting their stack from Java/MS SQL to Ruby/Linux.

At the time they were spending $15k/mo on the colo (not including the cost of their HW).  The estimated Amazon cost was 1/3 of that, but after moving they found it was really more like 1/2.  What was the surprise cost?  Lower-than-expected performance (disk I/O) forced more instances than they’d had physical boxes of equivalent “size.”

Flexible provisioning and autoscaling were great; the colo couldn’t scale fast enough.  How do you scale?

The cloud made it easy to have an out-of-country DR site and not have it rot and get old.

Question: What did you lose in the move?  We were prepared for mental “control issues” so didn’t have those.  There’s definitely advanced functionality (e.g. with firewalls) and native hardware performance you lose, but that wasn’t much.

They evalled Rackspace and Amazon (cursory eval).  They had some F5s they wanted to use and the ability to mix in real hardware was tempting but they mainly went straight to Amazon.  Drivers were the community around it and their leadership in the space.

Timeline was 2 years (rewrite app, slowly migrate customers).  It’ll be more like 3-4 before it’s done.  There were issues where they were glad they didn’t mass migrate everyone at once.

Technical challenges:

Performance was a little lax (disk performance, they think) and they ended up needing more servers.  They used tricks like RAIDed EBS volumes to try to get the most I/O they could (mainly for the databases).
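
For illustration, here’s roughly what carving out a set of EBS volumes to stripe together looks like – this sketch uses the modern boto3 API (which postdates this post) and placeholder IDs, and the actual RAID assembly still happens on the instance itself:

    import boto3

    # Placeholders – real region, AZ, and instance ID would come from your environment.
    REGION = "us-east-1"
    AZ = "us-east-1a"
    INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical instance
    DEVICES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

    ec2 = boto3.client("ec2", region_name=REGION)

    # Create one EBS volume per device slot...
    volume_ids = []
    for _ in DEVICES:
        vol = ec2.create_volume(AvailabilityZone=AZ, Size=100, VolumeType="gp2")
        volume_ids.append(vol["VolumeId"])

    # ...wait for them to become available, then attach them all to the instance.
    ec2.get_waiter("volume_available").wait(VolumeIds=volume_ids)
    for device, volume_id in zip(DEVICES, volume_ids):
        ec2.attach_volume(VolumeId=volume_id, InstanceId=INSTANCE_ID, Device=device)

    # On the instance itself you'd then stripe the attached devices into one array
    # (e.g. mdadm RAID0) and put the database files on the resulting filesystem.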

Every customer had an SSL cert, and they had 600 of them to mess with.  That was a problem because of the 5 Elastic IP limit.  They went to certs that allow multiple subsidiary domains – DigiCert allowed 100 per cert (other CAs limit you to far fewer), so they could put 100 customers behind each IP and only needed a handful of certs and IPs instead of 600.

The app servers made outbound LDAP connections to customer premises for auth integration, and customers usually allowed those in via IP rules in their corporate firewalls – but on Amazon, outbound IPs are dynamic.  They set up a proxy with a static (Elastic) IP to route all of that through.

Rightscale – they used it.  They like it.

They used nginx for load balancing and SSL termination.  It was a single point of failure, though.

Remember that many of the implementations you are hearing about now were started back before Rackspace had an API, before Amazon had load balancers, etc.

In the discussion about hybrid clouds, the point was brought up that a lot of providers talk about it – GoGrid, OpSource, Rackspace – but often there are gotchas.

DevOps and the Cloud

Chris Hilton from Thoughtworks is all about the DevOps, and works on stuff like continuous deployment for a living.

DevOps is:

  • collaboration between devs and operations staff
  • agile sysadmin, using agile dev tools
  • dev/ops/qa integration to achieve business goals

Why DevOps?

Silos.  Agile dev broke down the wall between dev/QA (and biz).

Devs are usually incentivized for change, and ops are incentivized for stability, which creates an innate conflict.

But if both are incentivized to deliver business value instead…

DevOps Practices

  • version control!
  • automated provisioning and deployment (Puppet/Chef/rPath)
  • self healing
  • monitoring infra and apps
  • identical environments dev/test/prod
  • automated db mgmt

Why DevOps In The Cloud?

Cloud requires automation; DevOps provides automation.

References

  • “Continuous Delivery” by Humble and Farley
  • “Rapid and Reliable Releases” on InfoQ
  • “Refactoring Databases” by Ambler and Sadalage

Another tidbit: they’re writing a Puppet-lite in PowerShell to fill the tool gap – some tool suppliers are starting, but the general degree of tool support for people who use both Windows and Linux is shameful.

Moving Software from On Premise to SaaS

John Mikula of Pervasive tells us about the Pervasive Data Cloud.  They wanted to take their on-premise “Data Integrator” product, basically a command-line tool (costs money, and devs are needed to implement it), to a wider audience.

Started 4 years ago.  They realized that the data sources they’re connecting to and pumping to, like QuickBooks Online, Salesforce, etc., are all SaaS from the get-go.   “Well, let’s make our middle part the same!”

They wrote a Java EE wrapper and put it on Rackspace colo initially.

It gets a customer’s metadata, puts it on a queue, and another system takes it off and processes it.  A very scaling-friendly architecture.  And Rackspace (colo) wasn’t scaling fast enough, so they moved it to Amazon.
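
The pattern is basically a work queue feeding a pool of independent workers; here’s a tiny stand-alone Python sketch of it (not their actual code – they used SQS and then ZooKeeper for the queue, as noted below):

    import queue
    import threading

    jobs = queue.Queue()

    def worker(worker_id):
        """Pull jobs off the queue and process them until told to stop."""
        while True:
            job = jobs.get()
            if job is None:          # sentinel: shut down
                jobs.task_done()
                break
            print("worker %d processing job for customer %s" % (worker_id, job["customer"]))
            jobs.task_done()

    # Scaling up is just a matter of starting more workers (or more worker hosts).
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()

    # A front end would enqueue a customer's integration metadata like this:
    for customer in ("acme", "globex", "initech"):
        jobs.put({"customer": customer, "source": "quickbooks", "target": "salesforce"})

    jobs.join()                      # wait for all queued work to finish
    for _ in threads:
        jobs.put(None)               # stop the workers
    for t in threads:
        t.join()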

Their initial system had 2 glassfish front ends, 25 workers

For queuing, they tried Amazon SQS but it was limited, then went to Apache Zookeeper

The first effort was about “deploy a single app” – namely Salesforce/QuickBooks integration.  Then they made a domain-specific model, refactored, and made an API to manage the domain-specific entities so new apps could be created easily.

Recommended approach – solve easy problems and work from there.  That’s more than enough for people to buy in.

Their core engine’s not designed for multitenancy – they have batches of workers dedicated to one customer’s code – so that code can be unsafe, but it’s in its own bucket and doesn’t mess up anyone else.

Changing internal business processes in a mature company was a challenge – moving from a perpetual-license model to per-month billing, just with accounting and whatnot, was a big, long, hairy deal.

Making the API was rough.  His estimate of a couple months grew to 6.  Requirements gathering was a problem, very iterative.  They weren’t agile enough – they only had one interim release and it wasn’t really usable; if they did it again they’d do the agile ‘right thing’ of putting out usable milestones more frequently to see what worked and what people really needed.

In Closing

Whew!  I found all the presentations really engaging and thank everyone for sharing the nuts and bolts of how they did it!

Leave a comment

Filed under Cloud

Our First Cloud Product Released!

Hey all, I just wanted to take a moment to share with you that our first cloud-based product just went live!  LabVIEW Web UI Builder is National Instruments’ first SaaS application.  It’s actually free to use – go to ni.com and “Try It Now”; all you have to do is make an account.  It’s a freemium model, so you can use it, save your code, run it, etc. all you want; we charge for the “Build & Deploy” functionality that enables you to compile, download, and deploy the bundled app to an embedded device or whatnot.

Essentially it’s a Silverlight app (it can be installed out of browser on your box or just launched off the site) that lets you graphically program test & measurement, control, and simulation types of programs.  You can save your programs to the cloud or locally to your own machine.  The programs can interact via Web services with anything, but in our case it’s especially interesting when they interact with data acquisition devices.  There are some sample programs on that page that show what can be done, though those are definitely tuned to engineers…  We have apps internally that let you play Frogger and Duck Hunt, or do the usual Web mashup kinds of things calling Google Maps APIs.  So feel free to try out some graphical programming!

Cool technology we used to do this:

And it’s 100% DevOps powered.  Our implementation team consists of developers and sysadmins, and we built the whole thing using an agile development methodology.  All our systems are created by model-driven automation from assets and definitions in source control.  We’ll post more about the specifics now that we’ve gotten version 1 done!  (Of course, the next product is just about ready too…)
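
We’ll get into our actual tooling in later posts, but the general shape of “model-driven automation from definitions in source control” is something like this hedged sketch – the file name, schema, and provision_instance() helper are all made up for illustration:

    import json

    def provision_instance(role, size, count):
        """Stand-in for whatever actually builds systems (cloud API, Puppet, etc.)."""
        for n in range(count):
            print("provisioning %s-%d (%s)" % (role, n, size))

    def build_environment(model_path):
        """Read the environment model out of source control and realize it."""
        with open(model_path) as f:
            model = json.load(f)
        for tier in model["tiers"]:
            provision_instance(tier["role"], tier["size"], tier["count"])

    if __name__ == "__main__":
        # environment.json (checked into source control) might look like:
        # {"tiers": [{"role": "web",    "size": "m1.large", "count": 2},
        #            {"role": "worker", "size": "m1.small", "count": 10}]}
        build_environment("environment.json")

The point is less the specific code than the workflow: change the definition file, commit it, and let automation rebuild the environment from it.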

Leave a comment

Filed under Cloud, DevOps

LASCON 2010: Why The Cloud Is More Secure Than Your Existing Systems

Why The Cloud Is More Secure Than Your Existing Systems

Saving the best of LASCON 2010 for last, my final session was the one I gave!  It was on cloud security, and is called “Why The Cloud Is More Secure Than Your Existing Systems.”  A daring title, I know.

You can read the slides (sadly, the animations don’t come through, so some bits may not make sense…).  In general my premise is that people who worry about cloud security need to compare it to what they can actually do themselves.  Mocking a cloud provider’s data center for not being ISO 27001 compliant or for having a two-hour outage only makes sense if YOUR data center IS compliant and if your IT systems’ uptime is actually higher than that.  Too much of the discussion is about the FUD and not the reality.  Security guys have this picture in their mind of a super whizbang secure system and judge the cloud against that, even though the real security in the actual organization they work at is much less.  I illustrate this with ways in which our cloud systems are beating our IT systems in terms of availability, DR, etc.

The cloud can give small to medium businesses – you know, the guys that form 99% of the business landscape – security features that heretofore were reserved for people with huge money and lots of staff.  Used to be, if you couldn’t pay $100k for Fortify, for instance, you just couldn’t do source code security scanning.  “Proper security” therefore has roughly a $1M entry fee, which of course means it’s only for billion dollar companies.  But now, given the cloud providers’ features and new security-as-a-service offerings, more vigorous security is within reach of more people.  And that’s great – building on the messages in previous sessions from Matt’s keynote and Homeland Security’s talk, we need pervasive security for ALL, not just for the biggest.

There’s more great stuff in there, so go check it out.

1 Comment

Filed under Cloud, Conferences, Security

LASCON 2010: HTTPS Can Byte Me

HTTPS Can Byte Me

This paper on the security problems of HTTPS was already presented at Black Hat 2010 by Robert Hansen, aka “RSnake”, of SecTheory and Josh Sokol of our own National Instruments.

This was a very technical talk so I’m not going to try to reproduce it all for you here.  Read the white paper and slides.  But basically there are a lot of things about how the Web works that make HTTPS somewhat defeatable.

First, there are insecure redirects, DNS lookups, etc. before you ever get to a “secure” connection.  But even after that you can do a lot of hacking from traffic characterization – premapping sites, watching “encrypted” traffic and seeing patterns in size, GET vs POST, etc.  A lot of the discussion was around doing things like making a user precache content to remove noisiness via a side channel (like a tab; browsers don’t segment tabs).  Anyway, there’s a lot of middle ground between “You can read all the traffic” and “The traffic is totally obscured to you,” and it’s that middle ground that can be profitable to play in.
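
A toy example of the traffic-characterization idea: even without decrypting anything, if you’ve premapped a site, observed response sizes narrow down which page was fetched.  The pages and sizes here are invented:

    # Premapped site: page -> typical encrypted response size in bytes (invented numbers).
    SITE_MAP = {
        "/login": 4120,
        "/account/summary": 18340,
        "/account/transfer": 9875,
        "/logout": 2050,
    }

    def guess_page(observed_size, tolerance=200):
        """Return the pages whose known size is within `tolerance` bytes of what was seen."""
        return [page for page, size in SITE_MAP.items()
                if abs(size - observed_size) <= tolerance]

    # An observer who sees an ~18.3 KB encrypted response can make a decent guess:
    print(guess_page(18400))   # ['/account/summary']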

Leave a comment

Filed under Conferences, Security

LASCON 2010: Tell Me Your IP And I’ll Tell You Who You Are

Tell Me Your IP And I’ll Tell You Who You Are

Noa Bar-Yosef from Imperva talked about using IP addresses to identify attackers – it’s not as old and busted as you may think.  She argues that it is still useful to apply IP intelligence to security problems.

Industrialized hacking is a $1T business, not to mention competitive hacking/insiders, corporate espionage…  There’s bad people trying to get at you.

“Look at the IP address” has gotten to where it’s not considered useful, due to pooling from ISPs, masquerading, hopping… You certainly can’t use an IP address to prove in court who someone is.

But… 65% of home users’ IPs persist for more than a day, and 15% persist for more than a week.  A lot of folks don’t go through aggregators, and not all hopping matters (the new IP is still in the same general location).  So the new “IP intelligence” consists of gathering info, analyzing it, and using it intelligently.

Inherent info an IP gives you – its type of allocation, ownership, and geolocation.  You can apply reputation-based analytics to them usefully.

Geolocation can give context – you can restrict IPs by location, sure, but it can also provide “why are they hitting that” fraud detection.  Are hits coming from unusual locations, simultaneously from different locations, or from places really different from what the account’s information would indicate?  Maybe you can’t block on them – but you can influence fuzzy decisions.  Flag for analysis. Trigger adaptive authentication or reduced functionality.
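
As a rough sketch of what “influence fuzzy decisions” might look like in code – the lookup_country() function is a stand-in for a real geolocation service or database, and the scoring thresholds are made up:

    def lookup_country(ip):
        """Hypothetical geolocation lookup; a real one would hit a service or local database."""
        fake_db = {"203.0.113.7": "US", "198.51.100.9": "RO"}
        return fake_db.get(ip, "UNKNOWN")

    def login_risk(ip, account_country, last_login_country):
        """Score a login attempt; higher scores trigger extra scrutiny, not an outright block."""
        country = lookup_country(ip)
        score = 0
        if country != account_country:
            score += 2      # far from where the account says it lives
        if country != last_login_country:
            score += 1      # hopping locations between logins
        if country == "UNKNOWN":
            score += 1      # couldn't geolocate at all
        return score

    score = login_risk("198.51.100.9", account_country="US", last_login_country="US")
    if score >= 3:
        print("trigger adaptive authentication / flag for analysis")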

Dynamically allocated addresses aren’t aggregators, and 96% of spam comes from them.

Thwart masquerading – know the relays, blacklist them.  Check accept-language headers, response time, path…  Services provide “naughty” lists of bad IPs – also, whitelists of good guys.  Use realtime blacklist feeds (updated hourly).

Geolocation data can be obtained as a service (Quova) or database (Maxmind). Reputation data is somewhat fragmented by “spammer” or whatnot, and is available from various suppliers (who?)

I had to bail at this point unfortunately…  But in general a sound premise, that intel from IPs is still useful and can be used in a general if not specific sense.

Leave a comment

Filed under Conferences, Security