Monthly Archives: December 2010

The Rise of the Security Industry

In late 2007 Bruce Schneier, the internationally renowned security technologist and author, wrote an article for IEEE Security & Privacy. The ominously titled article, “The Death of the Security Industry,” predicted the future of the security industry, or the lack thereof. In it he predicts that we will come to treat security as merely a utility, much like we use water and power today. The future is one where “large IT departments don’t really want to deal with network security. They want to fly airplanes, produce pharmaceuticals, manage financial accounts, or just focus on their core business.”

Schneier closes with, “[a]s IT fades into the background and becomes just another utility, users will simply expect it to work. The details of how it works won’t matter.”

Looking back three years with the luxury of hindsight, it is easy to see why he thought the security industry would become a utility. In part, it has come true. Utility billing is all the rage for infrastructure (hello, cloud computing) and more and more people view the network as a commodity. Bandwidth has increased in performance and decreased in cost. People are continually outsourcing pieces of their infrastructure and non-critical IT services to vendors or offshore employees.

But there are three reasons why I disagree with The Death of the Security Industry, and I believe we are actually going to see a renaissance of the security industry over the next decade.

1. Data is valuable. We can’t think of IT as merely the computers and network resources we use. We need to put the ‘I’ back in IT and remember why we play this game in the first place: information. Protecting that information (data) will be crucial over the long haul. Organizations do not care about new firewalls or identity management as a primary goal; they do, however, care about their data. Data is king. The organizations that succeed will be the ones that master navigating a new marketplace that values sharing while keeping their competitive edge by safeguarding and protecting their critical data.

2. Security is a timeless profession. When God gave Adam and Eve the boot from the Garden of Eden, what did he do next? He used a security guard to keep them out of the Garden for good. Security has been practiced as long as people have been people. As long as you have something worth protecting (see ‘data is valuable’ in point 1) you will need resources to protect it. Our valuable data is being transferred, accessed, and modified on computing devices, and it will need to be protected. If people can’t trust that their data is safe, then they will not be our customers. The CIA security triad (Confidentiality, Integrity, and Availability) needs to remain intact for consumers to trust organizations with their data, and if that data has any value to the organization, it will need to be protected.

3. Stuxnet. This could be called the dawn of a new age of hacking. Gone are the days of teenagers running port scans from their garages. Be ready to start seeing hackers using sophisticated techniques that simultaneously attack multiple vectors to gain access to their targets. I am not going to spread FUD (Fear, Uncertainty, and Doubt) around, but I believe that Stuxnet is just the beginning.

In addition to how Stuxnet was executed, it is just as interesting to see what was attacked. This next decade will bring a change in the types of targets attacked. In the 80’s it was all about hacking phones and more physical targets, the 90’s were the days of port scanning and Microsoft Windows hacking, and the last decade has primarily focused on web and application data. With Stuxnet, we are seeing hacking return to its roots: targets that are physical in nature, such as the SCADA systems that control a building’s temperature. The magazine 2600 has been publishing a series on SCADA hacking over the last 18 months. What makes it even more interesting is that almost every device you buy these days has a web interface on it, so never fear, the last 10 years spent hacking websites will come in real handy when looking at hacking control systems.

In closing, I think we are a long way off from seeing the death of the security industry. The more valuable our data becomes, the more we will need to secure it. Data is on the rise, and with it comes the need for security. Additionally, as more and more of our world is controlled by computers, the targets become more and more interesting. Be ready for the rise of the security industry.

Let me know what you think on twitter: @wickett

1 Comment

Filed under Security

Quora vs StackExchange

As a Web ops guy, I’ve used the Stack Exchange sites, especially Server Fault, a lot.  They’re the best place to go do technical Q&A without having to go immerse yourself in a specific piece of open source software and determine what bizarre way you’re supposed to get support on it (often a crufty forum, mailing list with bizarre culture, or an IRC channel).

However, they have started to, in my opinion, come apart at the seams. They started with the “holy trinity” of Stack Overflow (coders), Server Fault (admins), and Super User (users). But lately they have expanded their scope to sites for non-techie areas while also starting to fragment the technical areas. So now if I have a question, I am confronted with separate communities for Server Fault, Linux & UNIX, Ubuntu, and more. Or even worse, Stack Overflow vs. “Programmers” vs. language-specific sites. This heavily segments the population and leads to the same problems that the weird little insular mailing lists have. It makes me use the SEs a lot less. I don’t want to have to engage with 10 different communities to answer my everyday questions (and I sure as hell am not going to follow 10 to answer questions), so in my opinion they are cannibalizing their success; it will implode under its own weight and become no better than any Internet forum site.

Recently I started seeing tweets about Quora from @scobleizer, saying stuff like “is it the future of blogging?”; it was being pitched as some twitter-blog hybrid, which of course caused me to ignore it. But then it started getting a lot more activity and I thought I’d go check it out. But if you go to the Quora page, you can’t see anything without logging in. And of course if you log in with Twitter or Facebook it wants “everything from you and your friends, ever.” So I wandered off again.

Finally I gave in and went over and logged in, and it’s actually pretty neat – it’s Q&A, like Stack Exchange, but instead of segmentation into different sub-communities, it uses the typical tag/follow/etc. Web 2.0 paradigm. So “Stack Exchange plus Twitter” is probably the best analogy. Now on the one hand that more unmanaged approach runs the risk of becoming like “Yahoo! Answers” – utter crap, full of unanswered questions and spammers and psychos – but on the other hand, I like my topics not being pushed down into little boxes where you can’t get an answer without mastering the arcane rules of that community (like the hateful Cygwin mailing list, where the majority of new posters are chased off with bizarre acronyms telling them they are using email wrong).  The simple addition of up/down voting is 80% of the value of what SE gives over forums, so will that carry the day?

Now maybe it’s because they’re having capacity problems, but the biggest problem with Quora IMO is that you don’t get to see any of it when you go there until you log in and give them access to all your networks and whatnot, which I find obnoxious. But if they fix that, then I think given the harmful direction SE is going, it may be the next big answer for Q&A.

2 Comments

Filed under General

OPSEC + Agile = Security that works

Recently I have been reading up on OPSEC (operations security). OPSEC, among many things, is a process for securing critical information and reducing risk. The 5 steps in the OPSEC process are as follows:

  1. Identify Critical Information
  2. Analyze the Threat
  3. Analyze the Vulnerabilities
  4. Assess the Risk
  5. Apply the countermeasures

It really isn’t rocket science, but it is the sheer simplicity of the process that is alluring.  It has traditionally been applied in the military and has been used as a meta-discipline in security.  It assumes that other parties are watching, sort of like the aircraft watchers that park near the military base to see what is flying in and out, or the Domino’s near the Pentagon that reportedly sees a spike in deliveries to the Pentagon before a big military strike.  Observers are gathering critical information on your organization in new ways that you weren’t able to predict.  This is where OPSEC comes in.

Since there is no way to predict what data will be leaking from your organization in the future, and it is equally impossible to enumerate all possible future risk scenarios, it becomes necessary to perform this assessment regularly. Instead of using an annual review process with huge overhead and little impact (I am looking at you, Sarbanes-Oxley compliance auditors), you can create a process that continually identifies and lessens risk in an ever-changing organization. This is why you have a security team, right? Lessening the risk to the organization is the main reason to have a security team. Achieving PCI or HIPAA compliance is not.

Using OPSEC as a security process offers huge benefits when aligned with Agile software development principles. The following weekly assessment cycle is promoted by SANS in their security training courses. See if you can spot Agile in it.

The weekly OPSEC assessment cycle:

  1. Identify Critical Information
  2. Assess threats and threat sources including: employees, contractors, competitors, prospects…
  3. Assess vulnerabilities of critical information to the threat
  4. Conduct risk vs. benefit analysis
  5. Implement appropriate countermeasures
  6. Do it again next week.
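The cycle above is simple enough to model directly. As a minimal sketch in Python (all asset names, scores, and the threshold here are hypothetical illustrations, not part of any SANS material), a weekly risk register could look like this:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str            # critical information to protect (step 1)
    threat_level: int    # how actively threat sources target it (steps 2)
    vulnerability: int   # how exposed it is to those threats (step 3)

def risk_score(asset: Asset) -> int:
    """Step 4: a toy risk metric -- threat times vulnerability."""
    return asset.threat_level * asset.vulnerability

def weekly_cycle(assets: list[Asset], threshold: int) -> list[str]:
    """Steps 4-5: rank assets by risk and flag the ones that warrant
    countermeasures this week. Step 6 is simply running this again
    next week with updated inputs."""
    return [a.name
            for a in sorted(assets, key=risk_score, reverse=True)
            if risk_score(a) >= threshold]

assets = [
    Asset("customer database", threat_level=5, vulnerability=4),
    Asset("internal wiki", threat_level=2, vulnerability=3),
    Asset("build scripts", threat_level=1, vulnerability=2),
]
print(weekly_cycle(assets, threshold=6))
# → ['customer database', 'internal wiki']
```

The point is not the arithmetic, which any shop would replace with its own scoring, but that the register is cheap to re-run every week, which is exactly what makes the cycle Agile rather than an annual ritual.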

A weekly OPSEC process is a different paradigm from the annual compliance ritual. The point of security is just that: lessening risk to the organization. Iterating through the OPSEC assessment cycle weekly means that you are taking frequent and concrete steps to facilitate that end.

Leave a comment

Filed under Security

JClouds “State of DevOps”

Hey, looks like I got quoted in Adrian Cole’s new “State of DevOps” presentation.  Some quick thoughts on current state and futures of DevOps from a bunch of important DevOps people and then little old me.

Leave a comment

Filed under DevOps

DevOps State of the Union Part 2

I gave my thoughts on the first bunch of the essays in the great State of DevOps series on Agile Web Operations. And more are coming, so here’s my roundup!

DevOps: Cleans and Shines Without Harsh Scratching by Julian “The Build Doctor” Simpson is a little bit of history, a little bit of prediction, and a little bit of dirty-sounding British phrases like “Discovering jam at the bunfight.”

DevOps: These Soft Parts by John Allspaw reminds us that the core “soft” skills of communication and collaboration lie at the heart of DevOps practice.

The State of DevOps by James Turnbull talks about a number of things that resonate specifically with me.  The three principles that the mainframe guys taught him.  The fact that sure, the core “super ops” people are already doing this but the vast masses aren’t. And that devs and ops both have a lot to learn from one another.

The Implications of Infrastructure as Code is the really meaty one, by Stephen Nelson-Smith. This is one of the best of the batch; it’s full of highly specific best practices derived from combining development with operations using Extreme Programming. Take your time and read this – if you are working on implementing DevOps in your shop, this is the essay you need to be taking notes from to build into your own processes. Bang up job!

Props to Matthias Marschall for putting this series together; it’s really great to hear the different takes on this emerging area!  There’s two more coming, so I’ll be staying tuned.  If you haven’t been following these, please go read them all!

Leave a comment

Filed under DevOps

Web Operations Standing Orders

I was just reading the newest “state of DevOps” post on Agile Web Operations by James Turnbull and he mentions a set of rules some mainframe guys taught him back in the day:

  • Think before you act
  • Change one thing at a time, not ten things at once
  • If you fucked up then own up

This reminded me of the standing orders I had as manager of our Web Ops shop here for many years.  Mine were:

Web Operations Standing Orders

  1. Make it happen
  2. Don’t fuck up
  3. There’s the right way, the wrong way, and the standard way

This was my trenchant way of summing up the proper role of an innovative Web operations person. They were in priority order, like Asimov’s Three Laws of Robotics – #1 is higher priority than #2, for example. I told people, “when in doubt as to how to proceed on a given issue – always refer back to the Standing Orders for guidance.”

First, make it happen. Your job is to be active and to get stuff out the door, and to help the developers accomplish their goals, NOT to be the “Lurker at the Threshold” and block attempts at accomplishing business value.  Sure, we were often in the position of having to convince teams to balance performance, availability, and security against whatever brilliant idea they pooped out that day, but priority #1 was to find ways to make things happen, not find ways to make them not happen – which seems to be the approach of many a sysadmin. In the end, we’re all here to get things done and create maximal business value, and though there is the rare time when ‘don’t do anything’ is the path to that – I would be willing to say that 99% of the time, it’s not.

Second, don’t fuck up.  All of Turnbull’s mainframe-guy points elaborate on this core value.  (I imagine most of the friction between him and them was that they didn’t share Standing Order 1 as a core value.) As a Web operations person, you have a lot of rope to hang everyone with – you control the running system.  If you decide to cowboy something without testing it in dev, that’s bad.  For right or wrong, developers mess up code all the time, but operations folks are expected to be perfect and not make mistakes that affect production. So be perfect. Test, double check, use the electronic equivalent of OSHA “red tags”… Be careful.

And finally, there’s the right way, the wrong way, and the standard way.  (Unstated but implied is “do it the standard way.”) Innovation is great, but any innovation (a “better” or “more right” way) has to be folded back via documentation, automation, etc. to become the standard way. If there’s a documented process on how to build a system, you follow that process to the letter, I don’t care how experienced you are, or if your way is “better” for whatever your personal definition of “better” is.  If the process seems “wrong,” then either a) there’s some subtle reason it’s done that way you need to figure out, or b) congratulations you can improve the standard procedure everyone uses and you’ve bettered the human race.
As a side note, there are explicit standards and implicit standards.  If you go into a directory and see a lot of rolled log files called “file.YYYYMMDD”, and you decide to manually make one, for God’s sake don’t call it “file.MM-DD-YY.”  You never know what automated process may be consuming that, and at the very least you’re pissing someone off later when they do an “ls”.  Have standardization as a core value in your work.
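To make the point concrete, here is a toy Python sketch (the filenames are hypothetical) showing why the file.YYYYMMDD convention matters: lexical order and chronological order coincide, so both an “ls” and any glob-driven automation see the files oldest to newest.

```python
from datetime import date

def rotated_name(base: str, day: date) -> str:
    """Stamp a rolled log file using the file.YYYYMMDD convention."""
    return f"{base}.{day:%Y%m%d}"

# Rotation dates deliberately out of order, spanning a month boundary.
days = [date(2010, 12, 9), date(2010, 11, 30), date(2010, 12, 1)]
names = [rotated_name("access.log", d) for d in days]

# YYYYMMDD sorts lexically in chronological order; an ad-hoc
# access.log.MM-DD-YY name would interleave months and break any
# consumer that assumes sorted order == date order.
print(sorted(names))
# → ['access.log.20101130', 'access.log.20101201', 'access.log.20101209']
```

That invisible consumer that assumes sorted-equals-chronological is exactly the “automated process” the post warns about.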

If I were to sum them up in one word, the three item list basically reduces to three core values:

  1. Be Agile
  2. Be Rugged
  3. Be Standard

But of course it’s not Web Ops if you don’t have some salty language in there. And I fended off attempts over the years to add various options as a #4, mainly from Peco.  “4. Profit” as an homage to the Underpants Gnomes was probably the leading contender.

Leave a comment

Filed under DevOps

Specific DevOps Methodologies?

So given the problem of DevOps being somewhat vague on prescriptions at the moment, let’s look at some options.

One is to simply fit operations into an existing agile process used by the development team.  I read a couple things lately about DevOps using XP (eXtreme Programming) – @mmarschall tweeted this example of someone doing that back in 2001, and recently I saw a great webcast by @LordCope on a large XP/DevOps implementation at a British government agency.

To a degree, that’s what we’ve done here – our shop doesn’t use XP or really even Scrum, more of a custom (sometimes loosely) defined agile process.  We decided for ops we’ll use the same processes and tools – follow their iterations, Greenhopper for agile task management, same bug tracker (based off HP), same spec and testing formats and databases.  Although we do have vestiges of our old “Systems Development Framework,” a more waterfally approach to collaboration and incorporation of systems/ops requirements into development projects.  And should DevOps be tied to agile only, or should it also address what collaboration and automation are possible in a trad/waterfall shop?  Or do we let those folks eat ITIL?

Others have made a blended process, often Scrum + Lean/kanban hybrids with the Lean/kanban part being brought in from the ops side and merged with more of a love for Scrum from the dev side. Though some folks say Scrum and kanban are more in opposition, others posit a combined “Scrumban.”

Or, there’s “create a brand new approach,” but that doesn’t seem like a happy approach to me.

What have people done that works?  What should be one of the first proposed roadmaps for people who want to do DevOps but want more of something to follow than handwaving about hugs?

7 Comments

Filed under DevOps

They’re Taking Our Jobs!

Gartner has released a report with 2011 IT predictions, and one of the things they say is that all this DevOps (they don’t use the word) automation stuff will certainly lead to job cuts – “By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services.”

That seems like the general “oh any technical innovation will take all our jobs” argument.  Except for factory line workers, it hasn’t been the case – despite no end of technical innovations over the last 30 years, demand for IT has done nothing but increase hugely over time.

Heck, one of the real impediments to DevOps is that most larger shops are so massively underinvested in ops that there’s no way for ops teams to meaningfully collaborate on projects with devs – 100 devs with 50 live projects working with a 5 person ops team, how can you bring value besides at a very generic level?  I see automation as a necessary step to minimize busywork to allow ops to more successfully engage with the dev teams and bring their expertise to actual individual efforts.

They act like there’s a bunch of shops out there that employ 100 mostly unskilled guys that just wander around and move files around all day, and were it not for the need to kickstart Linux would be selling oranges on the roadside. That’s not the case anywhere I’ve ever been.

Did we need fewer programmers as we moved from assembly to C to Java because of a resulting reduction in labor hours?  Hell no. Maybe one day, decades from now, IT will be a zero growth industry and we’ll have to worry about efficiency innovations cutting jobs.  But that time certainly isn’t now, and generally I would expect Gartner to be in touch with the industry enough to understand that.

4 Comments

Filed under DevOps

salesforce.com – Time To Start Caring?

So this week, salesforce.com (the world’s #4 fastest growing company, says Fortune Magazine) bought Ruby PaaS shop Heroku, announced database.com as a cloud database solution, and announced remedyForce, IT config management from BMC.

That’s quite the hat trick. salesforce.com has been the 900 lb gorilla in the closet for a while now; they’ve been hugely successful and have put a lot of good innovation on their systems but so far force.com, their PaaS solution, has been militantly “for existing CRM customers.” This seems like an indication of preparation to move into the general PaaS market and if they do I think they’ll be a force to reckon with – experience, money, and a proven track record of innovation. NI doesn’t use salesforce.com (“Too expensive” I’m told) so I’ve kept them on the back burner in terms of my attention but I’m guessing in 2011 they will come into the pretty meager PaaS space and really kick some ass.

Because for PaaS – what do we have really? Google App Engine, and in traditional Google fashion they pooped it out, put some minimum functionality on it, and wandered off. (We tried it once, determined that it disallowed half the libraries we needed to use, and gave up.) Microsoft Azure is really a hybrid IaaS/PaaS: you don’t get to ignore the virtual server aspect, and you have to do infrastructure work like monitoring and scaling instances yourself. And of course Heroku. And VMWare’s Cloud Foundry thing for Java, but VMWare is having a bizarrely hard time doing anything right in the cloud – talk about parlaying leadership in a nearby sector unsuccessfully. I have no idea why they’re executing so slowly, but they are.  Even there Salesforce seems to be doing it better, with VMForce (a salesforce + vmware collaboration on Java PaaS).

In the end, most of us want PaaS – managing the plumbing is uninteresting, as long as it can manage itself well – but it’s hard, and no one has it nailed yet.  I hope salesforce.com does move into the general-use space; I’d hate for them to be buying up good players and only using them to advance their traditional business.

Leave a comment

Filed under Cloud

My Take On The DevOps State of the Union

DevOps has been a great success in that all the core people engaged with the problem really ‘get’ it and are mainly on the same page (it’s a culture shift toward agile collaboration between Ops and Dev), and they have gotten the word out pretty well.

But there’s one main problem that I’ve come to understand recently – it’s still too high level for the average person to really get into.

At the last Austin Cloud User Group meeting, Chris Hilton from Thoughtworks delivered a good presentation on DevOps. Afterwards I polled the room – “Who has heard about DevOps before today?” 80% of the 30-40 people there raised their hands.  “Nice!” I thought.  “OK, how many of you are practicing DevOps?”  No hands went up – in fact, even the people I brought with me from our clearly DevOps team at NI were hesitant.

Why is that?  Well, in discussing with the group, they all see the value of DevOps, and kinda get it.  They’re not against DevOps, and they want to get there. But they’re not sure *how* to do it because of how vague it is.  When am I doing it?  If I go have lunch with my sysadmin, am I suddenly DevOps?  The problem is, IMO, that we need to push past the top level definition and get to some specific methodologies people can hang their hats on.

Agile development had this same problem.  You can tell from the early complaints about agile back when it was just in the manifesto stage: “Well, that’s just doing what we’ve always done, if you’re doing it right!”

But agile dev pressed past that quickly.  They put out their twelve principles, which served as some marching orders for people to actually implement. Then, they developed more specific methodologies like Scrum that gave people a more comprehensive plan as to what to do to be agile.  Yes, the success of those depends on buyin at that larger, conceptual, culture level – but just making culture statements is simply projecting wishful thinking.

To get DevOps to spread past the people that have enough experience that they instinctively “get it,” to move it from the architects to the line workers, we need more prescription.  Starting with a twelve principles kind of thing and moving into specific methodologies.  Yes yes, “people over process over tools” – but managers have been telling people “you should collaborate more!” for twenty years.  What is needed is guidance on how to get there.

Agile itself has gotten sufficient definition that if you ask a crowd of developers if they are doing agile, they know if they are or not (or if they’re doing it kinda half-assed and so do the slow half-raise of the hand).  We need to get DevOps to that same place.  I find very few people (and even fewer who are at all informed) that disagree with the goals or results of DevOps – it’s more about confusion about what it is and how someone gets there that causes Joe Dev or Joe Operator to fret.  Aimlessness begets inertia.

You can’t say “we need culture change” and then just kick back and keep saying it and expect people to change their culture.  That’s not the way the world works.  You need specific prescriptive steps. 90% of people are followers, and they’ll do DevOps as happily as they’ve done agile, but they need a map to follow to get there.

3 Comments

Filed under DevOps