Monthly Archives: December 2010

The Rise of the Security Industry

In late 2007 Bruce Schneier, the internationally renowned security technologist and author, wrote an article for IEEE Security & Privacy. The ominously titled article, “The Death of the Security Industry,” predicted the future of the security industry, or the lack thereof.  In it he predicts that we will treat security as merely a utility, the way we use water and power today.  The future is one where “large IT departments don’t really want to deal with network security. They want to fly airplanes, produce pharmaceuticals, manage financial accounts, or just focus on their core business.”

Schneier closes with, “[a]s IT fades into the background and becomes just another utility, users will simply expect it to work. The details of how it works won’t matter.”

Looking back three years with the luxury of hindsight, it is easy to see why he thought the security industry would become a utility.  In part, it has come true.  Utility billing is all the rage for infrastructure (hello, cloud computing) and more and more people view the network as a commodity.  Bandwidth has increased in performance and decreased in cost.  People are continually outsourcing pieces of their infrastructure and non-critical IT services to vendors or to offshore employees.

But there are three reasons why I disagree with The Death of the Security Industry, and I believe we are actually going to see a renaissance of the security industry over the next decade.

1. Data is valuable. We can’t think of IT as merely the computers and network resources we use.  We need to put the ‘I’ back in IT and remember why we play this game in the first place: information.  Protecting that information (data) will be crucial over the long haul.  Organizations do not care about new firewalls or identity management as a primary goal; they do, however, care about their data.  Data is king.  The organizations that succeed will be the ones that master navigating a new marketplace that values sharing while keeping their competitive edge by safeguarding their critical data.

2. Security is a timeless profession. When God gave Adam and Eve the boot from the Garden of Eden, what did he do next?  He posted a guard to keep them out of the Garden for good.  Security has been practiced for as long as people have been people.  As long as you have something worth protecting (see ‘data is valuable’ in point 1), you will need resources to protect it.  Our valuable data is being transferred, accessed, and modified on computing devices, and it will need to be protected.  If people can’t trust that their data is safe, they will not be our customers.  The CIA triad (Confidentiality, Integrity, and Availability) needs to remain intact for consumers to trust organizations with their data, and if that data has any value to the organization, it will need to be protected.

3. Stuxnet. This could be called the dawn of a new age of hacking.  Gone are the days of teenagers running port scans from their garages. Be ready to see hackers using sophisticated techniques that simultaneously attack multiple vectors to gain access to their targets.  I am not going to spread FUD (Fear, Uncertainty, and Doubt), but I believe that Stuxnet is just the beginning.

In addition to how Stuxnet was executed, it is just as interesting to see what was attacked.  This next decade will bring a change in the type of targets attacked.  In the ’80s it was all about hacking phones and other physical targets; the ’90s were the days of port scanning and Microsoft Windows hacking; the last decade has focused primarily on web and application data.  With Stuxnet, we are seeing hacking return to its roots: targets that are physical in nature, such as the SCADA systems that control a building’s temperature.  The magazine 2600 has been publishing a series on SCADA hacking over the last 18 months.  What makes it even more interesting is that almost every device you buy these days has a web interface on it, so never fear: the last 10 years spent hacking websites will come in real handy when hacking control systems.

In closing, I think we are a long way off from seeing the death of the security industry.  The more valuable our data becomes, the more we will need to secure it.  Data is on the rise, and with it comes the need for security.  Additionally, as more and more of our world is controlled by computers, the targets become more and more interesting.  Be ready for the rise of the security industry.

Let me know what you think on twitter: @wickett


Filed under Security

Quora vs StackExchange

As a Web ops guy, I’ve used the Stack Exchange sites, especially Server Fault, a lot.  They’re the best place to do technical Q&A without having to immerse yourself in a specific piece of open source software and figure out what bizarre way you’re supposed to get support for it (often a crufty forum, a mailing list with a bizarre culture, or an IRC channel).

However, they have started, in my opinion, to come apart at the seams. They started with the “holy trinity” of Stack Overflow (coders), Server Fault (admins), and Super User (users).  But lately they have expanded their scope to sites for non-techie areas while also fragmenting the technical areas.  So now if I have a question, I am confronted with separate communities for Server Fault, Linux & UNIX, Ubuntu, and more. Or even worse, Stack Exchange vs. “Programmers” vs. language-specific sites. This heavily segments the population and leads to the same problems the weird little insular mailing lists have. It makes me use the SEs a lot less. I don’t want to have to engage with 10 different communities to ask my everyday questions (and I sure as hell am not going to follow 10 to answer questions), so in my opinion they are cannibalizing their success; it will implode under its own weight and become no better than any Internet forum site.

Recently I started seeing tweets about Quora from @scobleizer, saying stuff like “is it the future of blogging?”  It was being pitched as some Twitter/blog hybrid, which of course caused me to ignore it.  But then it started getting a lot more activity and I thought I’d go check it out.  But if you go to the Quora page, you can’t see anything without logging in.  And of course if you log in with Twitter or Facebook it wants “everything from you and your friends, ever.”  So I wandered off again.

Finally I gave in and went over and logged in, and it’s actually pretty neat – it’s Q&A, like Stack Exchange, but instead of segmentation into different sub-communities, it uses the typical tag/follow/etc. Web 2.0 paradigm. So “Stack Exchange plus Twitter” is probably the best analogy. Now on the one hand that more unmanaged approach runs the risk of becoming like “Yahoo! Answers” – utter crap, full of unanswered questions and spammers and psychos – but on the other hand, I like my topics not being pushed down into little boxes where you can’t get an answer without mastering the arcane rules of that community (like the hateful Cygwin mailing list, where the majority of new posters are chased off with bizarre acronyms telling them they are using email wrong).  The simple addition of up/down voting is 80% of the value of what SE gives over forums, so will that carry the day?

Now maybe it’s because they’re having capacity problems, but the biggest problem with Quora IMO is that you don’t get to see any of it when you go there until you log in and give them access to all your networks and whatnot, which I find obnoxious. But if they fix that, then I think given the harmful direction SE is going, it may be the next big answer for Q&A.


Filed under General

OPSEC + Agile = Security that works

Recently I have been reading about OPSEC (operations security).  OPSEC, among other things, is a process for securing critical information and reducing risk.  The five steps in the OPSEC process read as follows:

  1. Identify Critical Information
  2. Analyze the Threat
  3. Analyze the Vulnerabilities
  4. Assess the Risk
  5. Apply the countermeasures

It really isn’t rocket science; it is the sheer simplicity of the process that is alluring.  It has traditionally been applied in the military and has been used as a meta-discipline in security.  It assumes that other parties are watching, sort of like the aircraft spotters who park near a military base to see what is flying in and out, or the Domino’s near the Pentagon that reportedly sees a spike in deliveries before a big military strike.  Observers are gathering critical information on your organization in ways you can’t predict.  This is where OPSEC comes in.

Since there is no way to predict what data will leak from your organization in the future, and it is equally impossible to enumerate all possible future risk scenarios, it becomes necessary to perform this assessment regularly.  Instead of an annual review process with huge overhead and little impact (I am looking at you, Sarbanes-Oxley compliance auditors), you can create a process that continually identifies risks in an ever-changing organization.  This is why you have a security team, right?  Lessening risk to the organization is the main reason to have a security team.  Achieving PCI or HIPAA compliance is not.

Using OPSEC as a security process offers huge benefits when aligned with Agile software development principles.  The following weekly assessment cycle is promoted by SANS in their security training courses.  See if you can spot the Agile in it.

The weekly OPSEC assessment cycle:

  1. Identify Critical Information
  2. Assess threats and threat sources including: employees, contractors, competitors, prospects…
  3. Assess vulnerabilities of critical information to the threat
  4. Conduct risk vs. benefit analysis
  5. Implement appropriate countermeasures
  6. Do it again next week.
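The weekly cycle above maps naturally onto a short script. Here is a minimal sketch in Python; every name in it (`Finding`, `weekly_opsec_cycle`, the 1–25 scoring scale) is my own hypothetical illustration, not part of any SANS material:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str           # critical information identified in step 1
    threat: str          # threat source from step 2
    vulnerability: str   # weakness from step 3
    risk: int            # assessed risk, 1 (low) to 25 (high)
    benefit_of_fix: int  # benefit of the countermeasure, same scale

def weekly_opsec_cycle(findings, risk_threshold=10):
    """Steps 4-5 of the cycle: risk-vs-benefit analysis, then pick
    the countermeasures worth implementing this week."""
    actionable = [
        f for f in findings
        if f.risk >= risk_threshold and f.benefit_of_fix >= f.risk
    ]
    # Highest risk first, so the team tackles the worst items first.
    return sorted(actionable, key=lambda f: f.risk, reverse=True)

findings = [
    Finding("customer DB", "competitor", "weak VPN creds", 20, 22),
    Finding("lobby signage", "visitor", "shoulder surfing", 3, 2),
]
for f in weekly_opsec_cycle(findings):
    print(f"MITIGATE: {f.asset} ({f.threat} via {f.vulnerability})")
```

Step 6 is just scheduling: run the same analysis again next week against a fresh list of findings.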

A weekly OPSEC process is a different paradigm from the annual compliance ritual.  The key to security is just that: lessening risk to the organization.  Iterating through the OPSEC assessment cycle weekly means that you are taking frequent, concrete steps toward that end.


Filed under Security

JClouds “State of DevOps”

Hey, looks like I got quoted in Adrian Cole’s new “State of DevOps” presentation.  Some quick thoughts on current state and futures of DevOps from a bunch of important DevOps people and then little old me.


Filed under DevOps

DevOps State of the Union Part 2

I gave my thoughts on the first bunch of the essays in the great State of DevOps series on Agile Web Operations. And more are coming, so here’s my roundup!

DevOps: Cleans and Shines Without Harsh Scratching by Julian “The Build Doctor” Simpson is a little bit of history, a little bit of prediction, and a little bit of dirty-sounding British phrases like “Discovering jam at the bunfight.”

DevOps: These Soft Parts by John Allspaw reminds us that the core “soft” skills of communication and collaboration lie at the heart of DevOps practice.

The State of DevOps by James Turnbull talks about a number of things that resonate specifically with me.  The three principles that the mainframe guys taught him.  The fact that sure, the core “super ops” people are already doing this but the vast masses aren’t. And that devs and ops both have a lot to learn from one another.

The Implications of Infrastructure as Code, by Stephen Nelson-Smith, is the really meaty one. It is one of the best of the batch: full of highly specific best practices derived from combining development with operations using Extreme Programming.  Take your time and read it. If you are working on implementing DevOps in your shop, this is the essay to take notes from and build into your own processes.  Bang-up job!

Props to Matthias Marschall for putting this series together; it’s really great to hear the different takes on this emerging area!  There are two more coming, so I’ll be staying tuned.  If you haven’t been following these, please go read them all!



Filed under DevOps

Web Operations Standing Orders

I was just reading the newest “state of DevOps” post on Agile Web Operations by James Turnbull and he mentions a set of rules some mainframe guys taught him back in the day:

  • Think before you act
  • Change one thing at a time not ten things at once
  • If you fucked up then own up

This reminded me of the standing orders I had as manager of our Web Ops shop here for many years.  Mine were:

Web Operations Standing Orders

  1. Make it happen
  2. Don’t fuck up
  3. There’s the right way, the wrong way, and the standard way

This was my trenchant way of summing up the proper role of an innovative Web operations person. They were in priority order, like Asimov’s Three Laws of Robotics – #1 is higher priority than #2, for example. I told people, “When in doubt as to how to proceed on a given issue, always refer back to the Standing Orders for guidance.”

First, make it happen. Your job is to be agile and to get stuff out the door, and to help the developers accomplish their goals, NOT to be the “Lurker at the Threshold” and block attempts at accomplishing business value.  Sure, we were often in the position of having to convince teams to balance performance, availability, and security against whatever brilliant idea they pooped out that day, but priority #1 was to find ways to make things happen, not find ways to make them not happen – which seems to be the approach of many a sysadmin. In the end, we’re all here to get things done and create maximal business value, and though there is the rare time when ‘don’t do anything’ is the path to that – I would be willing to say that 99% of the time, it’s not.

Second, don’t fuck up.  All of Turnbull’s mainframe-guy points elaborate on this core value.  (I imagine most of the friction between him and them was that they didn’t share Standing Order 1 as a core value.) As a Web operations person, you have a lot of rope to hang everyone with – you control the running system.  If you decide to cowboy something without testing it in dev, that’s bad.  Right or wrong, developers mess up code all the time, but operations folks are expected to be perfect and not make mistakes that affect production. So be perfect. Test, double-check, use the electronic equivalent of OSHA ‘red tags’… Be careful.

And finally, there’s the right way, the wrong way, and the standard way.  Innovation is great, but any innovation has to be folded back, via documentation, automation, etc., to become the standard way. If there’s a documented process for building a system, you follow that process to the letter; I don’t care how experienced you are, or if your way is “better” for whatever your personal definition of “better” is.  If the process seems “wrong,” then either a) there’s some subtle reason it’s done that way that you need to figure out, or b) congratulations, you can improve the standard procedure everyone uses and you’ve bettered the human race. As a side note, there are explicit standards and implicit standards.  If you go into a directory and see a lot of rolled log files called “file.YYYYMMDD” and you decide to manually make one, for God’s sake don’t call it “file.MM-DD-YY.”  You never know what automated process may be consuming that, or at the very least you’re pissing someone off later when they do an “ls”.  Have standardization as a core value in your work.
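The log-naming trap is worth making concrete: retention and cleanup jobs typically match on the standard pattern, so a hand-rolled name silently escapes them forever. A hypothetical sketch in Python; the file name, pattern, and helper functions are my own illustrations, not from any real tooling:

```python
import re
from datetime import date

# The shop standard from the example above: file.YYYYMMDD
STANDARD = re.compile(r"^access\.log\.\d{8}$")

def rotated_name(base="access.log", on=None):
    """Build a rotated log name that follows the standard."""
    on = on or date.today()
    return f"{base}.{on.strftime('%Y%m%d')}"

def escapes_cleanup(filenames):
    """Files that don't match the standard pattern: any automated
    retention job globbing on it will never touch these."""
    return [f for f in filenames if not STANDARD.match(f)]

names = ["access.log.20101201", "access.log.12-02-10"]  # second is hand-rolled
print(escapes_cleanup(names))  # the nonstandard name slips through
```

The point isn’t this particular pattern; it’s that following the existing convention keeps you inside whatever automation already depends on it.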

If I were to sum them up, the three-item list basically reduces to three core values:

  1. Be Agile
  2. Be Rugged
  3. Be Standard

But of course it’s not Web Ops if you don’t have some salty language in there. And I fended off attempts over the years to add a #4, mainly from Peco.  “4. Profit” as an homage to the Underpants Gnomes was probably the leading contender.


Filed under DevOps

Specific DevOps Methodologies?

So given the problem of DevOps being somewhat vague on prescriptions at the moment, let’s look at some options.

One is to simply fit operations into an existing agile process used by the development team.  I read a couple things lately about DevOps using XP (eXtreme Programming) – @mmarschall tweeted this example of someone doing that back in 2001, and recently I saw a great webcast by @LordCope on a large XP/DevOps implementation at a British government agency.

To a degree, that’s what we’ve done here – our shop doesn’t use XP or really even Scrum, more of a custom (sometimes loosely) defined agile process.  We decided for ops we’ll use the same processes and tools – follow their iterations, Greenhopper for agile task management, same bug tracker (based off HP), same spec and testing formats and databases.  Although we do have vestiges of our old “Systems Development Framework,” a more waterfally approach to collaboration and incorporation of systems/ops requirements into development projects.  And should DevOps be tied to agile only, or should it also address what collaboration and automation are possible in a trad/waterfall shop?  Or do we let those folks eat ITIL?

Others have made a blended process, often Scrum + Lean/kanban hybrids with the Lean/kanban part being brought in from the ops side and merged with more of a love for Scrum from the dev side. Though some folks say Scrum and kanban are more in opposition, others posit a combined “Scrumban.”

Or, there’s “create a brand new approach,” but that doesn’t seem like a happy approach to me.

What have people done that works?  What should be one of the first proposed roadmaps for people who want to do DevOps but want more of something to follow than handwaving about hugs?


Filed under DevOps