Tag Archives: appsec

AppSec USA 2012 Is Here (in Austin)!

AppSec USA 2012, the big OWASP security convention, is here in Austin this year!  And the agile admin’s own @wickett is coordinating it.

“Why do I care if I’m not a security wonk,” you ask? Well, guess what, the security world is waking up and smelling the coffee – this isn’t like security conventions were even just a couple years ago.  There’s a Cloud track and a Rugged DevOps track.

We have like 20 people going from Bazaarvoice. It’s two days, Thursday and Friday (yes, tomorrow – I don’t know why James didn’t post this earlier, sorry) and just $500. So it’s cheap and low impact.

And who’s speaking?  Well, how about Douglas Crockford, inventor of JSON?  And Gene Kim, author of Visible Ops?  That’s not the usual infosec crowd, is it?  Also Michael Howard from Microsoft, Josh Corman from Akamai, a trio of Twitter engineers, Nick Galbreath (formerly of Etsy), Jason Chan from Netflix, Brendan Eich from Mozilla… This is a star-studded techie event that you want to be at!

I’ll be there and will report in…


Filed under Conferences, DevOps, Security

Security and the Rise (and Fall?) of DevOps

As I’ve been involved with DevOps and its approach of blending development and operations staff together to create better products, I’ve started to see similar trends develop in the security space. I think there’s some informative parallels where both can learn from each other and perhaps avoid some pitfalls.

Here’s a recent article entitled “Agile: Most security guys are useless” that states the problem succinctly. In successful and agile orgs, the predominant mindset is that if you’re not touching the product, you are semi-useless overhead. And there’s some truth to that. When people are segregated into other “service” orgs – like operations or security – the us vs. them mindset predominates and strangles innovation in its crib.

The main initial drive of agile was to break down that wall between the devs and the “business”, but walls remain that need similar breaking down. With DevOps, operations organizations faced with this same problem are innovating new approaches; a collaborative approach with developers and operations staff working together on the product as part of the same team. It’s working great for those who are trying it, from the big Web shops like Facebook to the enterprise guys like us here at NI. The movement is gathering steam and it seems clear to those of us doing it this way that it’s going to be a successful and disruptive pattern for adopters.

But let’s not pat ourselves on the back too much just yet. We still have a lot of opportunity to screw it up. Let’s review an example from another area.

In the security world, there is a whole organization, OWASP (the Open Web Application Security Project) whose goal is to promote and enable application security. Security people and developers, working together!  Dev+Sec already exists! Or so the plan was.

However, recently there have been some “shots across the bow” in the OWASP community.  Read Security People vs Developers and especially OWASP: Has It Reached A Tipping Point? The latter is by Mark Curphey, who started OWASP. He basically says OWASP is becoming irrelevant because it’s leaving developers behind. It’s becoming about “security professionals” selling tools, and there are few developers to be found in the community any more.

And this is absolutely true.  We host the Austin OWASP chapter here at NI’s Austin campus, and two of the officers are NI employees. We make sure to invite NI developers to come to OWASP. Few do, at least not after the first couple of times.  I asked some of the devs on our team why not, and here are some of the answers I got.

  • I want to leave sessions by saying, “I need to think about this the next time I code”. I leave sessions by saying, “that was cool, I can talk about this at a happy hour”. If I could do the former, I’d probably attend most/all the sessions.
  • A lot of the sessions don’t seem business focused and it is hard to relate. Demos are nicer; but a lot of times they are so specific (for example: specific OS + specific Java Version + specific javascript library = hackable) that it’s not actionable.
  • “Security people” think “developers” don’t know what they are doing and don’t care about security. Which to developers is offensive. We like to write secure applications; sometimes we just find the bugs too late….
  • I’ve gone to, I think, 4 OWASP meetings.  Of those, I probably would only have recommended one of them to others – Michael Howard’s.  I think it helped that he was a well-known speaker and seemed to have a developer focus. So, well-known speakers, with a compelling and relevant subject.  Even then, the time has to be weighed against other priorities.  For example, today’s meeting sounds interesting, but not particularly relevant.  I’ll probably skip it.

In the end, the content at these meetings is more for security pros like pen testers, or for tool buyers in security or sysadmin groups. “How do I code more securely” is the alleged point of the group but frankly 90% of the activity is around scanners and crackers and all kinds of stuff that is fine but should be simple testing steps after the code’s written securely in the first place.

As a result there have been interesting ideas coming from the security community that are reminiscent of DevOps concepts. Pen tester @atdre did a talk here to the Austin OWASP chapter about how security testers engaging with agile teams “from the outside” are failing, and shouldn’t we instead embed them on the team as their “security buddy.” (I love that term.  Security buddy. I hate my “compliance auditor” without even meeting the poor bastard, but I like my security buddy already.) At the OWASP convention LASCON, Matt Tesauro delivered a great keynote similarly trying to refocus the group back on the core problem of developing secure software; in fact, they’re co-sponsoring a movement called “Rugged” that has a manifesto similar to the Agile Manifesto but is focused on security, availability, reliability, et cetera. (As a result it’s of interest to us sysadmin types, who are often saddled with somehow applying those attributes in production to someone else’s code…)

The DevOps community is already running the risk of “leaving the devs behind” too.  I love all my buddies at Opscode and DTO and Puppet Labs and Thoughtworks and all. But a lot of DevOps discussions have started to be completely sysadmin focused as well; a litany of tools you can use for provisioning or monitoring or CI. And that wouldn’t be so bad if there was a real entry point for developers – “Here’s how you as a developer interact with chef to deploy your code,” “Here’s how you make your code monitorable”. But those are often fringe discussions around the core content which often mainly warms the cockles of a UNIX sysadmin’s heart. Why do any of my devs want to see a presentation on how to install Puppet?  Well, that’s what they got at a recent Austin Cloud User Group meeting.

As a result, my devs have stopped coming to DevOps events.  When I ask them why, I get answers similar to the ones above for why they’re not attending OWASP events any more. They’re just not hearing anything that is actionable from the developer point of view. It’s not worth the two hours of their valuable time to come to something that’s not at all targeted at them.

And that’s eventually going to scuttle DevOps if we let it happen, just as it’ll scuttle OWASP if it continues there. The core value of agile is PEOPLE over processes and tools, COLLABORATION over negotiation. If you are leaving the collaboration behind and just focusing on tools, you will eventually fail, just in a more spectacular and automated fashion.

The focus at DevOpsDays US 2010 was great, it was all about culture, nothing about tools. But that culture talk hasn’t driven down to anything more actionable, so tools are just rising up to fill the gap.

In my talk at that DevOpsDays I likened these new tools and techniques to the introduction of the Minié ball to rifles during the Civil War. In that war, armies adopted the new weapons but retained their same old tactics, walking up close in lines designed for weapons with much shorter ranges and much lower accuracy – and the slaughter was profound.

All our new DevOps tools are great, but in the same way, if we don’t adapt our way of thinking to them, they will make our lives worse, not better, for all their vaunted efficiency. You can do the wrong thing en masse and more quickly. The slaughter will similarly be profound.

A sysadmin suddenly deciding to code his own tools isn’t really the heart of DevOps.  It’s fine and good, and I like seeing more tools created by domain experts. But the heart of DevOps, where you will really see the benefits in hard ROI, is developers and operations folks collaborating on real end-consumer products.

If you are doing anything DevOpsey, please think about “Why would a developer care about this?” How is it actionable to them, how does it make their lives easier?  I’m a sysadmin primarily, so I love stuff that makes my job easier, but I’ve learned over the years that when I can get our devs to leverage something, that’s when it really takes off and gives value.

The same thing applies to the people on the security side.  Why do we have this huge set of tools and techniques, of OWASP Top 10s and Live CDs and Metasploits and about a thousand wonderful little gadgets, but code is pretty much as shitty and insecure as it was 20 years ago? Because all those things try to solve the problem from the outside, instead of targeting the core of the matter, which is developers developing secure code in the first place.  And to do that, it’s more of a hearts-and-minds problem than a tools-and-processes problem.

That’s a core realization that operations folks, and security folks, and testing folks, and probably a bunch of other folks need to realize, deeply internalize, and let it change the way they look at the world and how they conduct their work.


Filed under DevOps, Security

OPSEC + Agile = Security that works

Recently I have been reading up on OPSEC (operations security).  OPSEC, among many things, is a process for securing critical information and reducing risk.  The 5 steps in the OPSEC process read as follows:

  1. Identify Critical Information
  2. Analyze the Threat
  3. Analyze the Vulnerabilities
  4. Assess the Risk
  5. Apply the countermeasures

It really isn’t rocket science, but it is the sheer simplicity of the process that is alluring.  It has traditionally been applied in the military and has been used as a meta-discipline in security.  It assumes that other parties are watching, sort of like the aircraft watchers that park near the military base to see what is flying in and out, or the Domino’s near the Pentagon that reportedly sees a spike in deliveries to the Pentagon before a big military strike.  Observers are gathering critical information on your organization in new ways that you weren’t able to predict.  This is where OPSEC comes in.

Since there is no way to predict what data will be leaking from your organization in the future, and it is equally impossible to enumerate all possible future risk scenarios, it becomes necessary to perform this assessment regularly.  Instead of using an annual review process with huge overhead and little impact (I am looking at you, Sarbanes-Oxley compliance auditors), you can create a process to continually identify risks in an ever-changing organization while lessening risk.  This is why you have a security team, right?  Lessening the risk to the organization is the main reason to have a security team.  Achieving PCI or HIPAA compliance is not.

Using OPSEC as a security process offers huge benefits when aligned with Agile software development principles.  The following weekly assessment cycle is promoted by SANS in their security training course.  See if you can spot Agile in it.

The weekly OPSEC assessment cycle:

  1. Identify Critical Information
  2. Assess threats and threat sources including: employees, contractors, competitors, prospects…
  3. Assess vulnerabilities of critical information to the threat
  4. Conduct risk vs. benefit analysis
  5. Implement appropriate countermeasures
  6. Do it again next week.

A weekly OPSEC process is a different paradigm from the annual compliance ritual.  The key of security is just that: lessening risk to the organization.  Iterating through the OPSEC assessment cycle weekly means that you are taking frequent and concrete steps to facilitate that end.
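The weekly cycle above can be sketched as a simple data-driven loop. This is a minimal illustration, not a real risk framework – the asset names, threat sources, and scoring scales are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str            # critical information (step 1)
    threat: str           # threat source (step 2)
    vulnerability: str    # how it could leak (step 3)
    likelihood: int       # 1-5 scale
    impact: int           # 1-5 scale
    mitigation_cost: int  # rough effort score, same scale

    @property
    def risk(self) -> int:
        # Step 4: a crude risk score for the risk-vs-benefit analysis
        return self.likelihood * self.impact

def weekly_opsec_review(findings, risk_threshold=9):
    """One pass of the cycle: return findings worth countermeasures (step 5),
    highest risk first. Run it again next week (step 6)."""
    actionable = [f for f in findings
                  if f.risk >= risk_threshold and f.risk > f.mitigation_cost]
    return sorted(actionable, key=lambda f: f.risk, reverse=True)

findings = [
    Finding("Q3 release date", "competitor", "public build-server banner", 4, 3, 2),
    Finding("VPN endpoints", "industrialized hackers", "stale DNS records", 2, 2, 3),
]
for f in weekly_opsec_review(findings):
    print(f.asset, f.risk)
```

The point isn’t the arithmetic; it’s that the findings list is cheap to re-evaluate every week as threats and assets change, instead of once a year.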


Filed under Security

LASCON 2010: Why The Cloud Is More Secure Than Your Existing Systems


Saving the best of LASCON 2010 for last, my final session was the one I gave!  It was on cloud security, and is called “Why The Cloud Is More Secure Than Your Existing Systems.”  A daring title, I know.

You can read the slides (sadly, the animations don’t come through, so some bits may not make sense…).  In general my premise is that people who worry about cloud security need to compare it to what they can actually do themselves.  Mocking a cloud provider’s data center for not being ISO 27001 compliant or having a two hour outage only makes sense if YOUR data center IS compliant and if your IT systems’ uptime is actually higher than that.  Too much of the discussion is about the FUD and not the reality.  Security guys have this picture in their mind of a super whizbang secure system and judge the cloud against that, even though the real security in the actual organization they work at is much less.  I illustrate this with ways in which our cloud systems are beating our IT systems in terms of availability, DR, etc.

The cloud can give small to medium businesses – you know, the guys that form 99% of the business landscape – security features that heretofore were reserved for people with huge money and lots of staff.  Used to be, if you couldn’t pay $100k for Fortify, for instance, you just couldn’t do source code security scanning.  “Proper security” therefore has an entry fee of about $1M, which of course means it’s only for billion dollar companies.  But now, given the cloud providers’ features, and new security-as-a-service offerings, more vigorous security is within reach of more people.  And that’s great – building on the messages in previous sessions from Matt’s keynote and Homeland Security’s talk, we need pervasive security for ALL, not just for the biggest.

There’s more great stuff in there, so go check it out.


Filed under Cloud, Conferences, Security

LASCON 2010: HTTPS Can Byte Me


This paper on the security problems of HTTPS was already presented at Black Hat 2010 by Robert Hansen, aka “RSnake”, of SecTheory and Josh Sokol of our own National Instruments.

This was a very technical talk so I’m not going to try to reproduce it all for you here.  Read the white paper and slides.  But basically there are a lot of things about how the Web works that makes HTTPS somewhat defeatable.

First, there are insecure redirects, DNS lookups, etc. before you ever get to a “secure” connection.  But even after that you can do a lot of hacking from traffic characterization – premapping sites, watching “encrypted” traffic and seeing patterns in size, GET vs POST, etc.  A lot of the discussion was around doing things like making a user precache content to remove noisiness via a side channel (like a tab; browsers don’t segment tabs).  Anyway, there’s a lot of middle ground between “You can read all the traffic” and “The traffic is totally obscured to you,” and it’s in that middle ground that an attacker can profitably play.
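To make the traffic-characterization idea concrete, here’s a toy sketch of size-based page fingerprinting: even when payloads are encrypted, response sizes can leak which page was fetched. The page paths and byte counts here are entirely made up for illustration:

```python
# Hypothetical fingerprint map built by "premapping" a target site:
# crawl it yourself, record the size of each encrypted response.
PAGE_FINGERPRINTS = {
    "/login": 1840,
    "/account/settings": 7312,
    "/admin/users": 15210,
}

def guess_page(observed_size: int, tolerance: int = 64):
    """Guess which page an observed encrypted response was, by matching
    its size against the fingerprint map. Returns None if nothing is
    within the tolerance (padding and compression add noise)."""
    best, best_delta = None, tolerance + 1
    for page, size in PAGE_FINGERPRINTS.items():
        delta = abs(size - observed_size)
        if delta < best_delta:
            best, best_delta = page, delta
    return best if best_delta <= tolerance else None
```

Real attacks of this kind also use request counts, timing, and GET/POST patterns, but the principle is the same: encryption hides content, not shape.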


Filed under Conferences, Security

LASCON 2010: Tell Me Your IP And I’ll Tell You Who You Are


Noa Bar-Yosef from Imperva talked about using IP addresses to identify attackers – it’s not as old and busted as you may think.  She argues that it is still useful to apply IP intelligence to security problems.

Industrialized hacking is a $1T business, not to mention competitive hacking/insiders, corporate espionage…  There’s bad people trying to get at you.

“Look at the IP address” has gotten to where it’s not considered useful, due to pooling from ISPs, masquerading, hopping… You certainly can’t use them to prove in court who someone is.

But… 65% of home users’ IPs persist for more than a day, and 15% persist for more than a week.  A lot of folks don’t go through aggregators, and not all hopping matters (the new IP is still in the same general location).  So the new “IP Intelligence” consists of gathering info, analyzing it, and using it intelligently.

Inherent info an IP gives you – its type of allocation, ownership, and geolocation.  You can apply reputation-based analytics to them usefully.

Geolocation can give context – you can restrict IPs by location, sure, but it can also provide “why are they hitting that” fraud detection.  Are hits coming from unusual locations, simultaneously from different locations, or from places really different from what the account’s information would indicate?  Maybe you can’t block on them – but you can influence fuzzy decisions.  Flag for analysis. Trigger adaptive authentication or reduced functionality.

Dynamically allocated addresses aren’t aggregators, and 96% of spam comes from them.

Thwart masquerading – know the relays, blacklist them.  Check accept-language headers, response time, path…  Services provide “naughty” lists of bad IPs – also, whitelists of good guys.  Use realtime blacklist feeds (updated hourly).

Geolocation data can be obtained as a service (Quova) or database (Maxmind). Reputation data is somewhat fragmented by “spammer” or whatnot, and is available from various suppliers (who?)
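The “flag, don’t block” approach from the talk can be sketched in a few lines. This is a minimal illustration: the lookup table stands in for a real geo-IP service (Quova, MaxMind, and the like), and the IPs and flag names are hypothetical:

```python
# Stand-in for a geolocation database/service lookup (IP -> country code).
# The addresses are from documentation ranges, chosen for illustration.
GEO_DB = {
    "203.0.113.7": "US",
    "198.51.100.9": "RO",
}

def flag_login(account_country: str, ip: str, recent_countries: set):
    """Return a list of review flags for a login attempt, rather than a
    hard block - per the talk, geo signals influence fuzzy decisions."""
    flags = []
    country = GEO_DB.get(ip)
    if country is None:
        flags.append("unknown-ip")
    else:
        if country != account_country:
            flags.append("country-mismatch")
        if recent_countries and country not in recent_countries:
            flags.append("new-location")
    return flags
```

Downstream, one flag might just mean “log it,” while two or more might trigger adaptive authentication or reduced functionality, as the talk suggested.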

I had to bail at this point unfortunately…  But in general a sound premise, that intel from IPs is still useful and can be used in a general if not specific sense.


Filed under Conferences, Security

LASCON 2010: Mitigating Business Risks With Application Security


This talk was by Joe Jarzombek, Department of Homeland Security.  Normally I wouldn’t go to a management-track session called something like this; when I looked at the program, this was my third choice out of all three tracks.  But James gave me a heads up that he had talked with Joe at dinner the previous night and he was engaging and knew his stuff, and since there were plenty of other NI’ers there to cover the other sessions, I took a chance – and I wasn’t disappointed!

From a pure “Web guy” standpoint it wasn’t super thrilling, but in my National Instruments hat, where we make hardware and software used to operate large hadron colliders and various other large scale important stuff where you would be very sad if things went awry with it, and by sad I mean “crushed to death,” it was very interesting.

Joe runs the DHS National Cyber Security Division’s new Software Assurance Program.  It’s a government effort to get this damn software secure, because it’s pretty obvious that events on a 9/11 kind of scale are more and more achievable via computer compromise.

They’re attempting to leverage standards and, much like OWASP’s approach with the Web security “Top 10,” they are starting out by pushing on the Top 25 CWE (Common Weakness Enumeration) errors in software.  What about the rest?  Fix those first, then worry about the rest!

Movement towards cloud computing has opened up people’s eyes to trust issues.  The same issues are relevant to every piece of COTS software you get as part of your supply chain!  It requires a profound shift from physical to virtual security.

“We need a rating scheme!”  Like food labels, for software.  They’re thinking about it in conjunction with NIST and OWASP as a way to raise product assurance expectations.

He mentioned that other software areas like embedded and industrial control might have different views on the top 25 and they’re very interested in how to include those.

They’re publishing a bunch of pocket guides to try to make the process accessible.  There’s a focus on supply risk chain management, including services.

Side note – don’t disable compiler warnings!  Even the compiler guys are working with the sec guys.  If you disable compiler warnings you’re on the “willful disregard” side of due diligence.

You need to provide security engineering and risk-based analysis throughout the lifecycle (plan, design, build, deploy) – that generates more resilient software products/systems.

  • Plan – risk assessment
  • Design – security design review
  • Build – app security testing
  • Deploy – SW support, scanning, remediation

They’re trying to incorporate software assurance programs into higher education.

Like Matt, he mentioned the Rugged Software Manifesto.  Hearing this both from “OWASP guy” and “Homeland security guy” convinced me it was something that bore looking into.  I like the focus on “rugged” – it’s more than just being secure, and “security” can seem like an ephemeral concept to untrained developers.  “Rugged” nicely encompasses reliable, secure, resilient…  I like it.

You can do software assurance self assessment they provide on their Web site to get started.

It was interesting, at times it seemed like Government Program Bureaucratese but then he’d pull out stuff like the CWE top 25 and the Rugged Software Manifesto – they really seem to be trying to leverage “real” efforts and help use the pull of Homeland Security’s Cyber Security Division to spread them more widely.


Filed under Conferences, Security