Here’s my LASCON 2016 presentation on Lean Security, explaining how and why to apply Lean Software principles to information security!
James and I have been talking lately about the conjunction of Lean and Security. The InfoSec world is changing rapidly, and just as DevOps has incorporated Lean techniques into the systems world, we feel that security has a lot to gain from doing the same.
We did a 20 minute talk on the subject at RSA, you can check out the slides and/or watch the video:
While we were there we were interviewed by Derek Weeks. Read his blog post with a transcript of the interview, and/or watch the interview video!
We’ll be writing more about it here, but we wanted to get a content dump out to those who want it!
Last week we had a DevOps track branded “CD Summit” at Innotech Austin, run by devops.com, and the agile admins were there!
I did a presentation about the various DevOps transformations I had a leadership role in at National Instruments and Bazaarvoice:
And James Wickett did a presentation on Application Security Epistemology in a Continuous Delivery World:
Jez Humble also spoke, as well as a batch of other folks including Austinite Boyd Hemphill and “our friend from Chicago” JP Morgenthal. Once those slides are all posted I’ll pass the link on to you all!
I’m afraid I only got to one session in the afternoon, but I have some good interviews coming your way in exchange!
User Authentication For Winners!
I didn’t get to attend but I know that Karthik’s talk on writing a user auth system was good, here are the slides. When we were at NI he had to write the login/password/reset system for our product and we were aghast that there was no project out there to use, you just had to roll your own in an area where there are so many lurking security flaws. He talks about his journey and you should read it!
AWS CloudHSM And Why It Can Revolutionize Cloud
Oleg Gryb (@oleggryb), security architect at Intuit, and Todd Cignetti, Sr. Product Manager with AWS Security.
Oleg says: There are commonly held concerns about cloud security – key management, legal liability, data sovereignty and access, unknown security policies and processes…
CloudHSM makes objects in partitions not accessible by the cloud provider. It provides multiple layers of security.
[Ed. What is HSM? I didn’t know and he didn’t say. Here’s what Wikipedia says.]
Luckily, Todd gets up and tells us about the HSM, or Hardware Security Module. It’s a purpose-built appliance designed to protect key material and perform secure cryptographic operations. The SafeNet Luna SA HSM has different roles – appliance administrator, security officer. It’s all super certified, and if tampered with it blows up the keys.
AWS is providing dedicated access to SafeNet Luna SA HSM appliances. They are physically in AWS datacenters and in your VPC. You control the keys; they manage the hardware but they can’t see your goodies. And you do your crypto operations there. Here’s the AWS page on CloudHSM.
They are already integrated with various software and APIs like Java JCA/JCE.
It’s being used to encrypt digital content, DRM, securing financial transactions (root of trust for PKI), db encryption, digital signatures for real estate transactions, mobile payments.
Back to Oleg. With the HSM there are some manual steps you need to do: initialize the HSM, configure a server and generate server-side certs, generate a client cert on each client, and scp the public portion to the server to register it.
Normal client cert generation requires an IP, which in the cloud is lame. You can instead use a generic client name and use the same one on all systems.
You put their LunaProvider.jar in your Java CLASSPATH, add the provider to java.security, and you’re good to go.
Making a Luna HA array is very important of course. If you get two you can group them up.
Suggested architecture – they have to run in a VPC. “You want to put on Internet? Is crazy idea! Never!”
Crypto doesn’t solve your problem, it just moves it to another place. How do you get the secrets onto your instances? When your instance starts, you don’t want those creds in S3 or the AMI…
So at instance bootstrap, send a request to a server in an internal DC with the IP, instance ID, public and local hostnames, reservation ID, instance type… Validate using the API, including instance start time, validate the role, etc., and then pass the credentials back. Check for dupes. This isn’t perfect but what are ya gonna do? You can assign a policy to a role and have an instance profile it uses.
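The validation flow above can be sketched roughly like this. All the names here are hypothetical illustrations of the idea, not LunaMech’s actual code:

```python
# Sketch of the bootstrap handshake described above (names are made up).
# The instance sends its claimed identity facts; the credential server
# cross-checks them against the cloud API before releasing secrets.

import time

def validate_bootstrap_request(request, api_lookup, seen_instance_ids):
    """Return True only if the claimed identity matches the cloud API's view."""
    actual = api_lookup(request["instance_id"])  # e.g. a DescribeInstances call
    if actual is None:
        return False
    # Reject duplicate requests for the same instance ID.
    if request["instance_id"] in seen_instance_ids:
        return False
    # Cross-check every claimed fact against the API's record.
    for field in ("private_ip", "reservation_id", "instance_type", "role"):
        if request.get(field) != actual.get(field):
            return False
    # A freshly launched instance should ask within a narrow window.
    if time.time() - actual["launch_time"] > 300:
        return False
    seen_instance_ids.add(request["instance_id"])
    return True
```

The dupe check and the start-time window are what keep a stolen identity blob from being replayed later from some other box.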
He has written a Python tool to help automate this, you can get it at http://sf.net/p/lunamech.
Everyone shuffles in slowly on the second morning of the con. I spent the pre-keynote hour with other attendees sitting around looking tired and comparing notes on gout symptoms. (PSA: if the ball of your foot starts hurting really bad one day, it’s gout, take a handful of Advil and go to your doctor immediately.)
- Impact Security
You can also see a bunch of great pictures from the event courtesy Catherine Clark!
The keynote this morning is from Robert “RSnake” Hansen, now of White Hat. It’s about blind spots we all have in security. Don’t take this as an attack, be self reflective.
Blindspot #1 – Network & Host Security
Internetworked computers is a very complex system and few of us 100% understand every step and part of it.
How many people do network segregation, have their firewall on an admin network, use something more secure than a default Linux install for their webservers, harden their kernel, log off-host and log beyond standard logs? These are all cheap and useful.
Like STS, it was considered only very narrowly, and the privacy considerations weren’t identified.
Blindspot #2 – Travel and OPSEC
Security used to be more of a game. Now the internet has become militarized. Don’t travel with your laptop. Because – secret reasons I’ll tell you if you ask. (?)
[Ed. Apparently I’m not security 3l33t enough to know what this is about, he really didn’t say.]
Blindspot #3 – Adversaries
You need to be able to see things from “both sides” and know your adversary (personally, ideally). Some of them want to talk! Don’t send them to jail, talk and learn. Yes, you can.
Blindspot #4 – Target Fixation
Vulnerabilities aren’t created equal. Severities vary. DREAD calculations vary widely. Don’t trust a scanner’s DREAD. Gut check but then do it on paper because your gut is often not correct. Often we have “really bad!” vulnerabilities we obsess about that aren’t really that severe.
Download Fierce to do DNS enumeration, do bing IP search, nmap/masscan/unicornscan for open ports.
Blindspot #5 – Compliance vs Security
These aren’t very closely related. Compliance gets you little badges and placated customers. Security actually protects your systems and data. Some people exercise willful negligence when they choose compliance over security. Compliance also pulls spend to areas that don’t help security. Compliance doesn’t care about what hackers do and it doesn’t evolve quickly.
Blindspot #6 – The Consumer
Consumers don’t really understand the most rudimentary basics of how the Internet works and really don’t understand the security risks of anything they do. They’re not bad or stupid but they can’t be expected to make well informed decisions. So don’t make security opt in.
We the security industry are not pro-consumer – we’re pro-business. Therefore we may be the first ones against the wall when the revolution comes. Give them their privacy now.
So pick one, work on it, we’ll be less blind!
Big Data, Little Security?
By Manoj Tripathi from PROS in Houston.
Big Data is still emerging and doesn’t have the mature security controls that older data platforms have.
Big data is a solution to needs for high volume, high velocity, and/or rich variety of data. Often distributed, resilient, and not hardware constrained (but sometimes is).
Hadoop is really a framework, with HDFS, Zookeeper, mapreduce, pig/hive, hbase (or cassandra?). He’ll talk a lot about this framework because it’s so ubiquitous.
NoSQL – Cassandra (eventually consistent, highly available, partition tolerant), MongoDB (consistent, partition tolerant).
Security is an afterthought in Big Data. It can be hard to identify sensitive data (schemaless). He says there’s provenance issues and enhanced insider attacks but I don’t know… Well, if you consider “Big Data” as just large mineable data separate from the actual technology, then sure, aggregate data insights are more valuable to steal… His provenance concern is that data is coming from less secured items like phones/sensors but that’s a bit of a strawman, the data sources for random smaller RDBMSes aren’t all high security either…
Due to the distributed architecture of hadoop etc. there’s a large attack surface. Plus Hadoop has multiple communication protocols, auth mechanisms, endpoint types… Most default settings in Hadoop on all of these are “no security” and you can easily bypass most security mechanisms, spoof, accidentally delete data… Anonymous access, username in URL, no perm checking, service level auth disabled, etc.
Hadoop added Kerberos support, this helps a lot. You can encrypt data in transit, use SSL on the admin dashboards.
But – it’s hard to configure, and enterprises might not like “another” auth infrastructure. It also has preconditions like no root access to some machines and no communication over untrusted networks. And it has a lot of insecure-by-default choices itself (symmetric keys, HTTP SPNEGO has to be turned on in browsers, the Oozie user is a super-user with auth disabled by default). No encryption at rest; Kerberos RPC is unencrypted. Etc, etc, etc.
To Cassandra. Same deal. CLI has no auth by default. Insecure protocols.
NoSQL vulns – injections just like with SQL. Sensitive data is copied to various places, you can add new attributes to column families.
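To make the “injections just like with SQL” point concrete, here’s a hedged illustration of the classic operator-injection pattern against a MongoDB-style query (function names are mine, for illustration):

```python
# If user-supplied JSON is dropped straight into a MongoDB-style query
# document, an attacker can smuggle in query operators instead of a
# plain value - e.g. a password of {"$ne": ""} matches ANY document
# with a non-empty password, bypassing the login check.

def build_login_query_unsafe(username, password):
    # password may arrive as a dict like {"$ne": ""} from a JSON body.
    return {"user": username, "password": password}

def build_login_query_safe(username, password):
    # Coerce inputs to plain strings so operator objects can't sneak in.
    if not isinstance(username, str) or not isinstance(password, str):
        raise ValueError("scalar credentials required")
    return {"user": username, "password": password}
```

The fix is the same shape as parameterized SQL: never let the attacker control the *structure* of the query, only scalar values inside it.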
Practical Steps To Secure It
Cassandra – write your own authorization/authentication plugin. [Ed. Really?] But this has keyspace and column family granularity only. 1.2 has internal auth. Enable node-node and client-node encryption. If you do this, at least it’s not naively vulnerable. Also, use disk-level support for encryption.
Hadoop – basically wait for Project Rhino. Encryption, key mgmt, token-based unified auth, cell-level auth in HBase. Do threat modeling. Eliminate sensitive data, use field-level encryption for sensitive fields, use OS or file-level encryption mechanisms. Basically, run it in a secured environment or you’re in trouble. Apache Knox can enforce a single point of access for auth to Hadoop services but has scalability/reliability issues. Can turn on Kerberos stuff if you have to…
Also, commercial Hadoop/Cassandra distributions have more options.
We move into the afternoon of LASCON. The vendor room was all abuzz, complete with lockpicking village.
Stupid Webappsec Tricks
Zane Lackey, Security Engineer Manager from Etsy (@zanelackey)
Data driven security – look at your data instead of using your presuppositions about how attacks work.
Overwrite common methods but only phone home on interesting payloads.
8477 XSS attempts with mostly alert(), prompt(), confirm() (or multiples thereof). The payloads are mostly what you’d expect, “XSS,” document.cookie, integers (from scanners). Note you can’t match on “document.cookie” because it’ll already be expanded, so look for your domains, unique cookies, etc.
What else detects XSS well? Chrome’s XSS Auditor. It works great – but it defends the user without fixing the XSS.
Server side attempt –
- Scan input for HTML escapes/tag creation.
- If found, set flag to true and create array of hostile input.
- At output time, check flag, see if any hostile input is being output as valid HTML.
- If hostile input is being output, alert!
You need to fail open – stripping will break your app… And it should only take you 20 minutes to push to production, so detect-to-fix is a short path!
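The server-side detection flow above can be sketched like this. This is my illustrative guess at the shape, not Etsy’s actual implementation:

```python
# Minimal sketch of detect-don't-block XSS instrumentation:
# 1) at input time, flag parameters that look like HTML tag creation;
# 2) at output time, alert if any flagged input reached the page as
#    live HTML rather than escaped text. Failing open: we only alert.

import html
import re

TAG_PATTERN = re.compile(r"<\s*\w+")  # crude "looks like tag creation" check

def scan_input(params):
    """Flag any request parameter that could create HTML."""
    hostile = [v for v in params.values() if TAG_PATTERN.search(v)]
    return (len(hostile) > 0, hostile)

def check_output(flagged, hostile_inputs, rendered_page):
    """At output time: return the hostile inputs that survived into the
    response unescaped (these are the ones to alert on)."""
    if not flagged:
        return []
    return [h for h in hostile_inputs
            if h in rendered_page and html.escape(h) != h]
```

If `check_output` returns anything, you page a human and push a fix; you don’t strip or block, because false positives would break the app.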
These are attack chains that can be instrumented. Detection step then exploit step.
Alert on SQL syntax errors showing up in your application today. It’s a bug even if it’s not an exploit.
Watch logs for unique sensitive db table names in requests. Occasional false positives are OK.
A SQL injection exploit response will be huge sized, often larger than is normal, detect that. Whitelist stuff that is supposed to give huge responses.
The more alerts you have in an attack chain the more visibility you have, but false positives happen. But if it’s happening in order down the chain, it’s probably not false.
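A toy version of that chain logic, with made-up table names and thresholds just to show the shape:

```python
# Each signal alone may be a false positive, but all three firing in
# chain order (SQL error -> sensitive-table probe -> oversized response)
# is a strong indicator of a real SQL injection attack in progress.

SENSITIVE_TABLES = {"users", "payment_methods"}   # your unique table names
SIZE_WHITELIST = {"/export/orders"}               # endpoints meant to be huge

def chain_score(events):
    """events: ordered (signal, detail) tuples observed for one client.
    Returns True when the full attack chain fired in order."""
    order = ["sql_error", "table_probe", "huge_response"]
    hits = []
    for signal, detail in events:
        # Whitelist responses that are supposed to be huge.
        if signal == "huge_response" and detail in SIZE_WHITELIST:
            continue
        if signal in order and signal not in hits:
            hits.append(signal)
    return hits == order
```

One oversized response from a report endpoint is noise; an error, then a probe for `users`, then an oversized response from the same client probably isn’t.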
“Temporary” debug stuff is permanent. How do you find this automatically? Access logs.
Map access logs to code paths. Endpoints that don’t get requests are anomalous. Alert off it then go take it out.
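In sketch form (route and log formats here are hypothetical):

```python
# "Map access logs to code paths": routes the app defines that never
# show up in the access logs are anomalous - often forgotten debug
# endpoints - and are candidates for removal.

def unused_endpoints(registered_routes, access_log_paths):
    """Return routes defined in the app that never appear in the logs."""
    hit = set(access_log_paths)
    return sorted(r for r in registered_routes if r not in hit)
```

Run it over a long enough log window that legitimate low-traffic endpoints have had a chance to appear, then alert on the leftovers.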
Cheapest way to find webapp vulns – Automation. Your best attackers are doing it manually anyway, but may as well beat out the kiddies. Break off-the-shelf scanners. They give off strong detection signals. User agents, request patterns, requests for stuff that doesn’t exist (*.asp or php on a Java site, for example).
Blocking IPs is easy but dangerous. You’ll break lots of legit things. IPs are not a strong correlation to identity.
- Classify a request as being from a scanner
- If yes, weight based on confidence
- Feed request into rate limiter (see Nick G’s rate limiting at scale talk) and drop if above threshold. They return a 439 “Request Not Handmade” 🙂
This doesn’t impact browsing, but it does impact scripting. Set your thresholds high; that allows for false positives, but a scanner will definitely peg it.
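The classify-weight-rate-limit pipeline above might look like this. The 439 status is from the talk; the weights, agent strings, and threshold are invented for illustration:

```python
# Score each request's "scanner-ness", accumulate the weighted score in
# a per-client counter, and drop with a 439 once it crosses a high
# threshold. Real browsing barely moves the counter; a scanner pegs it.

SCANNER_AGENTS = ("sqlmap", "nikto", "acunetix")

def scanner_confidence(request):
    """Heuristic confidence (0..1) that this request came from a scanner."""
    score = 0.0
    if any(a in request.get("user_agent", "").lower() for a in SCANNER_AGENTS):
        score += 0.6
    # e.g. .php/.asp requests against a site that serves neither
    if request["path"].endswith((".php", ".asp")):
        score += 0.4
    return min(score, 1.0)

def handle(request, bucket, threshold=50.0):
    """Feed the weighted score into a per-client counter; 439 when pegged."""
    client = request["client"]
    bucket[client] = bucket.get(client, 0.0) + scanner_confidence(request)
    if bucket[client] > threshold:
        return 439  # "Request Not Handmade"
    return 200
```

A real implementation would decay the counter over time (a proper leaky bucket) rather than grow it forever; this just shows the control flow.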
Be ready for the weirdness that is the Internet! Tried auto-banning accounts that do scanning. They saw 437 scanners over the last week and only 10 were authenticated and 5 were false positives. Browser plugins is our guess. So don’t auto-ban.
Attacks don’t always happen like you’d expect. Look at the data before you make decisions. Get the instrumentation you need to make those decisions.
“Run a bug bounty program and the Internet shows up!”
And of course you can then insert false data sets to screw with people and increase the cost of attack.
We don’t run scanners of our own because it’s a time sink and requires manual babysitting. We have taken WAF concepts and build them into the apps; since we deploy 30x/day we don’t need the “coverage in the meanwhile” functionality they provide.
Stalking a City for Fun and Frivolity
By Brendan O’Connor, CTO of Malice Afterthought and law student. About CreepyDOL wifi surveillance. He was wearing a kilt and started out by telling us we’d “lost the mandate of heaven.” Why is this? Well…
Everything leaks too much data. Privacy has been disregarded. Fundamental changes are needed to fix this. We need to democratize security – the government is the worst way to do this.
Especially the case of the US persecuting legitimate security researchers like Weev for doing things like accessing public information on Web sites.
Wireless. Your devices advertise networks they know for all our convenience. His little doodads find your probe list of wifi locations and gps location. Now we need a distributed way of doing this on a large scale with no centralized control. Academic sensor networks are kinda like this, but expensive. Hence, the F-BOMB hardware gizmo.
Raspberry Pi based, 5W, $57.08. Uses connection to municipal wifi to phone home, with automatic portal-clickthrough. Reticle, leaderless command and control software. Uses TOR to go out.
CreepyDOL is distributed computation for distributed systems. You want to digest on the nodes to minimize net traffic. Centralized querying for centralized questions only. Filters include Nosiness, Observation, and Mining. Visualization using Unity (the game engine). Oh look, you can see a map mashup of people wandering around and click on them and find their name and other useful info.
Bottom line is that all these technologies leak info about you like it’s going out of style and it’s pretty simple to get Orwellian levels of visibility on you for one low price.
I missed this in favor of the next talk; I’ve seen about a dozen gauntlt presentations over time since I know James, but here’s the slides! Integrate security into your CI pipeline you freaks!
Penetration Testing: The Other Stuff
David Hughes, OWASP Austin president and Red Team analyst for GM.
This started as being about organizational skills… It’s general tips on making your life as a pen tester easier.
- Clients aren’t always right about their environment and scope creep can happen.
- Don’t assume you’ll have Internet, there’ll be proxies…
- Prep your tools and do updates and test it ahead of time.
- Rehearse your toolchain
- Title your terminals
- Use mind maps (Freemind), outline tools (NoteCase Pro) to organize tools, systems
- HTTP-Screenshot module does screenshots as nmap scans
- Use output options or pipe to a file
- Reporting – keep organized, do it as you go, use ASCIIdoc to take text to pdf
- Do things the easy way – look for low-hanging fruit. Default credentials, bad passwords, cleartext, social engineering, dumpster diving, open wireless. Easy stuff is higher risk and the client cares more about it than esoteric crap.
- Don’t rush recon, look for clues, broken windows
- Have a plan (PTS framework) but range off as needed
- Protect your customer’s data
- Encrypt your stuff
- Have backups
- Learn and use a scripting language
- Don’t rub it in with the client
- Get involved with the community!
And that’s everything but the drinking… Time for happy hour and the mechanical bull!
Here’s some pictures of the volunteers hard at work, the speakers’ green room (there were chair massages there in the afternoon!), and organizer Josh Sokol with Robert “RSnake” Hansen!
Arriving at #LASCON 2013, hosted as usual at the Norris Conference Center, the first thing you see is the vintage video games throughout the lobby! As usual it’s well run and you get your metal badge and other doodads without any folderol; volunteers packed the venue ready to help folks with anything. I got a lovely media badge since I’m on the hook to blog/tweet it up while I’m there! It’s in a nice central location on Anderson Lane so getting there took a lot less time than my normal commute to work did.
- White Hat
- Trustwave/Spider Labs
- Critical Start
- SOS Security
Then everyone stood and raised their right hand to say the “LASCON pledge,” which consists of “I will not hack the Wi-fi,” “I will not social engineer other attendees and the nice Norris Conference Center staff who are hosting us,” and similar.
Then, the keynote!
Keynote- Nick Galbreath, The Origins of Insecurity
Slides at speakerdeck.com/ngalbreath!
If you’re in security, you should be bringing someone else from dev or ops or something here! We can’t get much done by ourselves.
There’s a lot of consternation about crypto and SSL and PKI lately. The math is sound! See FP’s “The NSA’s New Code Breakers” – it’s way easier to get access other ways. I don’t know of any examples of brute forcing SSL keys – it’s attacking data at rest or bypassing it altogether.
But what about the android/bitcoin break and alleged fix re: Java SecureRandom PRNG? I can’t find the fix checked in anywhere. Let’s look at SHA1PRNG. Where’s the spec? You’re forced to use it, where’s the open implementation, tests…
Basically everything went wrong in specification, implementation, testing, review, postmortem… Then there’s NIST’s Dual-EC-DRBG spec – slow and with a potential backdoor – but at least it’s not required by FIPS! It’s broken but not mandatory, and we know it’s broken, so fair enough. It’s a “standard turd.” Standards aren’t a replacement for common sense. It was known turdy in 2007 – why are you just removing it now? TLS 1.2 was approved in 2008; why don’t all browsers support it, and why do no browsers support GCM mode? Old standards need augmentation and updates.
Fixing the CA system – four great ways: certificate pinning, pruning, HTTP Strict-Transport-Security, and certificate-transparency.org.
- Network Security – stuff you didn’t write
- App Security – stuff you did write
- Endpoint Security – stuff you run
IT internal tech is mostly Windows/Mac CM and patching, 99% C-based stuff.
Tech Ops – Routers, Linux, Core server (all C too)
- Input validation – not hard
- Configuration problems
- Logical problems – more interesting
- Language platform problems (most patches here also in C!)
Reactive work is patching, CM, fixing apps, patching infrastructure. You can focus your patching though – Win7 at current patches, Flash, Adobe, Java will get 99% of your problems, focus there – but it’s hard to do. But either you can do it trivially or it’s really hard.
Learn from the hardest apps to deploy. The Chrome model of self updating gets 97% of people within a version in 4-6 weeks. Android, not so good- driven more by throwing out phones than any ability to upgrade. They’re chipping stuff away from the OS and making more into apps to speed it up. Apple/iOS just figured out app auto-update. Desktop lags though. WordPress is starting background updates. BSD is automatically installing security updates at first boot.
Releasing faster and safely is a competitive advantage AND makes you more secure.
For desktop upgrades, can’t we do something with containers? Why only one version installed? How can we find out about problems from users faster? How do we make patching and deployment easy for the dumbest users?
Even info on “How do I configure Apache securely” is wide and random on the Web. Silently breaks all the time, and it’s simple compared to firewalls, ssh, VPN, DNS… Rat’s nests full of crap, while it gets easier and easier to put servers on the internet. How can we make it safe to configure a server and keep it secure?
Can we do this for application development? Ruby’s Brakeman is great; it does static analysis on commit and sends you email about rookie mistakes. Why not for Apache config? (Where did chkconfig go?)
PHP Crypt – great for legacy passwords and horrible for new ones. Approximately 0% chance of a dev getting its configuration right.
See @manicode’s best practices – have a business level API for that.
By default, every language has a non-crypto, insecure PRNG. So people use them. They are used for some science stuff, but seriously if you’re doing physics you’re going to link something else in. Being slightly slower for toy apps that don’t care about security isn’t a big deal. Make the default PRNG secure! And, there’s 100x more people interested in making things fast than making them secure, so make the default language PRNG secure and people will make it faster.
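Python is a handy concrete case of this point. The default `random` module is a fast, non-cryptographic Mersenne Twister, while the stdlib `secrets` module (Python 3.6+) wraps the OS CSPRNG:

```python
import random   # NOT for security: predictable once its state is known
import secrets  # OS-backed CSPRNG: use for tokens, keys, password resets

def insecure_token(n=16):
    # Looks random, but an attacker who observes enough output can
    # reconstruct the Mersenne Twister state and predict future tokens.
    return "".join(random.choice("0123456789abcdef") for _ in range(n))

def secure_token(n=16):
    # n hex characters from n/2 cryptographically random bytes.
    return secrets.token_hex(n // 2)
```

“Secure by default” would mean the plain `random` name did what `secrets` does, and the fast insecure generator was the one you had to opt into.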
libinjection.client9.com to try to eliminate SQL injection! It’s C, fast, low false positives, plug in anywhere.
Products focus on blocking and offense/intrusion, but leave these areas (actual fixing) uncovered. Think globally, act locally. Even if you’re not a dev, most open source doesn’t have a security anything – join in!
Write fuzzers, compile with different flags, etc.
So think big, get involved, bring your friends!
Total discovered malware is growing geometrically year over year. There are a lot of “DIY malware creation kits” nowadays; SpyEye, Zeus… These are more oriented around online crime; the kits of yesteryear were more about pissing contests about “mine is better than yours” (VCL, PS-MPC). The variation they can create is larger as well.
Armoring tools exist now – PFE CX, for example, claims to encrypt, compress, etc. your executable – but the functions don’t always work, and buyers don’t check. Indetectables.net is online and will do it! It was free but now it’s “hidden.”
Use a tool like ExeBundle to bundle up your malware and then share it out via whatever route (file sharing, google play, whatever). Or hacking and overwriting good wares – even those that bother publishing a hash to verify their software often keep it on the same Web site that is already getting hacked to change the executable, so the hash just gets changed too.
So you make your malware with a kit, put it through a crypter, a realtime packer, an EXE binder, and other armoring tools, then run it through QA against on-premise and cloud AV, and you’re ready to go.
Targeted vs opportunistic attacks… Delivery is a lot easier when you can target.
Anyway, many of those new malware samples are really just the same core malware run through a different variety of armoring tools. They’re counted as different malware but should get grouped into families; he’s working on that at RSA now.
Besides the variation in malware, domains serving malware can rotate in minutes. Since the malware can be created so quickly it effectively defeats AV by generating too many unique signatures. Reversing has to be done but it takes weeks/months.
Demo: Creating Malware in 2 Minutes!
ZeuS Builder – bang, bot.exe, one every couple seconds. Unique but not hash-unique at this point. They look different on disk and in memory. Then runs Saw Crypter, in seconds it creates multiple samples from one ZeuS sample. Bang, automated generation of billlllllyuns of armored samples.
There’s really just a handful of kits behind all the malware, need new solutions that go after the tools and do signature-less detection.
From Gates to Guardians: Alternate Approaches to Product Security
Jason Chan, Director of Engineering from Netflix, in charge of security for the streaming product. Here are his slides on Slideshare!
Agile, cloud, continuous delivery, DevOps – traditional security doesn’t adapt well to these. We want to move fast and stay safe at Netflix.
The challenges are speed (rapid change) and scale. To address these…
- Culture – If your culture has moved towards rapid delivery, it’s innovation first. Don’t be “Doctor No” and go against your company culture, you won’t be successful. Adapt.
- Visibility – you need to be able to see what’s going on in a big distributed system.
- Automation – no checklists and spreadsheets
At Netflix we do ~200+ pushes to production a day, 40M subscribers, 1000+ devices supported.
We have a lot of stuff on our site about this, it’s a big differentiator. “Freedom and responsibility” is the summary. No buck passing. Responsible disclosure program externally.
We’re moving towards “full stack engineers” that know some about appsec, online operations, monitoring and response, infrastructure/systems/cloud – that can write some kind of code. The security industry seems to be moving towards superspecialists, we don’t see that as successful.
2 week sprint model, JIRA Scrum workflow (CLDSEC project!). No standups, weekly midsprint meeting. Bullpen shared-space model.
Use their internal security dashboard (VPC, crypto, other services plug in and display their security metrics). Alerts send emails with descriptive subjects, the alert config, instructions/links as to where to check/what to do. Chat integration.
NSA asks, how do you verify software integrity in production? How do you know you’re not backdoored?
They have their Mimir dashboard that is a CI/CD dashboard, that tracks source code to build to deploy to JIRA ticket. Traceability!
Canary testing because code reviews don’t catch much. Deploy a new version and test it (regression, perf, security) and see if it’s OK. Automatic Canary Analyzer gets a confidence level – “99% GO!”
Simian Army does ongoing testing. Go to prod… Then the monkeys test it.
Security Monkey shows config change timestamps of security groups and stuff.
So they have Babou (the ocelot from Archer) that does file integrity monitoring. They use the immutable server pattern so checking is kinda easy, but you still can be running multiple canary versions at the same time so there’s not one “golden master.” This allows multiple baselines.
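The multiple-baselines idea can be sketched like this. This is my guess at the shape of the concept, not Netflix’s actual Babou code:

```python
# With immutable servers plus canaries there isn't one golden master,
# so a host passes file-integrity checking if its fingerprint matches
# ANY known-good baseline (one per deployed version).

import hashlib

def fingerprint(files):
    """files: {path: bytes}. Return a stable digest of the whole set."""
    h = hashlib.sha256()
    for path in sorted(files):          # sort so ordering is deterministic
        h.update(path.encode())
        h.update(hashlib.sha256(files[path]).digest())
    return h.hexdigest()

def is_clean(host_files, baselines):
    """baselines: set of fingerprints, one per deployed version."""
    return fingerprint(host_files) in baselines
```

Because the servers are immutable, any drift at all from every known baseline is suspicious, which makes the check much cheaper than per-file rule tuning on mutable hosts.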
Q: How long did it take to make this change and implement? What were the triggers?
A: This push started when he started in 2011; previously IT security handled product security. He hired his first person last year and now they’re up to 10.
Q: What do you do earlier on in the lifecycle in arch and design (threat modeling etc.)?
A: Can’t be automated, the model here is optionally come engage us (with more aggressiveness for stuff that’s clearly sensitive/SOXey).
Q: So this finds problems but how do people know what to do in the first place, share mistakes cross teams?
A: As things happen, added libraries with training and documentation. But think of it as “libraries.”
Q: Competing with Amazon while renting their hardware? (Laaaaaame, the CEO has talked about this in multiple venues.)
A: AWS is the only real choice. Our CEOs talked.
Next – Lunch! No liveblog of lunch, you foodie voyeurs!
AppSec USA 2012, the big OWASP security convention, is here in Austin this year! And the agile admin’s own @wickett is coordinating it.
“Why do I care if I’m not a security wonk,” you ask? Well, guess what, the security world is waking up and smelling the coffee – this isn’t like security conventions were even just a couple years ago. There’s a Cloud track and a Rugged DevOps track.
We have like 20 people going from Bazaarvoice. It’s two days, Thursday and Friday (yes, tomorrow – I don’t know why James didn’t post this earlier, sorry) and just $500. So it’s cheap and low impact.
And who’s speaking? Well, how about Douglas Crockford, inventor of JSON? And Gene Kim, author of Visible Ops? That’s not the usual infosec crowd, is it? Also Michael Howard from Microsoft, Josh Corman from Akamai, a trio of Twitter engineers, Nick Galbreath (formerly of Etsy), Jason Chan from Netflix, Brendan Eich from Mozilla… This is a star-studded techie event that you want to be at!
I’ll be there and will report in…
As I’ve been involved with DevOps and its approach of blending development and operations staff together to create better products, I’ve started to see similar trends develop in the security space. I think there’s some informative parallels where both can learn from each other and perhaps avoid some pitfalls.
Here’s a recent article entitled “Agile: Most security guys are useless” that states the problem succinctly. In successful and agile orgs, the predominant mindset is that if you’re not touching the product, you are semi-useless overhead. And there’s some truth to that. When people are segregated into other “service” orgs – like operations or security – the us vs. them mindset predominates and strangles innovation in its crib.
The main initial drive of agile was to break down that wall between the devs and the “business”, but walls remain that need similar breaking down. With DevOps, operations organizations faced with this same problem are innovating new approaches; a collaborative approach with developers and operations staff working together on the product as part of the same team. It’s working great for those who are trying it, from the big Web shops like Facebook to the enterprise guys like us here at NI. The movement is gathering steam and it seems clear to those of us doing it this way that it’s going to be a successful and disruptive pattern for adopters.
But let’s not pat ourselves on the back too much just yet. We still have a lot of opportunity to screw it up. Let’s review an example from another area.
In the security world, there is a whole organization, OWASP (the Open Web Application Security Project) whose goal is to promote and enable application security. Security people and developers, working together! Dev+Sec already exists! Or so the plan was.
However, recently there have been some “shots across the bow” in the OWASP community. Read Security People vs Developers and especially OWASP: Has It Reached A Tipping Point? The latter is by Mark Curphey, who started OWASP. He basically says OWASP is becoming irrelevant because it’s leaving developers behind. It’s becoming about “security professionals” selling tools and there’s few developers to be found in the community any more.
And this is absolutely true. We host the Austin OWASP chapter here at NI’s Austin campus, and two of the officers are NI employees. We make sure and invite NI developers to come to OWASP. Few do, at least not after the first couple times. I asked some of the devs on our team why not, and here’s some answers I got.
- I want to leave sessions by saying, “I need to think about this the next time I code”. I leave sessions by saying, “that was cool, I can talk about this at a happy hour”. If I could do the former, I’d probably attend most/all the sessions.
- “Security people” think “developers” don’t know what they are doing and don’t care about security. Which to developers is offensive. We like to write secure applications; sometimes we just find the bugs too late….
- I’ve gone to, I think, 4 OWASP meetings. Of those, I probably would only have recommended one of them to others – Michael Howard’s. I think it helped that he was a well-known speaker and seemed to have a developer focus. So: well-known speakers with a compelling and relevant subject. Even then, the time has to be weighed against other priorities. For example, today’s meeting sounds interesting, but not particularly relevant. I’ll probably skip it.
In the end, the content at these meetings is more for security pros like pen testers, or for tool buyers in security or sysadmin groups. “How do I code more securely” is the alleged point of the group but frankly 90% of the activity is around scanners and crackers and all kinds of stuff that is fine but should be simple testing steps after the code’s written securely in the first place.
In response, there have been interesting ideas coming from the security community that are reminiscent of DevOps concepts. Pen tester @atdre did a talk here to the Austin OWASP chapter about how security testers engaging with agile teams “from the outside” are failing, and shouldn’t we instead embed them on the team as their “security buddy.” (I love that term. Security buddy. I hate my “compliance auditor” without even meeting the poor bastard, but I like my security buddy already.) At the OWASP convention LASCON, Matt Tesauro delivered a great keynote similarly trying to refocus the group back on the core problem of developing secure software; in fact, they’re co-sponsoring a movement called “Rugged” that has a manifesto similar to the Agile Manifesto but is focused on security, availability, reliability, et cetera. (As a result it’s of interest to us sysadmin types, who are often saddled with somehow applying those attributes in production to someone else’s code…)
The DevOps community is already running the risk of “leaving the devs behind” too. I love all my buddies at Opscode and DTO and Puppet Labs and Thoughtworks and all. But a lot of DevOps discussions have started to be completely sysadmin focused as well; a litany of tools you can use for provisioning or monitoring or CI. And that wouldn’t be so bad if there was a real entry point for developers – “Here’s how you as a developer interact with chef to deploy your code,” “Here’s how you make your code monitorable”. But those are often fringe discussions around the core content which often mainly warms the cockles of a UNIX sysadmin’s heart. Why do any of my devs want to see a presentation on how to install Puppet? Well, that’s what they got at a recent Austin Cloud User Group meeting.
As a result, my devs have stopped coming to DevOps events. When I ask them why, I get answers similar to the ones above for why they’re not attending OWASP events any more. They’re just not hearing anything that is actionable from the developer point of view. It’s not worth the two hours of their valuable time to come to something that’s not at all targeted at them.
And that’s eventually going to scuttle DevOps if we let it happen, just as it’ll scuttle OWASP if it continues there. The core value of agile is PEOPLE over processes and tools, COLLABORATION over negotiation. If you are leaving the collaboration behind and just focusing on tools, you will eventually fail, just in a more spectacular and automated fashion.
The focus at DevOpsDays US 2010 was great, it was all about culture, nothing about tools. But that culture talk hasn’t driven down to anything more actionable, so tools are just rising up to fill the gap.
In my talk at that DevOpsDays I likened these new tools and techniques to the introduction of the Minié ball to rifles during the Civil War. In that war, armies adopted the new weapon but retained their same old tactics, walking up close in lines designed for weapons with much shorter range and much lower accuracy – and the slaughter was profound.
All our new DevOps tools are great, but in the same way, if we don’t adapt our way of thinking to them, they will make our lives worse, not better, for all their vaunted efficiency. You can do the wrong thing en masse and more quickly. The slaughter will similarly be profound.
A sysadmin suddenly deciding to code his own tools isn’t really the heart of DevOps. It’s fine and good, and I like seeing more tools created by domain experts. But the heart of DevOps, where you will really see the benefits in hard ROI, is developers and operations folks collaborating on real end-consumer products.
If you are doing anything DevOpsey, please think about “Why would a developer care about this?” How is it actionable to them, how does it make their lives easier? I’m a sysadmin primarily, so I love stuff that makes my job easier, but I’ve learned over the years that when I can get our devs to leverage something, that’s when it really takes off and gives value.
The same thing applies to the people on the security side. Why do we have this huge set of tools and techniques, of OWASP Top 10s and Live CDs and Metasploits and about a thousand wonderful little gadgets, but code is pretty much as shitty and insecure as it was 20 years ago? Because all those things try to solve the problem from the outside, instead of targeting the core of the matter, which is developers developing secure code in the first place. And to do that, it’s more of a hearts-and-minds problem than a tools-and-processes problem.
That’s a core realization that operations folks, and security folks, and testing folks, and probably a bunch of other folks need to realize, deeply internalize, and let it change the way they look at the world and how they conduct their work.
Recently I have been reading up on OPSEC (operations security). OPSEC, among many things, is a process for securing critical information and reducing risk. The 5 steps in the OPSEC process read as follows:
- Identify Critical Information
- Analyze the Threat
- Analyze the Vulnerabilities
- Assess the Risk
- Apply the Countermeasures
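One way to make the five steps concrete is to capture each pass of the process as a lightweight risk-register entry. Here’s a minimal sketch in Python; the class, the 1–5 scoring scale, and the likelihood-times-impact heuristic are my own illustrative assumptions, not part of any OPSEC standard:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One pass of the OPSEC process for a single piece of critical information."""
    critical_info: str                                   # 1. Identify Critical Information
    threats: list = field(default_factory=list)          # 2. Analyze the Threat
    vulnerabilities: list = field(default_factory=list)  # 3. Analyze the Vulnerabilities
    likelihood: int = 0                                  # 4. Assess the Risk (1-5 scale)
    impact: int = 0                                      #    (1-5 scale)
    countermeasures: list = field(default_factory=list)  # 5. Apply the Countermeasures

    @property
    def risk_score(self) -> int:
        # A common heuristic: risk = likelihood x impact
        return self.likelihood * self.impact

entry = RiskEntry(
    critical_info="production deploy schedule",
    threats=["competitors", "opportunistic attackers"],
    vulnerabilities=["schedule posted to a public wiki"],
    likelihood=4,
    impact=3,
    countermeasures=["move schedule behind the firewall"],
)
print(entry.risk_score)  # 12
```

Nothing fancy, but writing it down like this forces you to name the threat and the vulnerability before you reach for a countermeasure, which is the whole point of the process.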
It really isn’t rocket science, but it is the sheer simplicity of the process that is alluring. It has traditionally been applied in the military and has been used as a meta-discipline in security. It assumes that other parties are watching, sort of like the aircraft watchers that park near the military base to see what is flying in and out, or the Domino’s near the Pentagon that reportedly sees a spike in deliveries to the Pentagon before a big military strike. Observers are gathering critical information on your organization in new ways that you weren’t able to predict. This is where OPSEC comes in.
Since there is no way to predict what data will be leaking from your organization in the future, and it is equally impossible to enumerate all possible future risk scenarios, it becomes necessary to perform this assessment regularly. Instead of an annual review process with huge overhead and little impact (I am looking at you, Sarbanes-Oxley compliance auditors), you can create a process that continually identifies and reduces risk in an ever-changing organization. This is why you have a security team, right? Lessening the risk to the organization is the main reason to have a security team. Achieving PCI or HIPAA compliance is not.
Using OPSEC as a security process offers huge benefits when aligned with Agile software development principles. The following weekly assessment cycle is promoted by SANS in their security training course. See if you can spot the Agile in it.
The weekly OPSEC assessment cycle:
- Identify Critical Information
- Assess threats and threat sources including: employees, contractors, competitors, prospects…
- Assess vulnerabilities of critical information to the threat
- Conduct risk vs. benefit analysis
- Implement appropriate countermeasures
- Do it again next week.
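Treating the cycle as a loop you run every week makes the Agile resemblance concrete: assess, prioritize, act, repeat. Here’s a toy sketch of one iteration; the scoring scale, the threshold, and all the function and asset names are hypothetical illustrations, not from the SANS material:

```python
# Toy weekly OPSEC assessment iteration; scores and threshold are illustrative.
def weekly_opsec_cycle(assets, assess_risk, benefit_of_countermeasure, threshold=10):
    """For each critical asset, assess risk and decide whether a countermeasure pays off."""
    actions = []
    for asset in assets:
        risk = assess_risk(asset)                   # steps 1-4: identify, analyze, assess
        benefit = benefit_of_countermeasure(asset)  # risk vs. benefit analysis
        if risk > threshold and benefit > 0:
            actions.append((asset, "implement countermeasure"))  # step 5
        else:
            actions.append((asset, "accept risk this week"))
    return actions  # ...and do it again next week

# Example run with hard-coded assessments:
assets = ["deploy schedule", "customer database", "office wifi password"]
risk_scores = {"deploy schedule": 12, "customer database": 20, "office wifi password": 4}
actions = weekly_opsec_cycle(assets, risk_scores.get, lambda asset: 1)
```

The value isn’t in the code, of course; it’s in the cadence. Any asset whose risk crept up since last week gets caught this week, not at next year’s audit.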
A weekly OPSEC process is a different paradigm from the annual compliance ritual. The key aim of security is just that: lessening risk to the organization. Iterating through the OPSEC assessment cycle weekly means that you are taking frequent and concrete steps toward that end.