Author Archives: wickett

About wickett

James is a leader in the DevOps and InfoSec communities--most of his research and work is at the intersection of these two communities. He is a supporter of the Rugged Software movement and he coined the term Rugged DevOps. Seeing a gap in software testing, James founded an open source project, Gauntlt, to serve as a Rugged Testing Framework. He is the author of the Hands-on Gauntlt book. He got his start in technology when he founded a Web startup as a student at the University of Oklahoma, and since then he has worked in environments ranging from large, web-scale enterprises to small, rapid-growth startups. He is a dynamic speaker on topics in DevOps, InfoSec, cloud security, security testing and Rugged DevOps. James is the creator and founder of the Lonestar Application Security Conference, the largest annual security conference in Austin, TX. He is a chapter leader for the OWASP Austin chapter, he holds the CISSP, GWAPT, GCFW, GSEC and CCSK security certifications, and he serves on the GIAC Advisory Board. In his spare time he is raising kids and trying to learn how to bake bread.

Product Security the Netflix Way with Jason Chan

Jason Chan gave a great presentation at LASCON last fall in Austin on product security at Netflix, officially titled ‘From Gates to Guardrails: Alternate Approaches to Product Security.’  You may have even seen the Agile Admin’s coverage of LASCON and @ernestmueller‘s interview of Jason Chan.  The LASCON videos are now online and we thought we would share some of our favorites from the conference.


January 18, 2014 · 8:30 am

Trusted Software Alliance launches new podcast and news series

The Trusted Software Alliance News Network launched this week and is featuring 5-minute daily doses of AppSec and DevOps news. The show is run by @eusp along with weekly co-hosts @damonedwards, @cote and yours truly (@wickett).  Check out the inaugural post and follow the blog.


January 17, 2014 · 9:56 am

The Agile Admin at SXSW, we need your help

Right in the Agile Admin’s hometown (Austin, TX) is one of the coolest conferences out there–it is a special place where hipsters and venture capitalists and programmers and designers and gamers unite.  The Agile Admin team is always at SXSW usually in search of new tech and ideas and more often in search of free drinks. This year is gonna be different. This year, James submitted a talk on Rugged Driven Dev and if the talk gets enough votes, the Agile Admin will be represented in the SXSW Interactive lineup.

We need your vote to make it happen. We would love to help the aforementioned hipsters find out about all the cool stuff going on in the Rugged and DevOps communities and bring them into the fold.

Would you vote for the talk?  It only takes a few seconds to create an account and vote.  You can cast your vote here.


Filed under DevOps

Notes and Tweets from DevOps Days Silicon Valley

Over the last few months I have been using TweetScriber (an iPad app) to take notes at conferences. The really nice part about it is that it is a note-taking application that allows you to live-tweet and record other people’s tweets all in one place. At DevOps Days Silicon Valley 2013, I tried to use TweetScriber to record what happened and capture what others were saying on Twitter as well.

Here are my raw notes from DevOps Days Silicon Valley Day 1 and DevOps Days Silicon Valley Day 2. I also ran an open space on doing security testing with gauntlt and recorded those notes as well.

The Agile Admin team is working on putting together a summary of DevOps Days and Velocity Conference, but until that is released the raw notes will have to suffice.


Filed under DevOps

Operations Level Up Storify Notes from @wickett

These are some notes from the Operations Level Up talk at the Velocity 2013 Conference. The Agile Admin crew is out at Velocity Conference this year and live-blogging as we go.


June 18, 2013 · 12:47 pm

Chef your haproxy load balancer and add encryption

As of last September, HAProxy supports SSL, so you no longer have to put stud/stunnel/nginx in front of it; it can also connect over SSL to backend servers, so traffic can stay encrypted all the way to the app server.  Most people decrypt on the load balancer and then pass traffic to their app servers unencrypted, but I am not a big fan of that architecture.  This post shows you how to set up HAProxy with Chef, with SSL all the way to the app servers. Big thanks to @jtimberman for his post on encrypted data bags, which helped me figure this out.

Setup your Chef encrypted data bag to store your ssl cert

The first step is to create a secret key for your data bag. This key will be used to encrypt the data bag and later by Chef nodes to decrypt it so that they can read from it. Do not store encrypted_data_bag_secret in source control as-is; instead, you can put it into a KeePass database and then store that in source control if you want to.

openssl rand -base64 512 > ~/.chef/encrypted_data_bag_secret
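One step this post glosses over (this is standard Chef behavior, but double-check against your Chef version's docs): knife and your nodes both need access to this same secret file. Each node needs a copy at /etc/chef/encrypted_data_bag_secret (push it out with your bootstrap process; just keep it out of source control), and knife can be pointed at your local copy from knife.rb so you don't have to pass --secret-file on every command. Roughly:

```ruby
# ~/.chef/knife.rb -- tell knife where the data bag secret lives
encrypted_data_bag_secret "#{ENV['HOME']}/.chef/encrypted_data_bag_secret"
```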

Next, create the data bag, which we have aptly named secrets:

knife data bag create secrets

Now we can store our wildcard cert in the secrets data bag. This command opens an editor into which you can copy and paste your cert and key; the contents go to the Chef server, not local disk. I set three fields: id, cert and key.

knife data bag create secrets wildcard --secret-file ~/.chef/encrypted_data_bag_secret

The last step uploaded your wildcard cert to the Chef server and encrypted it.  The next step saves off a JSON export of our encrypted wildcard cert, which we can check into source control and version.  Later, if we get in a bind, we can tell Chef to import the data bag from this JSON export.

mkdir data_bags/secrets
knife data bag show secrets wildcard -Fj > data_bags/secrets/wildcard.json

This next step is just a sanity check to make sure the data bag export looks good.  It should look like this:

cat data_bags/secrets/wildcard.json
{
  "id": "wildcard",
  "cert": "encrypted string here",
  "key": "encrypted string here"
}

Create your own wrapper cookbook

Now in your chef cookbook you can access this wildcard cert. The next step requires you to write your own wrapper cookbook which doesn’t do very much other than set default attributes, pull the wildcard cert from the databag, write it to a file and then call the haproxy cookbook to do the install.  (My cookbook for this is in a private github repo because we do some custom steps and set some settings that don’t apply to everyone, but if you create a new cookbook and follow these steps, you should be set.)
Create a cookbook

knife cookbook create my-loadbalancer

Next, change the recipes/default.rb to look like this:

# Pull the cert and key from the encrypted data bag
wildcard_cert = Chef::EncryptedDataBagItem.load("secrets", "wildcard")
my_cert = wildcard_cert['cert'].chomp # you may not need this chomp, but I did
my_key = wildcard_cert['key'].chomp # you may not need this chomp, but I did

# Feed the cert and key into the chef template
template "/etc/ssl/private/haproxy.pem" do
  source "haproxy_pem.erb"
  owner "root"
  group "root"
  mode 0400
  variables(:wildcard_key => my_key,
            :wildcard_crt => my_cert)
end

# Install haproxy -- we are using a forked version of the haproxy cookbook
# to install 1.5-dev17 from source and add SSL support
include_recipe "haproxy::app_lb"

Add this template to your cookbook. The template that renders haproxy.pem is pretty basic; here are the contents of templates/default/haproxy_pem.erb:

<%= @wildcard_crt %>
<%= @wildcard_key %>

The include_recipe "haproxy::app_lb" line actually installs our forked version of the haproxy cookbook, which adds the line below to templates/default/haproxy-app_lb.cfg.erb to set up the SSL binding.

bind :443 ssl crt /etc/ssl/private/haproxy.pem

You can check out our fork for the chef-haproxy cookbook to see how we install from source, what default attributes you can set and how we have our haproxy.cfg template using the ssl certs.
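For reference, here is roughly what the relevant haproxy.cfg pieces look like when you terminate SSL on the frontend and re-encrypt to the backends. The names and addresses are hypothetical and the exact server-line options vary between 1.5-dev builds, so treat this as a sketch rather than a drop-in config:

```
frontend www-https
    bind :443 ssl crt /etc/ssl/private/haproxy.pem
    default_backend app-servers

backend app-servers
    # the "ssl" keyword on a server line makes haproxy re-encrypt
    # traffic to that backend instead of passing it in the clear
    server app1 10.0.0.11:443 ssl
    server app2 10.0.0.12:443 ssl
```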

To recap: we uploaded our cert to an encrypted data bag, added a recipe to pull it out and write it to a file (haproxy.pem), and changed the haproxy cookbook to use that file for its SSL certs. Hope this helps, and if you run into any problems let me know.


Filed under DevOps, Security

Monitoring Sucks but Alerting is Beautiful

I (@wickett) work as the Cloud Ops Team Lead at National Instruments where we have several Software as a Service products that we have built on different cloud providers (AWS, Azure, Google) and have implemented with a host of other supporting SaaS tools (cloudkick, ZenDesk, AlertSite and PagerDuty plus several others).  When building out our products we decided to eat our own dog food and use SaaS solutions as much as possible.  Great, but where am I going with all this?

No matter what tools you are using to monitor or log on your systems, you need a reliable way to get actionable events to your ops team.  If you have implemented several types of monitors, you generally end up setting up an email address for them to send alerts to.  Then you write scripts to forward those to on-call devices, or forwarding rules to turn them into SMS--not exactly state of the art.  Try mixing in a global on-call rotation and configuring all of your monitoring tools to account for it, and it becomes a big problem.

Enter PagerDuty–by far this is the best SaaS product we use in our day-to-day Operations team.  PagerDuty is an alerting tool that is simple, easy-to-use and integrates into your other systems.   Why is PagerDuty so awesome?  Well, I am glad you asked.

  1. User-defined escalation.  Once an alert gets sent into PagerDuty, it is processed through our escalation pathway.  It determines who is the first level of support and begins to alert that person.  Here is the cool part: I can choose to be alerted however I want, and other ops team members can choose however they want.  For me, I get an email after 1 minute, SMS alerts at 3-minute intervals for the next 9 minutes, then phone calls every 5 minutes for the next 20 minutes.  Let’s say you are a hard sleeper; then you might want to skip the SMS and move straight to phone calls.  If I don’t acknowledge the alert in 30 minutes, it gets escalated to the next ops team member.
  2. Alert Acknowledgement.  From any one of the alert mechanisms above, there is an in-kind way to acknowledge and resolve the alert.  I can reply to the SMS message with an ACK code or when I get a call from the PagerDuty version of Siri I can select a response right on my keypad at that time.  No time is lost logging into PagerDuty to acknowledge the alert and the ops team can just get busy responding.
  3. Equality of alerts.  This is a subtle one.  We have a policy on our team that all alerts are equal and need to be handled with the same care and diligence.  Anything that makes it to PagerDuty is treated with the same level of importance and is escalated through the same channel–no “you can just ignore those alerts from that system over there” syndrome on my team.  All alerts are escalated and all must be handled.
  4. API integration.  PagerDuty lets you integrate with tools you already use (e.g. Nagios, Zenoss, Cloudkick, Splunk) and those tools can open and close alerts as they are detected and/or resolved.
  5. Email integration.  Even if you have created some code or monitor you want to alert from that doesn’t integrate with the API, all you need to be able to do is send an email.  Once that email is received, PagerDuty treats it as an alert.
  6. Global 24/7 tools.  PagerDuty works in lots of countries and has scheduling that allows for follow-the-sun ops teams to thrive.  I am on call every day for 8 hours and after my shift ends one of my other global ops team members is on call (@einsamsoldat and @hafizramly) for the next 8 hours–at which point I am bumped to the second tier escalation path.  Most tools miss this and for our team this is a huge benefit.
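To make the API integration in item 4 concrete, here is a sketch of the payload a monitoring script would build for PagerDuty's generic events API (the service key is hypothetical, and you should check PagerDuty's current API docs for the exact endpoint and fields before using this):

```ruby
require 'json'

# Build the JSON payload for PagerDuty's generic events API.
# event_type is "trigger" to open an incident and "resolve" to close it.
def pagerduty_event(service_key, event_type, description, incident_key = nil)
  event = {
    'service_key' => service_key,
    'event_type'  => event_type,
    'description' => description
  }
  # Reusing the same incident_key lets trigger/resolve pairs match up,
  # so a recovered check closes the incident it opened.
  event['incident_key'] = incident_key if incident_key
  JSON.generate(event)
end

payload = pagerduty_event('hypothetical-service-key', 'trigger',
                          'disk usage over 90% on app1', 'disk-app1')
# A monitoring script would POST this JSON to the events endpoint
# (e.g. with Net::HTTP) and PagerDuty would start the escalation policy.
puts payload
```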

I had initially thought I would just write a quick paragraph or two about why using PagerDuty for alerting is great for your devops team.  But I am such a big fan that I couldn’t resist coming up with more reasons why we love PagerDuty on our team.  The biggest reason of all is that as an ops team manager, I can sleep at night knowing that all alerts will get handled and I won’t be woken up by a phone call from a VP or marketing person telling me that our cloud products are down--well, at least not because of a failure to handle alerts.

I would encourage you to stop using the subpar alerting mechanisms in monitoring and logging tools that don’t treat alerting as a first-class citizen.  Those tools weren’t created to be awesome at alerting; they were meant to be detective in nature.  PagerDuty is made for alerting and definitely deserves a spot in your devops toolchain.



Filed under DevOps

Up and running with Vagrant

Last night I gave a 5-minute lightning presentation on Vagrant at the Austin Cloud User Group’s December meeting, which was aptly titled “The 12 Clouds of Christmas.”  These ’12 clouds’ fleshed out into 12 lightning talks on different clouds and implementations thereof.  The format was great and I thought it allowed everyone to get exposure to new tech.

Below are the slides from the demo.  Slides 9 and 10 are where I showed the actual setup (Vagrantfile, rvm) and the VirtualBox console and ran vagrant commands.  Squint real hard and tilt your head to the right and maybe you can envision the actual demo portion of the talk…  Or if your imagination fails you, you can watch some random Vagrant demos on YouTube.
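For anyone who missed the demo, the Vagrantfile I showed was close to the stock example from the Vagrant docs of that era -- roughly this (the box name and forwarded ports are just what I happened to use; substitute your own):

```ruby
# Vagrantfile (Vagrant 1.0-era syntax)
Vagrant::Config.run do |config|
  # The base box to build the VM from ("lucid32" was the canonical demo box)
  config.vm.box = "lucid32"
  # Forward guest port 80 to host port 8080 so you can hit the app locally
  config.vm.forward_port 80, 8080
end
```

From there, `vagrant up` builds the VM and `vagrant ssh` drops you into it.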


Filed under Uncategorized

Rugged Software Manifesto: An Interview With Dan Cornell

I had a chance to talk with Dan Cornell from the Open Web Application Security Project (OWASP) and the Denim Group.  Dan has over 13 years of experience in development and is one of the founders at the Denim Group, a company that does security consulting day in and day out for all types of clients. Dan wrote Sprajax, a security analyzer for AJAX applications, and is the chair of the OWASP Global Membership Committee.

While a lot of us work with security, it is usually in addition to other dev or ops responsibilities and isn’t our sole focus.  However, Dan Cornell and his company are in the weeds of app security daily.  It is a pleasure to have him on the agile admin blog to chat about his thoughts on the Rugged Software Manifesto. We ran across this manifesto at the Lonestar Application Security Conference (LASCON), heard it referred to by both OWASP board members and Department of Homeland Security types, and thought it seemed worth looking up.  So here are Dan’s thoughts on the subject.

@wickett: Hi Dan, welcome to the agile admin blog.  Our first question for you is, what do we mean by Rugged Software?

@danielcornell:  I think of software that is going to run as intended, regardless of the conditions it encounters when it is deployed.  A big part of this is security, but that isn’t the only thing.  You also have to take into account reliability, survivability and a variety of other ‘-ity’ properties.  The Internet, and networks in general, are dangerous places where you need to be conscious of both malicious actors as well as other, unintentional adverse conditions such as service outages, congestion, untrained users making mistakes and so on.  Rugged Software is software that is designed and built to deal with these conditions – foreseen or not.

@wickett: When we have talked about Rugged Software with developers, we get a fair amount of pushback and hear statements like, “Rugged Software isn’t really real; it is just a PR play.”  Some developers and engineers we’ve talked to say that it lacks depth, clarity and process, but I wonder if these engineers have even seen the original Agile Manifesto.  In my mind, I look back at the original Agile Manifesto and I am equally underwhelmed by it as I imagine they are by the Rugged Software Manifesto.  Do you see any parallels between the original Agile Manifesto and the Rugged Software Manifesto?

@danielcornell: The two biggest challenges I see for the Rugged Software movement are to provide a business justification for being Rugged to management and to answer the “so what?” question for developers.  Dealing with the first will go a long way toward solving the second.

If you ask most executives and managers if they’d like their software to be Rugged they’ll likely give an answer somewhat along the lines of: “Sure I’d love to be ‘Rugged.’  And ‘Agile’ and ‘Pragmatic’ and whatever other buzzwords the industry has come up with lately.  All that stuff is great but what I really want is to be on time and on budget and I want more features than our competitors.”  Thinking about security and quality and performance and anything else comes later.

So this is an area where the Rugged movement needs to do a better job of explaining why Rugged makes good business sense and you run into the same challenges that information security folks have had making a business justification for security for years.  Allowing an organization to avoid pain when the auditors or compliance folks come around is of value – but it often isn’t enough.  Reducing downtime and disruption in the future because you have avoided a breach or outage is of value – but it often isn’t enough.  David Rice has done some great thinking in this area in his book Geekonomics but, as an industry, I think we are still looking for the universal truth to be revealed that will make everyone sit up and take notice and I think we’re going to be waiting for a long time.

That said, some organizations have made the decision that this is an area that merits focus and when executives and managers make Rugged (or security or whatever) a priority then it is much easier to get the troops to fall in line.  I’m reminded of a secure coding training class I ran recently for an ISV where the VP in charge of engineering stood up at the beginning of the class and said “This is a priority for us and the material in this class represents the base level of knowledge everyone is expected to have.”  Then he sat in the entire two day class and at the end told his people “I hope you paid attention because you are going to be held accountable for knowing and implementing this material.”  Wow!  This was one of the best classes I taught in a long time; everyone showed up on time, paid attention, asked questions and so on.  I’d love to think it is because I’m such a funny and engaging guy but what really made the difference is that the leadership demonstrated with actions that security is a priority.  A similar model is always going to be the most effective way to drive Rugged into an organization.  Bottom-up, grass roots efforts are valuable because they raise awareness and help to identify like-minded potential champions, but to drive adoption at scale management needs to feel that this is a priority.

Speaking of priorities, I think this is an area where the Agile Manifesto style can provide some guidance.  The Rugged Manifesto is a great call to arms, but it is very self-referential: “I am Rugged… and more importantly my code is Rugged.”  And so on.  If you look at the Agile Manifesto it calls out trade-offs: “Individuals and interactions over processes and tools,” for example.  It accepts that processes and tools have merit, but chooses to value individuals and interactions more.  Acknowledging that decisions must be made and providing guidance on why one is superior to another is a stronger basis for a conversation, and this is going to be key.  Right now those trade-offs are presented as “Prefer code without vulnerabilities over code with vulnerabilities.”  Duh!  Who wouldn’t want that?  But the real question is what they are going to have to give up to get it.

@wickett: Over the next five years, do you see the Rugged Software Manifesto becoming a standard?  Will we have Rugged development practices and be operating on a new paradigm for development or will it get lumped under Agile?

@danielcornell: Well the Rugged Software Manifesto is just that – a Manifesto.  It isn’t a standard.  And I don’t think we are ever going to see a standard for software development because “development” is too big a thing to be done in a standard way.  You can’t expect folks building banking applications for East Coast financials to follow the same process as a couple of folks in a garage trying to build the next Facebook.  What I hope is developed and adopted, however, are so-called “best practices” such as threat modeling, static analysis for quality and security, security code review, penetration testing and so on.

Also, Agile and Rugged are two separate movements.  Agile is about creating customer-centric software in a flexible and timely manner and Rugged is about making software that doesn’t break.   Their goals are not exclusive, but they’re also different.

@wickett: When thinking about Rugged Software, what are practical steps that individual developers can adopt into their current practices whether they use Agile, Waterfall, or other development methodology?

@danielcornell: Well first I think it is important to note that individual developers can have a positive impact on their own code, but this is going to be largely ineffective if the rest of the development team doesn’t share the same goals.  The “weakest link in the chain” analogy comes to mind.  That said…

Threat Modeling is a practice that can be adopted into any development methodology.  There are lots of resources out there that explain how to use it to improve the security of complicated systems, and it can also be used for the greater purpose of Rugged software by asking questions such as “what if this data flow stops flowing?” or “what if this data store becomes unavailable?”  This is best done by the team as a whole, but it is also a practice that an individual developer can use on their own portions of a system.

Penetration testing and resiliency testing are other practices that teams can implement to help test the Ruggedness of their applications and systems.

@wickett: What open source tools do you think should fit into the tool chain for a rugged developer?  What about for a rugged operations/sysadmin?

@danielcornell: Static analysis tools like FindBugs, PMD and FxCop can be really valuable.  For dynamic testing a web proxy like WebScarab can be really useful to testing how software is going to respond to unexpected inputs.

On the operations/sysadmin side, a critical Rugged practice is to proactively monitor systems in order to anticipate problems.  From that standpoint, you should be instrumenting systems and not just tracking failures.  When an HTTP server goes down it is often too late to prevent an outage, and that isn’t very Rugged.  Instead you also need to be tracking disk and processor usage, traffic and response time trends and so on in order to be able to anticipate problems.  Fortunately there are a lot of open source tools available for DIY folks, as well as SaaS offerings, that can make this sort of instrumentation less of a hill to climb than it was in the past.
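As a toy illustration of that "anticipate problems" point (my example, not something from Dan's toolchain): instead of alerting only when a disk is full, you can fit a trend to recent usage samples and alert on the projected fill date.

```ruby
# Project days until a disk fills, from recent daily usage samples (in GB).
# A least-squares line through the samples gives growth in GB/day; alerting
# on the projection ("app1 fills up in ~6 weeks") beats alerting at 100%.
def days_until_full(samples, capacity_gb)
  xs = (0...samples.length).to_a
  mean_x = xs.sum / xs.length.to_f
  mean_y = samples.sum / samples.length.to_f
  slope = xs.zip(samples).sum { |x, y| (x - mean_x) * (y - mean_y) } /
          xs.sum { |x| (x - mean_x)**2 }
  return Float::INFINITY if slope <= 0 # flat or shrinking usage: no deadline
  ((capacity_gb - samples.last) / slope).ceil
end

# Four daily samples growing 2 GB/day toward a 100 GB disk:
puts days_until_full([10, 12, 14, 16], 100) # => 42
```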

@wickett: We appreciate the time with Dan and hearing from an industry leader on Rugged Software.  To keep up with Dan on the interweb, you can follow him on twitter or check out his blog.


Filed under Security

The Rise of the Security Industry

In late 2007 Bruce Schneier, the internationally renowned security technologist and author, wrote an article for IEEE Security & Privacy. The ominously named article, “The Death of the Security Industry,” predicted the future of the security industry, or lack thereof.  In it he predicted that we would come to treat security as merely a utility, like we use water and power today.  The future is one where “large IT departments don’t really want to deal with network security. They want to fly airplanes, produce pharmaceuticals, manage financial accounts, or just focus on their core business.”

Schneier closes with, “[a]s IT fades into the background and becomes just another utility, users will simply expect it to work. The details of how it works won’t matter.”

Looking back three years with the luxury of hindsight, it is easy to see why he thought the security industry would become a utility.  In part, it has come true.  Utility billing is all the rage for infrastructure (hello, cloud computing) and more and more people view the network as a commodity.  Bandwidth has increased in performance and decreased in cost.  People are continually outsourcing pieces of their infrastructure and non-critical IT services to vendors or to offshore employees.

But there are three reasons why I disagree with The Death of the Security Industry, and I believe we are actually going to see a renaissance of the security industry over the next decade.

1. Data is valuable. We can’t think of IT as merely the computers and network resources we use.  We need to put the ‘I’ back in IT and remember why we play this game in the first place.  Information.  Protecting the information (data) will be crucial over the long haul.  Organizations do not care about new firewalls or identity management as a primary goal, however they do care about their data.  Data is king.  Organizations that succeed will be ones that master navigating a new marketplace that values sharing while keeping their competitive edge by safe-guarding and protecting their critical data.

2. Security is a timeless profession. When God gave Adam and Eve the boot from the Garden of Eden, what did he do next?  He used a security guard to keep them out of the Garden for good.  Security has been practiced as long as people have been people.  As long as you have something worth protecting (see ‘data is valuable’ in point 1) you will need resources to protect it.  Our valuable data is being transferred, accessed and modified on computing devices and will need to be protected.  If people can’t trust that their data is safe then they will not be our customers.  The CIA security triad (Confidentiality, Integrity, and Availability) needs to remain intact for consumers to trust organizations with their data, and if that data has any value to the organization, it will need to be protected.

3. Stuxnet. This could be called the dawn of a new age of hacking.  Gone are the days of teenagers running port scans from their garages. Be ready to start seeing hackers use sophisticated techniques that simultaneously attack multiple vectors to gain access to their targets.  I am not going to spread FUD (Fear, Uncertainty and Doubt), but I believe that Stuxnet is just the beginning.

In addition to how Stuxnet was executed, it is just as interesting to see what was attacked.  This next decade will prove to be a change in the type of targets attacked.  In the 80’s it was all about hacking phones and more physical targets, the 90’s were the days of port scanning and Microsoft Windows hacking, and the last decade has primarily focused on web and application data.  With Stuxnet, we are seeing a revitalization of hacking, where it is returning to its roots of attacking targets that are physical in nature, such as SCADA systems that control a building’s temperature systems.  The magazine 2600 has been publishing a series on SCADA hacking over the last 18 months.  What makes it even more interesting is that almost every device you buy these days has a web interface on it, so never fear, the last 10 years spent hacking websites will come in real handy when looking at hacking control systems.

In closing, I think we are a long way off from seeing the death of the security industry.  The more valuable our data becomes, the more we will need to secure it.  Data is on the rise and with it comes the need for security.  Additionally, as more and more of our world is controlled by computers, the targets become more and more interesting.  Be ready for the rise of the security industry.

Let me know what you think on twitter: @wickett


Filed under Security