Author Archives: wickett

About wickett

James is a leader in the DevOps and InfoSec communities; most of his research and work is at the intersection of these two communities. He is a supporter of the Rugged Software movement and he coined the term Rugged DevOps. Seeing a gap in software testing, James founded an open source project, Gauntlt, to serve as a Rugged Testing Framework. He is the author of the Hands-on Gauntlt book. He got his start in technology when he founded a web startup as a student at the University of Oklahoma, and since then has worked in environments ranging from large, web-scale enterprises to small, rapid-growth startups. He is a dynamic speaker on topics in DevOps, InfoSec, cloud security, security testing and Rugged DevOps. James is the creator and founder of the Lonestar Application Security Conference, the largest annual security conference in Austin, TX. He is a chapter leader for the OWASP Austin chapter, holds the CISSP, GWAPT, GCFW, GSEC and CCSK security certifications, and serves on the GIAC Advisory Board. In his spare time he is raising kids and trying to learn how to bake bread.

AppSec in the Modern Era

I recently wrote an article for Signal Sciences discussing the top 5 application security defense needs in the modern era. It’s very DevOps in nature. You can see the full article in all of its original glory here > Top 5 AppSec Defense needs in the Modern Era

 

In the article, I covered what I thought were the most critical things needed for a plausible application security program in the modern era:

  1. Cover the OWASP Top Ten; it's a must-have and is expected
  2. Defend against bots and scrapers
  3. Monitor business logic
  4. Achieve operational insight through visualizations and dashboards
  5. Distribute security information where people naturally are, a la ChatOps (see the sketch below).
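To make number 5 concrete, here is a minimal sketch of pushing a security event into chat. It assumes a Slack incoming webhook; the webhook URL and the alert text are placeholders I made up for illustration, not anything from the original article.

# Hypothetical ChatOps example: post a security event to the team's channel
# via a Slack incoming webhook so the information lands where people already are.
WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder

curl -s -X POST "$WEBHOOK_URL" \
  -H 'Content-Type: application/json' \
  -d '{"text": "WAF alert: 42 blocked SQLi attempts against /login in the last 5 minutes"}'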

Thoughts, comments?  Hit me up on twitter (@wickett) or in the comments on the original article > Top 5 AppSec Defense needs in the Modern Era

This is a sample of putting visualizations behind your application security defense. These images were provided by Signal Sciences.

[Screenshots: Signal Sciences visualization dashboards]


Filed under DevOps

Classy up your curl with curl-trace

 

Let’s say you are debugging some simple web requests and trying to discern where things are slowing down. Curl is perfect for that. Well, sort of perfect. I don’t know about you, but I forget all the switches for curl to make it work like I want, especially in a situation where you need to do something quickly.

Let me introduce you to curl-trace.

It’s not a new thing to install; it’s just an opinionated way to run curl. To give you a feel for what it does, let’s start with the output from curl-trace.

[Screenshot: curl-trace output showing the Request Details and Timing Analysis sections]

As you can see, this breaks up the request details like response code, redirects and IP in the Request Details section and then breaks down the timing of the request in the Timing Analysis section.  This uses curl’s --write-out option and was inspired by this post, this post, and my co-worker Marcus Barczak.
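Since the original screenshot is not reproduced here, this is roughly what a run looks like; the shape of the output comes straight from the .curl-format file below, but the values are illustrative, not from a real run.

$ curl-trace https://example.com

 Request Details:
 url: https://example.com/
 num_redirects: 0
 content_type: text/html; charset=UTF-8
 response_code: 200
 remote_ip: 93.184.216.34

 Timing Analysis:
 time_namelookup: 0.028
 time_connect: 0.052
 time_appconnect: 0.131
 time_pretransfer: 0.131
 time_redirect: 0.000
 time_starttransfer: 0.210
 ----------
 time_total: 0.391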

The goal of curl-trace is to quickly expose details for troubleshooting web performance.

How to setup curl-trace

Step 1

Download .curl-format from github (or copy from below)

\n
 Request Details:\n
 url: %{url_effective}\n
 num_redirects: %{num_redirects}\n
 content_type: %{content_type}\n
 response_code: %{response_code}\n
 remote_ip: %{remote_ip}\n
 \n
 Timing Analysis:\n
 time_namelookup: %{time_namelookup}\n
 time_connect: %{time_connect}\n
 time_appconnect: %{time_appconnect}\n
 time_pretransfer: %{time_pretransfer}\n
 time_redirect: %{time_redirect}\n
 time_starttransfer: %{time_starttransfer}\n
 ----------\n
 time_total: %{time_total}\n
 \n

And put that in your home directory as .curl-format or wherever you find convenient.

Step 2

Add an alias to your .bash_profile (and source .bash_profile) for curl-trace like this:


alias curl-trace='curl -w "@/path/to/.curl-format" -o /dev/null -s'

Be sure to change /path/to/.curl-format to the location where you saved .curl-format. Once you do that and source your .bash_profile, you are ready to go.

Usage

Now you can run this:

$ curl-trace https://google.com

Or follow redirects with -L

$ curl-trace -L https://google.com

That’s it…

Now you are ready to use curl-trace. If you have anything to add to it, just open an issue or a PR on GitHub, or ping me on twitter: https://twitter.com/wickett.

Enjoy!

UPDATE: 3/17/2016

There was a lot of good feedback on curl-trace so it has now been moved to its own repo: https://github.com/wickett/curl-trace

 


Filed under DevOps

RSAC gets down with the Rugged DevOps!

This year at RSAC—you know RSA, it’s the really big InfoSec conference that isn’t BlackHat/DefCon—there was a mini-conference on Rugged DevOps. For the last few years they have been featuring this mini-conference as a way to bring DevOps into the InfoSec community.

I did a writeup of the event over on Medium and I hope you find it interesting. One of my favorite parts of the event is summed up from that article:

To give you a feel for how well it went, I think it is easily summed up with what happened at the closing. To a mostly full room of about 500 people the question was asked, “How many of you have been here all day?” Over 80% of the hands went up. For being a conference within a conference that number is surprising, for doing that with the InfoSec crowd, it is proof that the industry culture is truly shifting.

Security is ready to join the DevOps tribe.  It’s our duty as stewards of DevOps to do this right.

In the article on Medium I link to all the talks and speakers at the event. Check out the presos on SlideShare, follow the speakers on Twitter and most importantly be part of the joining of the tribes.


Filed under DevOps

Links on Bridging Security and DevOps

If you remember, I (@wickett) said I would be doing more blogging for Signal Sciences in the new year. We are still in January, but I am glad to say that so far, so good. Here are a couple of highlights from recent posts:

That’s all for now.  Happy Friday everyone!


Filed under Conferences, DevOps, Security

In the New Year, resolve to bring Security to the DevOps party

Happy New Year!  May this be your year of much successful DevOps.

Last year I wasn’t too vocal about my work over at Signal Sciences. Mostly because I was too busy helping to rapidly build a NextGen Web Application Firewall as a SaaS from the ground up. This year you will be hearing a bit more as I am regularly contributing to the Signal Sciences blog (Signal Sciences Labs) over at Medium (sorry WordPress!).

I will try to occasionally link some of my posts from over there back to The Agile Admin, around topics like:

  • The challenges we faced building a modern security product
  • Bridging the gap with Security and DevOps
  • Attack Driven Operations
  • and other Rugged DevOps topics…

Which brings me to the point of this post…

Bring Security to the DevOps party!

I am making it a personal goal this year to bring security engineers, auditors, penetration testers and even those forensics folks to the DevOps party. I have my sights mostly set on DevOpsDays Austin as the event to physically bring people to (watch out, Austin security people!), but I am already crafting blog posts and many cunning tweets to bring them over as well. Can you join me this year in trying to bridge this gap?

Last month I had the opportunity to do a Sec Casts panel with these fine folks (all of whom you should follow) on topics around DevOps and security:

 

If you don’t want to hear us go on for about an hour, you can read the write-up here. I mention this panel specifically because I think the topics it brought up bear directly on the goal of bridging security and DevOps. Maybe it will give you some ideas on how to bridge the gap in your own organization.

Happy New Year, and let’s make this the year that security is finally brought into the DevOps fold.


Filed under DevOps

Using Docker To Deliver Open Source Security Tools

Security tools are confusing to use, but they are even worse to install. You often get stuck installing development packages and loads of dependencies just to get one working. Most of the tools are written by a single person trying to get something out the door fast, and most security tools want elevated privileges so they can craft packets with ease or install `-dev` packages themselves.

The traditional answer to this was either to install all the things and just accept the sprawl and privilege escalation, or install them in a VM to segregate them. VMs are great for some use cases but they feel non-native and not developer friendly as part of an open source toolchain.

I am familiar with this problem intimately as I am on the core team for Gauntlt. Gauntlt is an open source security tool that–are you ready?–runs other security tools. In addition to Gauntlt needing its own dependencies like Ruby and a handful of Ruby gems, it also needs other security attack tooling installed to provide value. For people just getting started with Gauntlt, we have happily bundled all this together in a nice VirtualBox VM (see gauntlt-starter-kit) that gets provisioned via Chef. This has been a great option for us as it gives first-timers the ability to download a completely functioning lab. When teaching Gauntlt workshops and training classes, we use the gauntlt starter kit.

The problem with the VM lifestyle is that while it’s great for a canned demo, it doesn’t expand to the real world so nicely.

Let’s Invoke Docker

While working with Gauntlt, we have learned that Gauntlt is fun for development but it works best when it sits in your Continuous Integration stack or in your delivery pipeline. For those familiar with docker, you know this is one thing that docker particularly excels at. So let’s try using docker as the delivery mechanism for our configuration management challenge.

In this article, we will walk through the relatively simple process of turning out a docker container with Gauntlt and how to run Gauntlt in a docker world. Before we get into dockerizing Gauntlt, let’s dig a bit deeper into how Gauntlt works.

Intro to Gauntlt

Gauntlt was born out of a desire to “be mean to your code” and add ruggedization to your development process. Ruggedization may be an odd way to phrase this, so let me explain. Years ago I found the Rugged Software movement and was excited. The goal has been to stop thinking about security in terms of compliance and a post-development process, but instead to foster creation of rugged code throughout the entire development process. To that end, I wanted to have a way to harness actual attack tooling into the development process and build pipeline.

Additionally, Gauntlt hopes to provide a simple language that developers, security and operations can all use to collaborate together. We realize that in the effort for everyone to do “all the things,” no single person is able to cross these groups meaningfully without a shared framework. Chef and Puppet crossed the chasm for dev and ops by adding a DSL, and Gauntlt is an attempt to do the same thing for security.

[Diagram: gauntlt-flow]

How Gauntlt Works

Gauntlt runs attacks against your code. It harnesses attack tools and runs them against your application to look for things like XSS or SQL Injection or even insecure configurations.

Gauntlt provides simple primitives to wrap attack tooling and parse the output. All of that logic is contained in what Gauntlt calls attack files. Gauntlt runs these files and exits with a pass/fail and returns a meaningful exit code. This makes Gauntlt a prime candidate for chaining into your CI/CD pipeline.

Anatomy of an Attack File

Attack files owe their heritage to the Cucumber testing framework and its Gherkin language syntax. In fact, Gauntlt is built on top of Cucumber, so if you are familiar with it then Gauntlt will be really easy to grok. To get a feel for what an attack file looks like, let’s do a simple attack and check for XSS in an application.

Feature: Look for cross site scripting (xss) using arachni against example.com
Scenario: Using arachni, look for cross site scripting and verify no issues are found
 Given "arachni" is installed
 And the following profile:
 | name | value |
 | url | http://example.com |
 When I launch an "arachni" attack with:
 """
 arachni --checks=xss --scope-directory-depth-limit=1
 """
 Then the output should contain "0 issues were detected."

Feature is the top-level description of what we are testing, and Scenario is the actual execution block that gets run. Below that is Given-When-Then, which is the plain-English approach of Gherkin. If you are interested, you can see lots of examples of how to use Gauntlt in gauntlt/gauntlt-demo inside the examples directory.

For even more examples, we (@mattjay and @wickett) did a two hour workshop at SXSW this year on using Gauntlt and here are the slides from that.

Downsides to Gauntlt

Gauntlt is a Ruby application, and the downside of using it is that sometimes you don’t have Ruby installed or you get gem conflicts. If you have used Ruby (or Python) then you know what I mean… It can be a major hassle. Additionally, installing all the attack tools and maintaining them takes time. This makes dockerizing Gauntlt a no-brainer, so you can decrease your effort to get up and running and start recognizing real benefits sooner.

Dockerizing an Application Is Surprisingly Easy

In the past, I used docker like I did virtual machines. In retrospect this was a bit naive, I know, but at the time it was really convenient to think of docker containers like mini VMs. I have found the real benefit (especially for the Gauntlt use case) is using containers to take an operating system and treat it like you would treat an application.

My goal is to be able to build the container and then mount my local directory from my host to run my attack files (*.attack) and return exit status to me.

To get started, here is a working Dockerfile that installs gauntlt and the arachni attack tool (you can also get this and all other code examples at gauntlt/gauntlt-docker):

FROM ubuntu:14.04
MAINTAINER james@gauntlt.org

# Install Ruby
RUN echo "deb http://ppa.launchpad.net/brightbox/ruby-ng/ubuntu trusty main" > /etc/apt/sources.list.d/ruby.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C3173AA6
RUN \
  apt-get update && \
  apt-get install -y build-essential \
    ca-certificates \
    curl \
    wget \
    zlib1g-dev \
    libxml2-dev \
    libxslt1-dev \
    ruby2.0 \
    ruby2.0-dev && \
  rm -rf /var/lib/apt/lists/*

# Install Gauntlt
RUN gem install gauntlt --no-rdoc --no-ri

# Install Attack tools
RUN gem install arachni --no-rdoc --no-ri

ENTRYPOINT [ "/usr/local/bin/gauntlt" ]

Build that and tag it with docker build -t gauntlt . or use the build-gauntlt.sh script in gauntlt/gauntlt-docker. This is a pretty standard Dockerfile with the exception of the last line and the usage of ENTRYPOINT(1). One of the nice things about using ENTRYPOINT is that it passes any parameters or arguments into the container, so anything after the image name gauntlt gets handled inside the container by /usr/local/bin/gauntlt.
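As a quick sketch of that pass-through behavior (using the image tag from the build command above), anything you append after the image name behaves like arguments to gauntlt itself:

# These arguments are handled by /usr/local/bin/gauntlt inside the container
docker run --rm gauntlt --version
docker run --rm gauntlt --help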

I decided to create a simple stub script that I could put in /usr/local/bin so that I could invoke the container wherever. Yes, there is no error handling, and yes, since it is doing a volume mount, this is arguably a bad idea. But, hey!

Here is a simple bash stub that can call the container and pass Gauntlt arguments to it.

#!/usr/bin/env bash

# usage:
# gauntlt-docker --help
# gauntlt-docker ./path/to/attacks --tags @your_tag

# Mount the current directory into the container and pass all arguments to gauntlt
docker run -t --rm=true -v "$(pwd)":/working -w /working gauntlt "$@"

Putting It All Together

Let’s run our new container using our stub and pass in an attack we want it to run. You can run it without the argument for the .attack file, as Gauntlt searches all subdirectories for anything with that extension, but let’s go ahead and be explicit. Running $ gauntlt-docker ./examples/xss.attack generates this passing output:

@slow
Feature: Look for cross site scripting (xss) using arachni against scanme.nmap.org

  Scenario: Using arachni, look for cross site scripting and verify no issues are found # ./examples/xss.attack:4
    Given "arachni" is installed                                                        # gauntlt-1.0.10/lib/gauntlt/attack_adapters/arachni.rb:1
    And the following profile:                                                          # gauntlt-1.0.10/lib/gauntlt/attack_adapters/gauntlt.rb:9
      | name | value                  |
      | url  | http://scanme.nmap.org |
    When I launch an "arachni" attack with:                                             # gauntlt-1.0.10/lib/gauntlt/attack_adapters/arachni.rb:5
      """
      arachni --checks=xss --scope-directory-depth-limit=1 
      """
    Then the output should contain "0 issues were detected."                            # aruba-0.5.4/lib/aruba/cucumber.rb:131

1 scenario (1 passed)
4 steps (4 passed)
0m1.538s

Now you can take this container and put it into your software build pipeline. I run this in Jenkins and it works great. Once you get this running and have confidence in the testing, you can start adding additional Gauntlt attacks. Ping me at james@gauntlt.org if you need some help getting this running or if you have suggestions to make it better.
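Since Gauntlt returns a meaningful exit code, wiring the container into CI is usually just a shell build step. Here is a minimal sketch; the ./attacks path is a placeholder for wherever your .attack files live, and it works as a Jenkins "Execute shell" step or in most other CI runners.

#!/usr/bin/env bash
set -e  # any non-zero exit (including a failed gauntlt attack) fails the build

# Run the attack files under ./attacks through the gauntlt container
gauntlt-docker ./attacks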

Summary

If you have installed a Ruby app, you know the process of managing dependencies–it works fine until it doesn’t. Using a docker container to keep all the dependencies in one spot makes a ton of sense. Not having to alter your flow or resort to VMs makes even more sense. I hope that through this example of running Gauntlt in a container you can see how easy it is to abstract an app and run it almost like you would on the command line – keeping its dependencies separate from the code you’re building and from other software on the box, but accessible enough to use as if it were just an installed piece of software itself.

Refs

1. Thanks to my buddy Marcus Barczak for the docker tip on entrypoint.

This article is part of our Docker and the Future of Configuration Management blog roundup. If you have an opinion or experience on the topic, you can contribute as well.


Filed under DevOps

ShirtOps: How to Make T-shirts for Tech Conferences that People Actually Wear

Over the last 6 years I have helped organize over 10 different conferences (all the LASCON conferences, all the DevOpsDays Austin conferences, AppSec USA 2012, and even a couple for my church) and for most of the events I have been in charge of swag. T-shirts, bags, shot glasses, lanyards, usb keys… You name it, I have swagged it.

From all these conferences I have learned a few things, and specifically I have learned a bit about making t-shirts. T-shirts are a funny thing. Everyone has opinions; however, as an organizer you have to learn that most of those opinions are wrong. I have had lots of bad ideas recommended to me by well-meaning organizers and friends: Print the logo big! Put all the sponsors’ logos on the back (also known as “the NASCAR special”). Have a big design on the back, which I like to call “the restaurant shirt.” Then there is the design someone on the team knocked out with MS Paint.

Everyone has good intentions, but as the one in charge of making the shirt you have to lead them through the process. Show the team what good actually means. In this presentation I highlight the last several years of DevOpsDays Austin t-shirts and walk you through the process of how to make t-shirts people want to wear after the event is over.

Links from the presentation:

If you have any other tips, add to the comments and/or tweet with #shirtops.


Filed under Conferences, DevOps

Use Gauntlt to test for Heartbleed

Heartbleed is making headlines and everyone is making a mad dash to patch and rebuild. Good, you should. This is definitely a nightmare scenario, but instead of using more superlatives to scare you, I thought it would be good to provide a pragmatic approach to test and detect the issue.

@FiloSottile wrote a tool in Go to check for the Heartbleed vulnerability. It was provided as a website in addition to a tool, but when I tried to use the site, it seemed over capacity. Probably because we are all rushing to find out if our systems are vulnerable. To get around this, you can build the tool locally from source using the install instructions on the repo. You need Go installed and the GOPATH environment variable set.

go get github.com/FiloSottile/Heartbleed
go install github.com/FiloSottile/Heartbleed
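If the Heartbleed command is not found after this, it usually means $GOPATH/bin is not on your PATH; here is a quick sketch of the environment setup (the workspace path is just a common convention, adjust to taste):

export GOPATH="$HOME/go"           # your Go workspace
export PATH="$PATH:$GOPATH/bin"    # where `go install` puts the Heartbleed binary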

Once it is installed, you can easily check to see if your site is vulnerable.
Heartbleed example.com:443

Cool! But let’s do one better and implement this as a Gauntlt attack so that we can make sure we don’t have regressions, and so that we can automate this a bit further. Gauntlt is a rugged testing framework that I helped create. The main goal of Gauntlt is to facilitate security testing early in the development lifecycle. It does so by wrapping security tools with sane defaults and using Gherkin (Given, When, Then) syntax so it is easily understood by dev, security and ops groups.

In the latest version of Gauntlt (gauntlt 1.0.9) there is support for Heartbleed–it should be noted that Gauntlt doesn’t install tools, so you will still have to follow the steps above if you want the Gauntlt attacks to work. Let’s check for Heartbleed using Gauntlt.

gem install gauntlt
gauntlt --version

You should see 1.0.9. Now let’s write a Gauntlt attack. Create a text file called heartbleed.attack and add the following contents:

@slow
Feature: Test for the Heartbleed vulnerability

Scenario: Test my website for the Heartbleed vulnerability (see heartbleed.com for more info)

Given "Heartbleed" is installed
And the following profile:
| name | value |
| domain | example.com |
When I launch a "Heartbleed" attack with:
"""
Heartbleed <domain>:443
"""
Then the output should contain "SAFE"

You now have a working gauntlt attack that can be hooked into your CI/CD pipeline that will test for Heartbleed. To see this example attack file on github, go to https://github.com/gauntlt/gauntlt/blob/master/examples/heartbleed/heartbleed.attack.

To run the attack

$ gauntlt ./heartbleed.attack

You should see output like this
$ gauntlt ./examples/heartbleed/heartbleed.attack
Using the default profile...
@slow
Feature: Test for the Heartbleed vulnerability

Scenario: Test my website for the Heartbleed vulnerability (see heartbleed.com for more info) # ./examples/heartbleed/heartbleed.attack:4
Given "Heartbleed" is installed # lib/gauntlt/attack_adapters/heartbleed.rb:4
And the following profile: # lib/gauntlt/attack_adapters/gauntlt.rb:9
| name | value |
| domain | example.com |
When I launch a "Heartbleed" attack with: # lib/gauntlt/attack_adapters/heartbleed.rb:1
"""
Heartbleed <domain>:443
"""
Then the output should contain "SAFE" # aruba-0.5.4/lib/aruba/cucumber.rb:131

1 scenario (1 passed)
4 steps (4 passed)
0m3.223s
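Because the attack exits non-zero when "SAFE" is not found in the output, it is easy to gate a build on it. A minimal sketch of a CI shell step follows, assuming the Heartbleed binary and gauntlt are already installed on the build machine:

# Fail the pipeline if the site tests vulnerable to Heartbleed
gauntlt ./heartbleed.attack || exit 1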

Good luck! Let me (@wickett) know if you have any problems.


Filed under DevOps, Security

Stupid webappsec Tricks Talk at LASCON with Zane Lackey

Zane Lackey spoke at LASCON 2013 about how they do data-driven security at Etsy. At the conference, Ernest took some notes and blogged them in this post. Now that the LASCON vids are out, we thought this would be a good time to revisit this stellar talk. Enjoy!


Filed under DevOps

Clean up your cookbook mess with meez

Is your kitchen a mess? Meez will help you get things straightened out.

There is a new gem in town, and it’s here to clean up the mess you made out of your cookbooks. It’s called meez.

If you are like me, maybe you started writing some Chef cookbooks, and then later decided to add some testing, and you followed some blog posts to set up some different tools. Somewhere along the way you figured out that the cool kids don’t use Librarian (although I still am fond of it), so you decided to use Berkshelf (I am learning to like it). You also figured out that you need a linting tool and some sort of way to do TDD for your infrastructure. Man, this cookbook is starting to get pretty crowded with a bunch of files that have nothing to do with actually installing the code you want to install. You also start looking around and wondering why you have to learn all these esoteric frameworks/tools to write a simple Chef cookbook (technically you don’t have to, but the technohipsters frown on you if you don’t).

What are you to do?

Enter meez. Meez sets up an opinionated cookbook replete with all the testing tools and frameworks a modern chef requires: ChefSpec, Foodcritic, RuboCop, Berkshelf, Test Kitchen, … Once you tell meez to create a cookbook for you, it sets up all the different frameworks and gets you ready to start actually writing your recipes and working on your cookbook. No more remembering how to set up all the testing tools and frameworks. Sweet!


gem install meez
meez --cookbook-path /tmp -C "James Wickett" -m james@wickett.me mycookbook

What this will do is set up ‘mycookbook’ with all the testing tools you need. Because I gave it my name and email, it auto-fills those in the relevant spots as well. Once meez finishes running, it tells you what to do next:


You must run `bundle install' to fetch any new gems.
Cookbook mycookbook created successfully
Next steps...
$ cd /tmp/mycookbook
$ bundle install
$ bundle exec berks install
$ bundle exec strainer test

Follow those steps and you are now ready to start working on cookbooks and stop worrying about all the testing frameworks and tools surrounding TDD and chef.

Meez was created as a gem after @pczarkowski‘s excellent sysadvent blog post “The Lazy SysAdmin’s Guide to Test Driven Chef Cookbooks.” Reading that will give you more context behind what meez is doing.

Moar Links


Filed under DevOps