Check out agile admin James Wickett’s talk from DeliveryConf last month on adding security into your continuous software delivery pipeline!
I wanted to mention a couple of Austin-area events folks should be aware of – and one international one! November is full of DevOps goodness, so come to some or all of these…
The international one is called All Day DevOps, Tuesday November 15, 2016: one long day (AMER and EMEA hours), three tracks, free and online. It has all the heavy-hitter presenters you’d expect from going to Velocity or a DevOpsDays or whatnot, but streaming free to all. Sign up and figure out what you want to watch in what slot now! James, Karthik, and I are curating and hosting the Infrastructure track so, you know, err on that side 🙂 There are nearly 5000 people signed up already, so it should be lively!
Then there’s CD Summit Austin 2016. There’s a regional IT conference called Innotech, and devops.com came up with the great idea of running a DevOps event alongside it. It’s Wednesday November 16 (workshops) and Thursday November 17 (conference) in the Austin Convention Center. All four of the Agile Admins will be doing a panel on “The Evolution of Agility” at 11:20 on Thursday so come on out! It’s cheap, even both days together are like $179.
But before all that – the best little application security convention in Texas (or frankly anywhere for my money) – LASCON is next week! Tues and Wed Nov 1-2 are workshop days and then Thu-Fri Nov 3-4 are the conference days. I’m doing my Lean Security talk I did at RSA last fall on Friday, and James is speaking on Serverless on Thursday. $299 for the two conference days.
Loads of great stuff for all this month!
The CloudAustin user group that Karthik, James, and I run is in its fifth year and still going strong. Our venue hosts at Rackspace now have the equipment to record the talks! So I thought I’d share the videos and slides with our readers. Thanks to Derrick Wippler and Mike Schwartz, our speakers, and Rackspace and CenturyLink, our sponsors.
What Are Containers And Why Are They So Important, by Derrick Wippler
Struggling to understand all the hype around Docker? Don’t understand the difference between a VM and a container? Why are immutable operating systems cool? Why is everyone going crazy over Kubernetes/Swarm/Apache Mesos?
This talk will attempt to inform by pulling back the curtain on the container hype. We will dissect what a container is, why clustering containers and orchestration matters, immutable operating systems, and finally where this is all going and how it will affect your future interaction with the cloud.
Derrick Wippler is: Tech Geek, Container Evangelist, Software Developer, Entrepreneur, and Rackspace Cloud Block Storage Imagineer. He is the creator of a SuperNES emulator (http://www.superretro16.com), and you can read his musings on technology on his blog (http://thrawn01.org).
Who Are You? From Meat To Electrons And Back Again, by Mike Schwartz
Conventional wisdom tells us to use two-factor authentication—and it does help to improve security. But the best way to reduce user friction is to never require a person to authenticate. This talk will provide a modern solution to reconcile these two divergent imperatives by leveraging standard profiles of OAuth2 for trust elevation. It’s not just the front door that needs protection!
Mike Schwartz is the Founder of Gluu, a security software company serving companies, governments and universities around the world. Schwartz is a domain expert in application security, authentication and API access management. The Gluu Server is one of the leading implementations of OpenID Connect. Schwartz has participated in the development of standards like the User Managed Access (UMA) profile of OAuth2, a new standard for API access management. He is also Co-chair of the Open Trust Taxonomy for OAuth2 (OTTO) working group at Kantara to create new standards for multiparty federation. Before starting Gluu, Schwartz was a security integrator for many large enterprises. He also was the Founder of an ISP in the ’90s. He now resides with his family (and pigeons) in Austin, TX.
Does this make you want to speak at CloudAustin, or sponsor it? Well please do! Come email us at austin-cug-admin at googlegroups dot com and sign up. And of course come attend, we meet the third Tuesday night of every month at Rackspace’s Austin facility on I-35 at 183.
James and I have been talking lately about the conjunction of Lean and Security. The InfoSec world is changing rapidly, and just as DevOps has incorporated Lean techniques into the systems world, we feel that security has a lot to gain from doing the same.
We did a 20 minute talk on the subject at RSA, you can check out the slides and/or watch the video:
While we were there we were interviewed by Derek Weeks. Read his blog post with a transcript of the interview, and/or watch the interview video!
We’ll be writing more about it here, but we wanted to get a content dump out to those who want it!
Security tools are confusing to use but they are even worse to install. You often get stuck installing development packages and loads of dependencies just to get one working. Most of the tools are written by a single person trying to get something out the door fast. And most security tools want advanced privileges so they can craft packets with ease or install `-dev` packages themselves.
The traditional answer to this was either to install all the things and just accept the sprawl and privilege escalation, or install them in a VM to segregate them. VMs are great for some use cases but they feel non-native and not developer friendly as part of an open source toolchain.
I am intimately familiar with this problem, as I am on the core team for Gauntlt. Gauntlt is an open source security tool that–are you ready?–runs other security tools. In addition to needing its own dependencies, like Ruby and a handful of gems, Gauntlt also needs other security attack tooling installed to provide value. For people just getting started with Gauntlt, we have happily bundled all this together in a nice VirtualBox VM (see gauntlt-starter-kit) that gets provisioned via Chef. This has been a great option for us, as it gives first-timers a completely functioning lab to download. When teaching Gauntlt workshops and training classes, we use the Gauntlt starter kit.
The problem with the VM lifestyle is that while it’s great for a canned demo, it doesn’t expand to the real world so nicely.
While working with Gauntlt, we have learned that Gauntlt is fun for development but it works best when it sits in your Continuous Integration stack or in your delivery pipeline. For those familiar with docker, you know this is one thing that docker particularly excels at. So let’s try using docker as the delivery mechanism for our configuration management challenge.
In this article, we will walk through the relatively simple process of turning out a Docker container with Gauntlt, and how to run Gauntlt in a Docker world. Before we get into Dockerizing Gauntlt, let’s dig a bit deeper into how Gauntlt works.
Gauntlt was born out of a desire to “be mean to your code” and add ruggedization to your development process. Ruggedization may be an odd way to phrase this, so let me explain. Years ago I found the Rugged Software movement and was excited. The goal has been to stop thinking about security in terms of compliance and a post-development process, but instead to foster creation of rugged code throughout the entire development process. To that end, I wanted to have a way to harness actual attack tooling into the development process and build pipeline.
Additionally, Gauntlt hopes to provide a simple language that developers, security, and operations can all use to collaborate together. We realize that in the effort for everyone to do “all the things,” no single person is able to cross these groups meaningfully without a shared framework. Chef and Puppet crossed the chasm for dev and ops by adding a DSL, and Gauntlt is an attempt to do the same thing for security.
Gauntlt runs attacks against your code. It harnesses attack tools and runs them against your application to look for things like XSS or SQL Injection or even insecure configurations.
Gauntlt provides simple primitives to wrap attack tooling and parse the output. All of that logic is contained in what Gauntlt calls attack files. Gauntlt runs these files and exits with a pass/fail and returns a meaningful exit code. This makes Gauntlt a prime candidate for chaining into your CI/CD pipeline.
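That pass/fail exit code is the whole integration contract. Here’s a minimal sketch of gating a build on it; note that the `gauntlt` function below is a stub that simulates a failing run so the snippet executes anywhere, standing in for the real binary, which exits non-zero when any attack fails:

```bash
#!/usr/bin/env bash
# Stub simulating a Gauntlt run in which an attack failed (non-zero exit).
gauntlt() { return 1; }

# A CI step only has to branch on the exit code.
if gauntlt ./attacks/*.attack; then
  echo "attacks passed: promote the build"
else
  echo "attacks failed: break the build"
fi
```

Any CI system that fails a job on non-zero exit status gets this behavior for free; the `if` is only there to make the branching explicit.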
Attack files owe their heritage to the cucumber testing framework and its gherkin language syntax. In fact Gauntlt is built on top of cucumber so if you are familiar with it then Gauntlt will be really easy to grok. To get a feel for what an attack file looks like, let’s do a simple attack and check for XSS in an application.
```gherkin
Feature: Look for cross site scripting (xss) using arachni against example.com

  Scenario: Using arachni, look for cross site scripting and verify no issues are found
    Given "arachni" is installed
    And the following profile:
      | name | value              |
      | url  | http://example.com |
    When I launch an "arachni" attack with:
      """
      arachni --checks=xss --scope-directory-depth-limit=1
      """
    Then the output should contain "0 issues were detected."
```
Feature is the top-level description of what we are testing; Scenario is the actual execution block that gets run. Below that are the Given/When/Then steps, which are Gherkin’s plain-English approach. If you are interested, you can see lots of examples of how to use Gauntlt in gauntlt/gauntlt-demo, inside the examples directory.
Gauntlt is a ruby application and the downside of using it is that sometimes you don’t have ruby installed or you get gem conflicts. If you have used ruby (or python) then you know what I mean… It can be a major hassle. Additionally, installing all the attack tools and maintaining them takes time. This makes dockerizing Gauntlt a no-brainer so you can decrease your effort to get up and running and start recognizing real benefits sooner.
In the past, I used docker like I did virtual machines. In retrospect this was a bit naive, I know. But, at the time it was really convenient to think of docker containers like mini VMs. I have found the real benefit (especially for the Gauntlt use-case) is using containers to take an operating system and treat it like you would treat an application.
My goal is to be able to build the container and then mount my local directory from my host to run my attack files (`*.attack`) and return exit status to me.
To get started, here is a working Dockerfile that installs gauntlt and the arachni attack tool (you can also get this and all other code examples at gauntlt/gauntlt-docker):
```dockerfile
FROM ubuntu:14.04
MAINTAINER email@example.com

# Install Ruby
RUN echo "deb http://ppa.launchpad.net/brightbox/ruby-ng/ubuntu trusty main" > /etc/apt/sources.list.d/ruby.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C3173AA6
RUN \
  apt-get update && \
  apt-get install -y build-essential \
                     ca-certificates \
                     curl \
                     wget \
                     zlib1g-dev \
                     libxml2-dev \
                     libxslt1-dev \
                     ruby2.0 \
                     ruby2.0-dev && \
  rm -rf /var/lib/apt/lists/*

# Install Gauntlt
RUN gem install gauntlt --no-rdoc --no-ri

# Install attack tools
RUN gem install arachni --no-rdoc --no-ri

ENTRYPOINT [ "/usr/local/bin/gauntlt" ]
```
Build that and tag it with `docker build -t gauntlt .`, or use the `build-gauntlt.sh` script in gauntlt/gauntlt-docker. This is a pretty standard Dockerfile, with the exception of the last line and the usage of ENTRYPOINT(1). One of the nice things about using ENTRYPOINT is that it passes any parameters or arguments into the container, so anything after the container’s name `gauntlt` gets handled inside the container by Gauntlt itself.
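This pass-through is the same behavior as bash’s `"$@"`. As a tiny local sketch (no docker required; `forward` is a hypothetical stand-in for the containerized gauntlt binary):

```bash
#!/usr/bin/env bash
# Sketch of ENTRYPOINT-style argument pass-through: everything after
# the wrapper's name is forwarded untouched. Printing one argument
# per line just makes the forwarding visible.
forward() {
  printf '%s\n' "$@"
}

forward ./examples/xss.attack --tags @slow
```

Each forwarded argument prints on its own line; the container’s ENTRYPOINT receives the argument vector the same way.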
I decided to create a simple binary stub that I could put in `/usr/local/bin` so that I could invoke the container from wherever. Yes, there is no error handling, and yes, since it is doing a volume mount, this is certainly a bad idea. But, hey!
Here is a simple bash stub that calls the container and passes Gauntlt arguments to it:

```bash
#!/usr/bin/env bash
# usage:
#   gauntlt-docker --help
#   gauntlt-docker ./path/to/attacks --tags @your_tag
docker run -t --rm=true -v "$(pwd)":/working -w /working gauntlt "$@"
```
Let’s run our new container using our stub and pass in an attack we want it to run. You could leave off the path to the `.attack` file, since Gauntlt searches all subdirectories for anything with that extension, but let’s go ahead and be explicit. Running

```bash
$ gauntlt-docker ./examples/xss.attack
```

generates this passing output:
```
@slow
Feature: Look for cross site scripting (xss) using arachni against scanme.nmap.org

  Scenario: Using arachni, look for cross site scripting and verify no issues are found # ./examples/xss.attack:4
    Given "arachni" is installed                             # gauntlt-1.0.10/lib/gauntlt/attack_adapters/arachni.rb:1
    And the following profile:                               # gauntlt-1.0.10/lib/gauntlt/attack_adapters/gauntlt.rb:9
      | name | value                  |
      | url  | http://scanme.nmap.org |
    When I launch an "arachni" attack with:                  # gauntlt-1.0.10/lib/gauntlt/attack_adapters/arachni.rb:5
      """
      arachni --checks=xss --scope-directory-depth-limit=1
      """
    Then the output should contain "0 issues were detected." # aruba-0.5.4/lib/aruba/cucumber.rb:131

1 scenario (1 passed)
4 steps (4 passed)
0m1.538s
```
Now you can take this container and put it into your software build pipeline. I run this in Jenkins and it works great. Once you get this running and have confidence in the testing, you can start adding additional Gauntlt attacks. Ping me if you need some help getting this running or if you have suggestions to make it better at: firstname.lastname@example.org.
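For concreteness, the Jenkins side can be as small as an “Execute shell” build step like the fragment below. The image tag and attack path are assumptions carried over from the examples above, and the build node needs docker installed; Jenkins fails the build on any non-zero exit, which matches Gauntlt’s pass/fail exit-code contract.

```bash
# Hypothetical Jenkins "Execute shell" build step (config fragment).
set -e
docker build -t gauntlt .
docker run -t --rm=true -v "$(pwd)":/working -w /working gauntlt ./examples/xss.attack
```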
If you have installed a ruby app, you know the process of managing dependencies–it works fine until it doesn’t. Using a docker container to keep all the dependencies in one spot makes a ton of sense. Not having to alter your flow and resorting to VMs makes even more sense. I hope that through this example of running Gauntlt in a container you can see how easy it is to abstract an app and run it almost like you would on the command line – keeping its dependencies separate from the code you’re building and other software on the box, but accessible enough to use as if it were just an installed piece of software itself.
1. Thanks to my buddy Marcus Barczak for the docker tip on entrypoint.
Turns out James (@wickett) is too shy to pimp his own stuff properly here on The Agile Admin, so I’ll do it!
As you may know James is one of the core guys behind the open source tool Gauntlt that helps you add security testing to your CI/CD pipeline. He just gave this presentation yesterday at Austin DevOps, and it was originally a workshop at SXSW Interactive, which is certainly the big leagues. It’s got a huge number of slides, but also has a lab where you can download Docker containers with Gauntlt and test apps installed and learn how to use it.
277 pages, 8 labs – set aside some time! Once you’re done you’re doing thorough security testing using a bunch of tools on every code deploy.
James, Karthik, and Ernest did a webcast of their DevOps State of the Union 2015 talk for the BrightTALK Cloud Summit. It went well! We had 187 attendees on the live feed. In this blog post we’ll add resources discussed during the talk, and we will seed the comments below with all the questions we received during the webcast and answer them here – you’re all welcome to join in the discussion.
The talk was intended to be an overview of DevOps, with a bunch of blurbs on current and developing trends in DevOps – we don’t go super deep into any one of them (this was only 40 minutes long!). If you didn’t understand something, we’ve added resource links (we got some questions like “what is a container” and “what is a 12-factor app”; we didn’t have time to go into those in great detail, so check some of the links below for more).
Tomorrow (Tuesday June 9), James, Karthik and I will be doing a DevOps State of the Union webcast live on BrightTALK, at 1 PM Central time. We’ll be taking questions and everything! You can watch it here: DevOps State of the Union 2015.
A hint on topics we might cover:
And more! Come and join us.
TL;DR – performance improvements and two huge announcements, Docker-based EC2 Container Service and cloud-CEP-like AWS Lambda.
I was in a meeting for the first 45 minutes but I hear I didn’t miss much. Happy customer use cases.
The first big theme of this morning’s keynote is “Containers” – often just shorthand for “Docker.” I went to a previous event here in town where even large enterprises and government – the State of Texas, Microsoft, Dell, Red Hat – were all freaking out about Docker. Docker is similar to VMware or cloud in that it is a new technology that requires new monitoring and management just for it. (Heck, Eric, the CopperEgg founder, is now running a startup around Docker container management, StackEngine.)
Next… Leapfrogging PaaS?
And this has been your cloud update! Also see Ben Kepes in Forbes for a similar summary.
The container engine is cool – it’ll certainly remove a lot of instance gerrymandering and instance reservation pain if nothing else. But Lambda is the potential disruptor here. It’s taking the idea of “bring your own algorithm” from MapReduce and saying “hmmm, you can probably replace your trivial web app just with this” – it’s halfway between a PaaS and a SaaS, none of the Beanstalk complexity, just “here, take this function and run it on stuff when it comes in.” If a library of common lambdas becomes available, so much computing work done for trivial purposes becomes obsoleted. Who hasn’t seen a Web service to “upload a file here, then zip it or something, then store it…”? OK, no servers needed any more. Very interesting.