Assigning Fault To Human Error Is A Human Error


We all know from DevOps blameless retrospective wisdom that there is no such thing as a single “root cause.”  One of the most common root causes people like to assign blame to is “human error”.  Not to mince words, this is usually political, buck-passing CYA of the highest order.

I just read a great article on the recent U.S. Navy ship collisions that I wanted to pass on.  If you have been keeping up with the news, there has been a rash of Navy ships colliding with other ships, causing fatalities. When you go Google it up, you see a whole bunch of “Navy attributes it to human error…”

But now go read this article, Something’s Wrong In The Surface Fleet And We’re Not Talking About It.  It’s written by Capt. Michael Junge, an experienced Naval officer. The TL;DR is that you can say “human error” all you want, fire someone, and call it case closed, but these accidents stem from systemic understaffing of Navy surface ships and massive shortfalls in training and maintenance – a leading indicator of even worse to come should an actual wartime deployment be necessary.

Even in engineering, we are tempted to push the problem down onto the person who made the mistake.  Fully engaging with the system that caused the need for the action that caused the mistake, the lack of validation that makes mistakes possible, and so on is hard thinkin’.  It is threatening when people point out flaws in processes and systems and code you had a hand in.  But the only way to actually improve your situation is to soberly assess what the actual contributors to issues are, and work towards fixing them.


LASCON 2017 Conference Notes

Well, last Thursday and Friday I went to LASCON, our local Austin application security convention! It started back in 2010; here are the videos from previous years (the 2017 talks were all recorded and should show up there sometime soon).  Some years I get a lot out of LASCON and some I don’t; this one was a good one and I took lots and lots of notes!  Here they are in mildly-edited format for your edification.  Here’s the full schedule; obviously I could only go to a subset of all the great content myself.  They pack about 500 people into the Norris Conference Center in Austin.

Day 1 Keynote

The opening keynote was Chris Nickerson, CEO of LARES, on pen testing inspired thoughts.  Things I took away from his talk:

  • We need more mentorships/internships to get the skills we need, assuming someone else is going to prep them for us (school?) is risible
  • Automate and simplify to scale and enable lower skill folks to do the job – if you need all security geniuses to do anything that’s your fault
  • There’s a lack of non-made-up measurements – most of the threat severities etc. are in the end pure judgment calls only loosely based on objective measures
  • Testing – how do we know it’s working?
  • How do all the tools fit together? Only ops knows…
  • Use an attack inventory and continually test your systems
  • Red team automation plus blue team analytics gives you telemetry
  • Awareness of ego

Security for DevOps


Then the first track talk I went to was on Security for DevOps, by Shannon Lietz, DevSecOps Leader at Intuit. She’s a leader in this space and I’ve seen her before at many DevOps conferences.

Interesting items from the talk:

  • Give security defects to your devs, but characterize adversary interest so they can prioritize.
  • Reduce waste in providing info to devs.
  • 70-80% of bad guys return in 7 days – but 20% wait 30d till your logs roll

She likes to use the killchain metaphor for intrusion and the MITRE severity definitions.

But convert those into “letter grades” for normal people to understand!  Learn development-ese to communicate with devs, don’t make them learn your lingo.
Read the Google BeyondCorp white papers for the newfangled security model:
1. Zoning and containment
2. Asset management
3. Authentication/authorization
4. Encryption

Vendors please get to one tool per phase, it’s just too much.

Other things to read up on…

Startup Security: Making Everyone Happy

By Mike McCabe and Brian Henderson of Stratum Security (stratumsecurity.com, github.com/stratumsecurity), this was a great talk that reminded me of Paul Hammond’s seminal Infrastructure for Startups talk from Velocity. So you are getting started and don’t have a lot of spare time or money – what is highest leverage to ensure product security?

They are building security SaaS products (sold one off already, now making XFIL) and doing security consulting. As they put it: if we get hacked, no one wants our product.

The usual startup challenges – small group of devs, short timelines, new tech, AWS, secrets.

Solutions:

  • Build security in and automate it
  • Make use of available tools, linters, SCA tools, fuzzing
  • Continuous testing
  • AWS hardening
  • Alerting
  • Not covering host security, office security, incident response here

They use AWS, codeship, docker (benefits – dev like in prod, run tools local, test local). JavaScript, golang, no more rust (too bleeding edge). Lack of security tooling for the new stuff.

Need to not slow down CI, so they want tooling that will advise and not block the build. The highest leverage areas are:

  • Linting – better than nothing. ESLint with detect-unsafe-regex and detect-child-process. Breaks build. High false positives, have to tweak your rules. Want a better FOSS tool.
  • Fuzzing – gofuzz based on AFL fuzz, sends random data at function, use on custom network protocols
  • Source code analysis – HP Gas
  • Automated dynamic testing – Burp/ZAP
  • Dependency checking. Dependencies should be somewhat researched – stats, sec issues (open/closed and how their process works)
  • Pull requests – let people learn from each other

Continuous integration – they use codeship pro and docker
Infrastructure is easy to own – many third party items, many services to secure

AWS Tips:

  • Separate environments into AWS accounts
  • Don’t use root creds ever
  • Alert on root access and failed logins with cloudwatch. [Ed. Or AlienVault!]
  • All users should use MFA
  • Rigorous password policy
  • Use groups and roles (not direct policy assignment to user)
  • Leverage policy conditions to limit console access to a single IP/range so you know you’re coming in via VPN
  • Bastion host – alert on access in Slack
  • Duo on SSH via PAM plugin
  • Must be on VPN
  • Use plenty of security groups
  • AWS alerting on failed logins, root account usage, send to slack (sketch below)
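
To make that alerting tip concrete, here’s a minimal boto3 sketch of one way to wire it up – assuming CloudTrail already delivers to a CloudWatch Logs group and an SNS topic (hooked to slack) already exists; the log group name and topic ARN below are made up:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count console logins by the root user in the CloudTrail log group.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # hypothetical name
    filterName="RootAccountUsage",
    filterPattern='{ $.userIdentity.type = "Root" && $.eventName = "ConsoleLogin" }',
    metricTransformations=[{
        "metricName": "RootAccountUsageCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm on any occurrence; the SNS topic fans out to slack/email/whatever.
cloudwatch.put_metric_alarm(
    AlarmName="root-account-usage",
    Namespace="Security",
    MetricName="RootAccountUsageCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # made up
)

The same pattern covers failed logins – just swap in a filter pattern matching ConsoleLogin failures.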

See also Ken Johnson’s AWS Survival Guide

Logging – centralize logs with splunk and the AWS splunk plugin (send both direct and to Cloudwatch for redundancy).

Building the infrastructure – use a curated base image, organize security groups, infra as code, manage secrets (with IAM when you can). Base image using packer. Strip down and then add splunk, cloudwatch, ossec, duo, etc. and public keys. All custom images build off base.

Security groups – consistent naming. Don’t forget to config the default sec group even if you don’t intend to use it.

Wish we had used Terraform or some other infrastructure as code setup.

Managing secrets – don’t put them in plain text in github, docker, ami, s3. Put them into KMS, Lambda, parameter store, vault. They do lambda + KMS + ECS. The Lambda pulls encrypted secrets out of s3, pushes out container tasks to ecs with secrets. See also “The Right Way To Manage Secrets With AWS” from the Segment blog about using the new Parameter Store for that.
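
For flavor, the Parameter Store approach is pleasantly small from the app’s side – a minimal boto3 sketch, with a hypothetical parameter name:

import boto3

# Read a SecureString secret at startup instead of baking it into the image.
ssm = boto3.client("ssm")
resp = ssm.get_parameter(Name="/myapp/prod/db_password", WithDecryption=True)
db_password = resp["Parameter"]["Value"]  # decrypted via KMS on the fly
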
Next steps:

  • more alerting esp. from the apps (failed logins, priv escalation)
  • terraform
  • custom sca (static analysis)
  • automate and scale fuzzing maybe with spot instances

Security is hard but doesn’t have to be expensive – use what’s available, start from least privilege, iterate and review!

Serverless Security

2017-10-26 13.54.30

By fellow Agile Admin, James Wickett of Signal Sciences.  Part one is introducing serverless and why it’s good, and then it segues to securing serverless apps halfway in.

Serverless enables functions as a service with less messing with infrastructure.

What is serverless? Adrian Cockcroft – “if your PaaS can start instances in 20ms that run for half a second, it’s serverless.” AWS Lambda start time is 343 ms to start and 84 ms on subsequent hits, not quite the 20ms Cockcroft touts but eh. Also read https://martinfowler.com/articles/serverless.html and then stop arguing about the name for God’s sake.  What’s wrong with you people.  [James is too polite to come out and say that last part but I’m not.]

Not good for large local disk space, long running jobs, big IO, super super latency sensitive. Serverless frameworks include serverless, apex, go sparta, kappa. A framework really helps. You get an elastic, fast API running at very low cost. But IAM is complicated.

So how to keep it secure?

  • Externalize stuff out of the app/infra levels – do TLS in API gateway not the app, routing in API gateway not the app.
  • There’s stack element proliferation – tends to be “lambda+s3+kinesis+auth0+s3+…”
  • Good talk on bad IAM roles – “Gone in 60 seconds: Intrusion and Exfiltration in Serverless Architectures” – https://www.youtube.com/watch?v=YZ058hmLuv0
  • good security pipeline hygiene
  • security testing in CI w/gauntlt
  • DoS challenges including attack detection…
  • github/wickett/lambhack is a vulnerable lambda+api gateway stack like webgoat. you can use it to poke around with command execution in lambda… including making a temp file that persists across invocations (see the sketch after this list)
  • need to monitor longer run times, higher error rate occurrences, data ingestion (size), log actions of lambdas
  • For defense: vandium (sqli wrapper), content security policies
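
On that lambhack point, the /tmp persistence is easy to demo yourself. Here’s a hypothetical Python handler (not lambhack’s actual code) showing state surviving on a warm container:

import os

COUNTER = "/tmp/invocations"

def handler(event, context):
    # Lambda reuses "warm" containers, so /tmp survives between invocations.
    n = 0
    if os.path.exists(COUNTER):
        with open(COUNTER) as f:
            n = int(f.read())
    n += 1
    with open(COUNTER, "w") as f:
        f.write(str(n))
    # On a warm container this number keeps climbing – anything an attacker
    # plants in /tmp can greet the next invocation.
    return {"invocations_on_this_container": n}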

And then I was drafted to be in the speed debates!  Less said about that the better, but I got some free gin out of it.

Architecting for Security in the Cloud

By Josh Sokol, Security Spanker for National Instruments! He did a great job at explaining the basics. I didn’t write it all down because as an 3l33t Cloud Guru a lot wasn’t new to me but it was very instructive in reminding me to go back to super basics when talking to people.  “Did you know you can use ssh with a public/private key and not just a password?” I had forgotten people don’t know that, but people don’t know that and it’s super important to teach those simple things!

  • Code in private GitHub repo
  • Automation tool to check updates and deploy
  • Use a bastion to ssh in
  • Good db passwords
  • Wrap everything in security groups
  • Use vpcs
  • Understand your attack surfaces – console, github, public ports
  • Analyze attack vectors from these (plus insiders)
  • Background checks for employees
  • Use IAM, MFA, password policies
  • Audit changes
  • The apps are the big one
  • Https, properly configured
  • Use an IPS/WAF
  • Keys not just passwords for SSH
  • Encrypt data before storing in db

Digital Security For Nonprofits


Dr. Kelley Misata was an MBA in marketing and then got cyber stalked.  This led to her getting an InfoSec Ph.D from Spaf at Purdue! Was communications director for Tor, now runs the org that manages Suricata.

Her thesis was on the gap of security in nonprofits, esp. violence victims, human trafficking. And in this talk, she shares her findings.

Non-profits are being targeted for the same reasons as for-profits, as well as for ideology, with int’l attackers. They take money and cards and everything like other companies.
63% of nonprofits suffered a data breach in a 2016 self-report survey.  Enterprises vet the heck out of their suppliers… but hand over data to nonprofits that may not have much infosec at all.

ISO 27000, COBIT 5… normal people don’t understand that crap. NIST guidance is more consumable – “watered down” to the infosec elite, but it maps back to the more complex guidelines.

She sent out surveys to 500 nonprofits expecting the normal rate of return but got 222 replies back… That’s an extremely high response rate, indicating a high level of interest.
Nonprofits tend to have folks with fewer tech skills, and they have more urgent needs than cyber security, like “this person needs a bed tonight.”  They also don’t speak techie language – when she sent out a followup, a common question was “What does ‘inventory’ mean?”

90% of nonprofits use Facebook and 53% use Twitter.  They tend to have old systems. Nonprofit environments are different because what they do is based on trust. They get physical security but don’t know tech.

They are not sure where to go for help, and don’t have much budget. Many just use PayPal, not a more general secure platform, for funds collection. And many outsource – “If we hand it off to someone it must be secure!”

The scary but true message for nonprofits is that it’s not if but when you will have a breach. Have a plan. Cybersecurity insurance passes the buck.

You can’t be effective if you can’t message effectively to your audience. She uses “tinkerer” not hacker for white hats, because you can complain all you want about “hacker not cracker blah blah” but sorry, Hollywood forms people’s views, and normal people don’t want a “hacker” touching their stuff period.

Even PGP encrypting emails, which is very high value for most nonprofits, is ridiculously complicated for norms.

What to do to improve security of nonprofits? Use an assessment tool in an engaging way. Help them prioritize.
She is starting a nonprofit, Sightline Security, for this purpose. Check it out! This was a great talk and inspires me to keep working to bring security to everyone, not just the elite/rich – we’re not really safe until all the services we use are secure.


Malware Clustering

By Srini (Srivathsan Srinivasagopalan), a data scientist from my team at AlienVault!

Clustering malware into groups helps you characterize how families of it work, both in general and as they develop over time.

To cluster, you need to know what behavior you want to cluster on, it’s too computationally challenging to tell the computers “You know… group this stuff similarly.”

You make signatures to match samples on that behavior. Analyzed malware (like by cuckoo) generally gives you static and dynamic sections of behavior you can use as inputs. There are various approaches, which he sums up.  If you’re not into math you should probably stop reading here so as to not hurt yourself.

To hash using shingling – concatenate a token sequence and hash it.
Jaccard similarity is accurate but computationally challenging at scale.
Min-hashing approximates Jaccard similarity with fixed-size signatures.
Locality sensitive hash based clustering.

Hybrid approach: corpus vectorization
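
To make those concrete, here’s a toy Python sketch of shingling, exact Jaccard, and the min-hash approximation – purely illustrative, not the actual AlienVault pipeline:

import hashlib

def shingles(tokens, k=2):
    # k-grams of a token sequence, e.g. API calls observed in a sandbox run
    return {" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def jaccard(a, b):
    # exact similarity: needs the full sets, expensive at corpus scale
    return len(a & b) / len(a | b)

def minhash(shingle_set, num_hashes=64):
    # fixed-size signature: minimum of a salted hash per slot
    return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingle_set)
            for seed in range(num_hashes)]

def minhash_similarity(sig_a, sig_b):
    # fraction of matching slots approximates the Jaccard similarity
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = shingles("CreateFile WriteFile RegSetValue CreateProcess".split())
b = shingles("CreateFile WriteFile RegSetValue CreateThread".split())
print(jaccard(a, b))                               # 0.5
print(minhash_similarity(minhash(a), minhash(b)))  # roughly 0.5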

Next… opcode clustering! Not covered here.

TL;DR, there’s a lot of data to be scienced around security data, and it takes time and experimentation to find algorithms that are useful.

Cloud Ops Master Class

By @mosburn and @nathanwallace
Trying to manage 80 teams and 20k instances in 1 account – eek!  They hit limits even AWS didn’t know about.
They split accounts and went to a bakery model. Workload isolation.
They wrote tooling to verify versions across accounts. It sucked.
Ride the rockets – leverage the speed of cloud services.
Change how the team works to scale – teach, don’t do, to avoid bottlenecking. App teams self-serve. The cloud team teaches.

Policies: Simple rules. Must vs should. Always exceptions.
The option requirement must be value in scope.
Learn by doing. Guardrails – detect and correct.
Change control boards are evil – use policy not approval.
Sharing is the devil.
Abstracting removes value – use tools natively.

  • Patterns at scale
  • Common language and models
  • Automate and repeat patterns
  • Avoid custom central services
  • Accelerate don’t constrain
  • Slice up example repos
  • Visibility
  • Audit trail
  • Git style diff of infra changes
  • Automate extremely – tickets and l1-2 go away
  • All ops automated, all alerts go to apps so things get fixed fast

He’s created Turbot to do software defined ops – https://turbot.com/features/

  • Cross account visibility
  • Make a thing in the console… then it applies all the policies. Use native tools, don’t wrap.
  • Use resource groups for rolling out policies
  • Keep execution mostly out of the loop


And that was my LASCON 2017! Always a good show, and it’s clear that the DevOps mentality is now the cutting edge in security.


Java Docker Pull Travails

Just had a problem that I thought I’d document the solution to for the world…

In our build pipeline at work, we use maven and the fabric8 docker-maven-plugin to manage our builds.  We love it, developers can just “mvn install” locally and then the Atlassian Bamboo build system just “mvn deploy”s in the exact same way.

Well, so we had some builds that suddenly weren’t able to pull the base images specified in our Dockerfiles down from Dockerhub, breaking the build with 500 error messages like:

[ERROR] DOCKER> Unable to pull 'library/debian:sid' from registry 'docker.io' : received unexpected HTTP status: 500 Server Error (Internal Server Error: 500) [received unexpected HTTP status: 500 Server Error (Internal Server Error: 500)]

But it worked fine on our local box. And it could pull our custom images from Artifactory fine. What’s the problem here?  Bamboo?  The plugin? Well, some helpful community folks helped home in on it: it turns out that some versions of Java 1.8 – 8u131 and prior, at least going back to 8u112 – have some problem (TLS? Root certs? Not really sure) that messes up pulling a docker.io container from inside Java during our docker build step.  My team’s microservices aren’t Java based so the Java version doesn’t come up much – but of course maven uses Java.

Upgrading the JDK version to 8u144 made the problem go away.  We actually have an up-to-date curated Java version we use in Bamboo for our Java builds, but folks doing Python builds were just using the default “JDK 1.8” that Atlassian is putting on their Bamboo build agent AMI, which is of course old and suffers from this issue.

 


Long live ChatOps, RIP AOL IM!

I grew up in Muscat, Oman, and it was an exciting time when we got Internet at home in 1996. By 1998, all of my friends who had Internet at home were first on ICQ and then on AOL IM. AOL IM was huge when I went to college in the early 2000’s and was the primary way to connect friends together to chat. Back then, it was rare to have chat rooms, and the rooms that existed were usually long-running things set up to talk about general topics.

The first time I saw value in a chat room in a professional setting was when I got invited to a Basecamp “deploy room” by fellow Agile Admin Peco (or was it Ernest?) at NI when our quarterly release cycle was going super poorly, and all of us (100 other people) were waiting around at hour #34 trying to figure out why some random enterprise application was holding up the rest of the release process. Post invitation to the room, I was able to look at the past messages between the ops team about application failures, and then realized pretty quickly that our databases weren’t actually responding like they should. It took all of 10 minutes to ask someone on the ops side with credentials to run a database query, and figure out that the db creds were all wrong. 2 hours later, the release was all done…

That moment made me realize that 1×1 chats were great, but having persistent chat rooms with teams of people added value to an organization.

Recently, a colleague asked me a simple question that made me reflect. He asked, “What’s the big deal about Slack?” At work, there’s been a big push to move towards Slack, when we’ve had 1×1 chat forever. Here are my 5 most compelling reasons for doing so:

1) Collaboration++: 15 years ago, software was simpler, and there was no cloud/microservices. You’d have 1 large binary to deploy for a platform, and typically a few folks who understood the overall workings of the platform. Today, with microservices, you require a bunch of applications to deploy, and each of these has specific owners who understand the specifics. Thus, you’re going to have to have conversations with multiple folks to figure out any issues. Having this in a room setting versus a 1×1 setting gets you to a resolution faster.

2) Chat metadata: Chat is less about words, and more about conversations that include images, links, slash commands, workflows etc. Chatops tools make pasting these much easier than before, and looking at formatted code in Slack is so much easier to read than looking at the same in pidgin.

3) Chat History: Chat apps now give you history – even from when you were not online or in the chat room. This is valuable from the perspective that you can see everything from when you weren’t around, and don’t have to ask someone to keep repeating the problem over and over again. You can just scroll up, read the context, and be ready to help if you can. This is my one knock against IRC (or at least the implementation of IRC at a company I worked at); it was nice to have everyone in a spot, but it only worked when we were VPN’ed in, and had no history.

4) Pipelining with chatbots: Continuous Integration/Delivery is all the rage these days! Having a chat system that allows for your devops systems to push data is a primary requirement in order to build a pipeline of this sort. Responses to broken builds, tests, alerts are quicker when the data associated with these are transmitted to a chatroom that you’re looking at, than having to look at Jenkins all the time. Chatbots are invaluable in this scenario, and help you with information flow.
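
As a trivial example of what “push data to chat” looks like in practice, here’s a sketch of a CI step posting a build result to a Slack incoming webhook (the URL and message are placeholders):

import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text):
    # Slack incoming webhooks take a simple JSON payload.
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

notify(":red_circle: build failed on main – 3 tests failed, 214 passed")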

5) The new normal: A new generation of engineers already do this. It’s already part of the culture for the next generation of engineers who work on open source (for example, kubernetes slack) and there’s even chatter about slack at Universities now. The world is evolving towards broader conversation, and not having chatops tools will hurt your company in terms of hiring and retention.

 

Agree/Disagree, or have a different perspective? Let me know by commenting below!


Docker 101

Working at Stackengine, and now at Oracle, I’ve been working in the Docker ecosystem for the last 5 years!

While containerization has taken the IT and devops world by storm, a lot of larger enterprises might still be on the outside looking in. If you find yourself in that boat, you’re in luck!

Here’s a quick video on getting your very first Docker container running on your Mac in under 5 minutes.

Also, I had the pleasure of traveling back to my childhood hometown of Bengaluru and presenting a workshop at Code Conf this year. I’ll create a separate post about my travels, but I got to present a workshop lab that is an Introduction to Containers. This lab is a perfect follow on to the video above, and will help you get started on your Docker journey! Let me know if you have questions.

If you’re more of a product manager, or just looking for why you’d want to use Docker and understand its use cases, you can take a look at this presentation I published on Why to docker?, as shown below.

Questions, comments, or concerns? Hit us up by leaving a comment below…


DevOps Foundations: Lean and Agile

Well you’re in for a treat – we’re getting all of the Agile Admins in on making DevOps courses, and Karthik and I did a course that’s just released today – DevOps Foundations: Lean and Agile.

It’s available both on LinkedIn Learning:
https://www.linkedin.com/learning/devops-foundations-lean-and-agile

and on Lynda.com:
https://www.lynda.com/JIRA-tutorials/DevOps-Foundations-Lean-Agile/622078-2.html

After James and I did DevOps Foundations, the “101” course, we were focused on building out courses for the three major practice areas of DevOps – Continuous Deployment, Infrastructure Automation, and Site Reliability Engineering (in progress now). But our lynda.com content manager said there was interest in us also expanding on the use of Agile and Lean especially as it relates to DevOps.

Karthik is our agile admin Agile expert; he’s presented at several Agile conferences and the like, so he and I decided to take it on.  But how would we bring a DevOps specific take to it?  We started outlining a course and realized it could turn into a giant boring encyclopedia of every Lean and Agile term ever. Most of what we have to add isn’t reading definitions, it’s sharing our experiences actually doing this (my Scrum for Operations series on this blog is perennially popular).

So we decided to take a tip from both Eliyahu Goldratt’s The Goal and Gene Kim’s The Phoenix Project by framing the course as a fictional story!  By stitching together a narrative of a Lean, Agile, DevOps transformation of a hypothetical company out of our real world stories from a variety of implementations, we figured we could explain the concepts in context and make them more interesting.  Let us know what you think!

Lynda Course Description:

By applying lean and agile principles, engineering teams can deliver better systems and better business outcomes—both of which are crucial to the success of DevOps. In this course, instructors Ernest Mueller and Karthik Gaekwad discuss the theories, techniques, and benefits of agile and lean. Learn how they can be applied to operations teams to create a more effective flow from development into operations and accelerate your path of “concept to cash.” In addition to key concepts, you can hear in-the-trenches examples of implementing lean and agile in real-world software organizations.

Topics Include:
What is agile?
What is lean?
Measuring success
Learning and adapting
Building a culture of metrics
Continuous learning
Advanced concepts

Duration:
1h 26m


HashiConf 2017

As part of our embarrassment of conference riches here in Austin this year, I just went to HashiConf 2017 last week (Sept. 19-20).  HashiConf is the company conference for HashiCorp, the guiding hand behind a whole set of cool open source projects used by many newfangled technorati. We use many of these at AlienVault and so I went to see what’s hot and new!  If you’re not familiar, here’s the open source tools Hashi runs:

  • Vagrant – Run multi-server development environments on your laptop. We’ve mostly moved on to using Docker Compose for this but if you’re not using docker yet it’s mandatory!
  • Packer – A utility for building VM images, including AWS AMIs, Azure, etc.  At AlienVault we use this extensively to bake images for virtual appliances.
  • Terraform – An infrastructure as code tool, like AWS CloudFormation but cross-cloud. We use this on one of our products; I personally don’t have a lot of experience with it though.
  • Consul – A distributed configuration store and service discovery tool.  I did a consul setup for one of our products.
  • Vault – a secret store – it stores credentials encrypted but can also dynamically provision them. I am very interested in us starting to use this.
  • Nomad – a cluster scheduler – does more than Amazon ECS but less than Kubernetes.

I wanted to go to the Vault training but it was sold out by the time I managed to poke the sullen beast sufficiently to get registered.

The good news is that all of the sessions will allegedly have video posted publicly at some point!  So you won’t have to rely on my notes below, which is good.  Here’s the conference schedule.  All the session videos are now available as a YouTube playlist.

If I were some kind of paid blogger or this was some ad-driven site I’d say “We’re done with part one, come back next post for more!”  But I’m not, so read on to get all my debrief from the conference. If it’s too long, you’re too old!!!

Day One

The Day 1 keynote was packed with info.  It started with Mitchell Hashimoto (the Hashicorp founder, as you might suspect from the name) talking about the mad growth they’ve seen – in 5 years since founding they’re up to 130 employees, conducted 150 releases across their 6 products last year, and had 22 million downloads of these tools in the last year (1.5M in the last month).  Makes sense to me; most people I know use at least one Hashi product even if it’s just vagrant or packer.

They have Terraform, Nomad, Vault, and Consul Enterprise (hosted) offerings.

Let me take an aside to say – while it’s admirable they didn’t spend most of the conference pushing their paid products (I’m looking at you Dockercon), there was so little information on them that it was confusing.  Back a year ago when I started in on a Consul implementation I poked around their site trying to figure out their hosted offering (called Atlas at the time) and was basically thwarted. And it’s not much better now.  What do these give you?  What do they cost?  I had a *lot* of hallway conversations with people who similarly had tried to look into them and been rebuffed.  “It’s like buying a Ferrari,” said one person who I’ll leave anonymous. “There’s no list price, calling them just starts some conversation about whether you should really be a customer of theirs or not.”  Sucks, I like using hosted services rather than rolling my own if it at all makes financial sense, but who has time for that?

Anyway, so they started working their way down the products to make announcements. They’re now marketing these as “The Hashistack!”

Terraform

It’s going gangbusters and they added lots of features including a UI console, imports, external and local variables, stability, remote backends, and more.  I can testify to this; before remote backends you had to keep state in a file – that sucked.  (For those of you familiar with AWS and not Terraform, let me explain – CloudFormation has two parts to it: the actual templates and then the storage of system state.  The AWS fabric itself keeps the state for you and you get it via API.  Since Terraform is cloud agnostic, the tool needs to store state itself; instead of starting with a database or whatnot they just started with a file and have expanded from there.  Now you can do S3, consul, various other stuff.)

Their big announcement is the Terraform Module Registry.  I like to call it the TerrorHub or even the TerrorDome (with apologies to Public Enemy). Basically, you know, where people share stuff in the community like dockerhub.

I find this super interesting.  Done right, repos like this are a huge force multiplier for modern tools.  On the Open Threat Exchange, I’ve been happy just to use AWS CloudFormation.  We’re not interested in cross-cloud at all, and we use changesets and exports and other modern CF things, so going to Terraform wouldn’t really get us much – maybe some extra modularity, but we do continuously deployed microservices and things are factored out where we don’t need a giant ass template anyway. But AWS does a crappy job of providing and enabling the community to provide decent CloudFormation templates. They try – but they’re hard to find in the giant rat’s nest of AWS docs, and they are generally somewhat problematic (e.g. the quickstart Mongo CF they provide has a node hardcoded to be the primary and the others to be secondaries – but mongo works by consensus elections of the primary, man!!!). Now, it’s possible to bork this up; the Chef community had fragmentation around this till they finally put together the Chef Supermarket in 2015. We use Puppet at AV currently, and at the time we selected it the deciding factor was easier access to high quality modules for re-use.

They had Corey Sanders from Microsoft show off their Azure console that now has an embedded “Azure Cloud Shell” (just a container they expose via browser) and now it’s coming pre-loaded with Terraform!  Azure’s been Johnny on the spot with docker and Linux and Hashi stuff over the last few years.

Terraform Enterprise apparently has a lovely visual viewer for workspaces and is API everything and will have general signups open by EOY.

Vault

In an automated environment, “don’t keep your passwords in source” is really kind of a laughable lie.  I mean, you can not hardcode them and insert them from something else, something properly also kept in source control…  Hence Vault.  Vault is an internal project they open sourced 2.5 years ago.  Network configurations have become increasingly complex – hey did you know network perimeters aren’t secure any more?
Secret management, encryption as a service, privileged access management are all critical needs, Vault has added on bunches of functionality to them – and its secure plugins open it up to you!

Dan McTeer from Adobe came out to describe how his team makes self service security solutions as an internal platform (using Vault amongst other things) so the many, many other Adobe ops teams don’t have to waste time reinventing the wheel.

Vault 0.8.3 has tight and easy kubernetes integration. Kubernetes has a secret store but… Not a good one. (And before any k8s people get their shorts in a wad, the “take it all or leave it all” attitude from some k8s people is why some folks are sticking with more composable solutions. Don’t say your components are pluggable out one side of your mouth and then give people flak for doing it out the other.)

Consul

Consul can do so many things it’s hard to talk about sometimes. “Service discovery and distributed config store?” Hard for novices to get their heads around. There’s a session I’ll describe later that does a great job of demystifying it.  For now, note that they added dead server cleanup and other new functionality.

Since Consul is built on very academic stuff like SWIM and RAFT, they’ve started a research arm and have published their first paper on Lifeguard: SWIM-ming with Situational Awareness which reduces gossip protocol false positives by 98%.

And, they have just gone 1.0 with Consul!  So it’s battle tested, mother approved.

Nomad

Nomad’s their batch and service scheduler.  I have to admit, I went into the conference with a “bah who cares” attitude about Nomad, but afterwards I think it has its points.

Previous reviews I read compare it unfavorably against Kubernetes and Swarm. To be fair they are deliberately smaller and more composable, which is why we are still on ECS and haven’t taken the big ol’ step to Kubernetes ourselves yet. (Kubernetes: The OpenStack of 2017).

Anyway, new stuff.

  • Everything’s API and CLI – so now they have a UI integrated into the binary, like Consul!
  • Role based ACL policies driven by ACL tokens. “IAM for Nomad.”
  • Citadel passed 1M containers under Nomad
  • Nomad Enterprise has native namespacing for larger shared environments.

The Big Secret Announcement

Mitchell gives us a big ol’ windup. In terms of total automation we went from VMs and config management, to cloud and IaC, to containers and microservices, to schedulers… What’s next?
How do you enforce rules? Forbid changing config outside work hours (consul)… Ensure all services have health checks (nomad)… Ensure all TLS certs are 2048 bits (vault)… Ensure all AWS instances are tagged (terraform)…
Announcing Sentinel – a policy as code engine suitable for use in a continuous delivery pipeline.

  • Define and version your compliance rules, test and automate them.
  • Language: Easy to learn and use, mostly one liners, easy logic.
  • Can do active enforcement (block) and passive (check).
  • Levels of fail – advisory, soft, hard.
  • Workflow (“simulator”) – can mock and test locally.
  • Plugins as imports.
  • Terraform Enterprise uses this, so like you can make rules before the plan and apply to validate changes
  • Consul Enterprise, use during KV modifies or service registrations
  • Vault Enterprise, role and endpoint governing policies
  • Nomad Enterprise, run on job create/update introspecting on their details (e.g. artifacts only come from this repo)

It wasn’t immediately clear if this only worked with the Enterprise offerings or not, but it appears (after talking to other confused folks over the 2 days) that the answer is yes.  Well, that sucks, as it limits adoption to the 1% who care of the 1% who have managed to get any Enterprise offerings.   Oh well.

I’ve been asked if this is like Chef’s InSpec.  Kinda, but InSpec is for compliance rules “inside the box” and Sentinel is for rules about the system, just like Chef is to Cloudformation. So IMO they’d be super complementary.

Finally, Dave McJannet, the CEO, came on to talk about the Hashicorp Partner Network for resellers and integrators.

Guiding principles:
  • Workflows not technologies
  • Automation through codification
  • Open and extensible

All right, that was the keynote!  Don’t worry, I didn’t take that many notes from other things.

Session One – Sentinel

Having just heard about Sentinel, and not being clear yet I couldn’t use it if not on all Enterprise offerings, I skedaddled over to a session on it. Again, these will eventually be online so I’m not trying to duplicate whole talks and demos in text, this is to give you some flavor and opinions.

Defining, communicating, implementing, and auditing policy gets complex with scale. And it then becomes a huge source of friction on real operational work, despite attempts to document it. So just like we’ve fixed similar problems, let’s do it as code.

Then he says all the things Mitchell did in the keynote, but slower.

  • Example policy: Terraform can’t execute when consul healthchecks are failing
  • Yay golang!
  • Made their own simplistic DSL.
  • We’re 15 minutes into a 40 minute slot. I kinda want to switch to the terraform session, but I know from the attendance here that it’s standing room only over there.

import “time” <- you can import libs it knows how to get info from (sockaddr, tfplan, job)
main = rule { time.pst.hour is not 3 } <- functioney syntax, rules are the main thing, plain english “or”s and stuff
if an element returns undefined that’s ok
variables, dynamic typing

types of policy – advisory (just tells ya), soft (can be overridden), hard (no screw you)

writing and testing policies, you can use their simulator, e.g. “sentinel test”
Demo!
sentinel apply <file>
make a test folder (test/<policy>/<test>.json)
config, mock, say what rule you want to test, then it’ll run all the tests with “sentinel test <policy>”

It integrates with nomad enterprise, and he shows how you accidentally change a deployment count too low and it fails the rule and doesn’t happen.

Session Two – Vault

Liz Rice of Aqua (@aquasecteam) has a product written around Vault.

How to handle secret attributes and lifecycle in docker? Passing secrets into containers goes from

Bad:

  • in source code
  • in image/dockerfile

To less bad but still bad:

  • Env vars – can exec in and see it, can docker inspect it and see it, can cat /proc/<pid>/environ, leak into logs…
  • mount volume with directory with secrets – tmpfs is in memory. can still get it via docker exec and /proc.

Docker orchestrator support for secrets:

  • Nomad + Vault – secrets passed as files, tasks get tokens to retrieve values
  • Docker – swarm has service support but not for pocs. rotation needs restart though, and secrets go into the raft log encrypted -but the key is right there unless you lock your swarm.
  • kubernetes – in a pod yaml, namespaced, as vol or env var, can turn on RBAC with --authorization-mode RBAC. Stored in etcd and you have to make sure it’s encrypted.

A pluggable docker secret backend is in progress.
In kubernetes – use kubernetes-vault
For others – Aqua Security secrets management!
With aqua you can’t get the secret via inspect or /proc, and it has audit logs n stuff.
tiny.cc/secrets

Session Three – Enterprise Security with Hashistack – Palantir

Ooo this one was good.

The Hashistack

So you need to:

  • Secure your infrastructure
  • Secure your configuration
  • Make the secure thing the easy thing to do

It gets pretty complicated fast.

There are 6 pillars of security in your infrastructure:

  1. encryption
  2. access segmentation
  3. patching
  4. centralized logging
  5. mfa
  6. defensive backups

He will skip talking about #1 and #2 because they are obvious and boring.

Patching

MalwareBytes shows: It now takes 4 days on average to weaponize new exploits. Patching needs to be in phases, alert on failures, roll back, run daily
They use immutable AMIs with packer/terraform. They burn a common AMI with vault/consul/nomad/filebeat and just turn on what’s needed on a given server. Versions are kept in a variable.json file.
Their terraform lays out 3 ASGs for nomad/consul, vault, and then nomad workers.

How to bounce a hashistack:

  • delete node/add node for consul/nomad
  • like that with stepdown for vault
  • nomad workers – add new ones, drain old ones
  • Ain’t nobody got time for that.

They wrote “bouncer” to do that easily and automatically, open sourced at github/palantir/bouncer. So you have it rebuild and roll in new versions daily.  Ta da!

Logs

You need them to do incident response. You want <5 min latency, many formats, long retention, source identified – and opt-out not opt-in (in other words you get all the damn logs and not just the 3 you know about).

They started with workers shipping logs to a SIEM. Then they shipped to logstash which sends to multiple locations. They have a custom rsyslog template that puts everything into json, because logging in json is always better.  [If you don’t believe that you will be tied to a chair and Charity Majors will bludgeon you with a whiskey bottle until you are reeducated. -Ed.]
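
If you’ve never done it, structured logging takes about this much code – a toy Python formatter emitting one JSON object per line (illustrative; their actual setup is an rsyslog template):

import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # One JSON object per line: trivial for log shippers and SIEMs to parse.
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.info("db creds rotated")  # -> {"ts": "...", "level": "INFO", ...}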

Then they go journald to rsyslog for nomad etc. For other ones, filebeat watches. Telemetry should monitor high risk files and get insights into running procs, kernel versions, etc.

MFA

MFA on everything?  Sure! But… It’s bullshit. The process of getting logged in and the number of logins and MFAs you have to do to go from “sitting down at your desk” to “logged into the necessary box” is ridiculous.


Need to simplify finding hosts, ssh’ing.
They wrote a go wrapper around ssh, aws, and vault commands called vault-ssh-helper.  They also use a duo pam module and a yubikey (way faster than phone mfa).
From the laptop, get a vault token, which is then used to look up aws creds, then used to look up ssh; one MFA later you’re in.  ssh/mfa becomes the easy path.

Defensive Backups

Use a separate backup account where you are not the admin. Back up storage by default, not opt in.  You need those backups available for testing too.
So, RDS. We have a job that makes a snapshot, shared with a second account. A lambda in the other account reads it, makes a copy, and shares it back.
It’s run from a nomad batch job, assumes all RDS and S3 needs to be backed up.
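
The moving parts of that snapshot dance look roughly like this in boto3 – a sketch, not Palantir’s actual job; the identifiers and account ID are made up:

import boto3

rds = boto3.client("rds")
BACKUP_ACCOUNT = "210987654321"  # hypothetical backup account ID

# 1. Snapshot the database in the primary account and wait for it to finish.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-db",
    DBSnapshotIdentifier="prod-db-daily",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="prod-db-daily")

# 2. Share it with the backup account. A Lambda over there copies it (so the
#    primary account can't delete the copy) and can share the copy back for
#    restore testing.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="prod-db-daily",
    AttributeName="restore",
    ValuesToAdd=[BACKUP_ACCOUNT],
)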

Securing nomad jobs and vault policies

First, gate on checkin, and second, stop bad code.
gpg sign commits – sign with yubikey
or, duobot (https://github.com/palantir/duo-bot) calls the duo api and sends you a push to your mobile.
They built some bots to automate all this: approver, duo-bot (mfa based deploy) and bulldozer (auto-merge if status green) – on their github
Unit tests for jobs – naming schemes, folder structure, health checks
CI converges to what’s in the repo hourly

Experimenting in prod with immutable infrastructure – Tim Perrett

Slides here (slideshare login required): https://www.slideshare.net/timperrett/online-experimentation-with-immutable-infrastructure

Converged infrastructure – you know, scheduling workloads.
New table stakes – fast, observable, self service, auto cleanup.
Testing in prod is a reality.  Here, he avoids using any of the common “test in prod” memes, for which I salute him.
Emergent behaviors of large systems – Conway’s law and Hyrum’s law – and microservice complexity mean that local testing is still a fable. Scaling peaks, human error can’t be locally simulated.

On their site, user calls get segmented at the edge (opentrace, zipkin), routed to the right backend, and publish telemetry for analysis. A front end envoy proxy splits to proxies that go to the edges.

Feature flags are inferior!
Data plane for Routing – envoy, linkerd, nginx+. Integration with the control plane is important.

Options: Envoy in container? Sidecar as proxy? Host based envoy?

Control plane – Istio or something…  I lost some of it here.

Instrument with segment identifiers, make apps experiment aware, interesting data is usually wrong.

That was all the notes I took from Day 1.  I also spent time talking with people I knew, some vendors (Amazon, Hashi, Datadog, a new APM vendor called Instana…)

Day Two

Well, the scheduled keynoter dropped, so we got a quick sub:

The Ecological Impact Of Compute – Seth Vargo

Wholesale colos are hideously less energy efficient – once adjusted by volume, they use 49% vs cloud’s 5% of the energy picture.

Due to overprovisioning, invalid IT procurement focus (bulk), unused “zombie” servers, lack of standard utilization metric
IT managers can’t identify owners for 15-30% of the servers but are reluctant to decomm them!
Electric costs are paid by someone else.
Priority of efficiency low compared to deploy, avail, security, reliability
Schedulers help with this.  And use the cloud you poor benighted momos.  (I added that last part)

Keynote by Kelsey Hightower! On Hashinetes!

Kelsey is obviously huge into Kubernetes but he loves the Hashistack too.  And sometimes he uses both nomad and kubernetes at the same time?  CRAZY EH???

He presents some info from circleci on why they use kubernetes and nomad both (kubernetes good for containers, nomad does non-containers)

Demo time!

  • dynamic credential provisioning via vault – vault makes a mysql username, the app uses it, then vault deletes it! (see the sketch after this list)
  • he tells his android phone “create nomad” and it deploys it in k8s
  • make your nomad worker pool in the cloud, not on K8S
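
That first demo, seen from the app side, goes roughly like this with the hvac Python client – a sketch; the Vault address, token, and role name “readonly” are all assumptions:

import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Vault's database secrets engine mints a brand-new MySQL user on demand...
creds = client.secrets.database.generate_credentials(name="readonly")
username = creds["data"]["username"]
password = creds["data"]["password"]

# ...and revokes it automatically when the lease expires, so there is no
# long-lived password sitting around to leak.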

Elasticsearch as a Service Using the Hashistack by Yorick Gersie

eBay Classifieds runs a buttload of single-tenant Elasticsearches.  So do we, which made me very interested in his talk.

They spin it up with terraform, nomad, etc.
challenges
– discover master nodes
– direct traffic to right cluster
– port conflicts
– data persistency
– control resources
– deal with hardware outages automatically
– separate auth
– enforce tls
– enforce allocation placement

Very quick notes, really need the video or slides:

  • Use LB, fabio, consul, nomad, docker
  • consul based discovery plugin for elasticsearch
  • nomad 0.5 has “sticky” ephemeral disks
  • use saltstack, trying to reduce amount of CM. just set up nomad, consul, fabio;
  • render job files per cluster; deploys/renews ssl certs
  • immutable infra still hard/slow to deal with

Jeff Mitchell – Vault and Sentinel

This was a bait and switch from “Vault at Scale,” which made me sad.

How Sentinel works with Vault.

ACL paths are put into a policy
A policy is attached to a token
at request, policies are combined into an ACL used to check access

ACLs are HCL – JSON but not very expressive.
ACLs are grants/restrictions on paths… which is restrictive

Sentinel extends ACLs with role governing policies and endpoint governing policies
Don’t use root tokens

Building a Privileged Access System In Vault – Lee Briggs – Apptio (jaxxstorm on github)

accessing dbs – root password, app passwords. tracking, sharing, rotating and distribution…
vault can make dynamic mysql creds!
they already had consul and use puppet, so used jsok/puppet-vault and used consul-HA backend
so they deployed vault onto the consul servers
initializing vault – GPG support built in and then init vaults in each DC with the API.
what about vault unsealing?
jaxxstorm/unseal
add your vault servers to a config file, add your encrypted unseal key, it prompts for your GPG keyring password and unseals all the vaults
so unseal every morning

to config all the DCs
UKHomeOffice/vaultctl, also new terraform vault provider

for mysql config, they run puppet to install db and vault user and roles, then API call to the region’s vault to add as a backend.

to make logins easy, ldap auth with policies… but manual. apptio/breakglass like vault ssh in that it gets ad password and then does all the stuff to get to mysql, ssh, docker

if you’re using consul as a backend then turn on acls and block ports 8500/1
consul snapshot to take backups; test weekly by restoring to a vault on a different port, connecting to the consul, unsealing and verifying

lessons
– pick 1 thing and vault it
– consul+vault gives you HA for free
– automation has tradeoffs (secret zero problem)
– engineers love the http api

Consul Infrastructure Recipes – Preetha Athan

OK so this went by fast and I didn’t take extensive notes because I knew most of it – but if you are using consul it is absolutely mandatory that you seek this out.  It’ll explain in quick form a dozen different things to do with consul from using consul-template to the one I didn’t know about…

Consul supports prepared queries with complex logic, you can use them e.g. for automated failover as part of service discovery.
So reviews.service.consul -> reviews.query.consul, and it executes a query that can have failover servers and all.
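
Creating one of those is a single call to the local agent’s HTTP API – a Python sketch, with made-up datacenter names:

import requests

query = {
    "Name": "reviews",  # exposes reviews.query.consul in DNS
    "Service": {
        "Service": "reviews",
        # if there are no healthy local instances, try these DCs in order
        "Failover": {"Datacenters": ["dc2", "dc3"]},
    },
}
requests.post("http://127.0.0.1:8500/v1/query", json=query).raise_for_status()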

That’s my notes!  I also caught up with some folks from ScaleFT, who are implementing the Google BeyondCorp zero trust model as a product, we are very very interested in it.  Network perimeters and ssh keys?  Busted, like in 1990 busted.  Do something better.

Well I hope that’s enough for ya… I was definitely impressed, found a reason to maybe use terraform, found a reason to maybe use nomad, definitely want to use vault and more consul. I wish sentinel was usable by civilians.  Good crowd, good conference, make sure and watch the videos when they emerge!
