
The Right Way To Use Tagging In The Cloud

This came up today at work and I realized that over my now-decades of cloud engineering, I have developed a very specific way of using tags that sets both infra dev teams and SRE teams up for success, and I wanted to share it.

Who cares about tags? I do. They are the only persistent source of information you can trust (as much as you can trust anything in this fallen world) to communicate information about an infrastructure asset beyond what the cloud or virtualization fabric it’s running in knows. You may have a terraform state, you may have a database or etcd or something that knows what things are – but those systems can go down or get corrupted. Tags are the one thing that anyone who can see the infrastructure – via console or CLI or API or integrated tool – can always see. Server names are notoriously unreliable – ideally in a modern infrastructure you don’t reuse servers from one task to another or put multiple workloads on one, but that’s a historical practice that pops up all too often, and server names have character limits (even if they don’t, the management systems around them usually enforce one).

Many powerful tools like Datadog are driven largely by tags. It simplifies operations and prevents errors if, when you add a new production app server, it automatically gets pulled into the right monitoring dashboards and alerting schemes because it is tagged right.

I’ve run very large complex cloud environments using this scheme as the primary means to drive operations.

Top level tag rules:

  1. Tag everything. Tagging’s not just for servers. Every cloud element that can take a tag, tag it. Network, disk images, snapshots, lambdas, cloud services, weird little cloud widgets (“S3 VPC endpoint!”).
  2. Use uniform tags. It’s best to specify “all lower case, no spaces” and so on. If people decide to word a tag slightly differently in two places, the value is lost. This applies to both the key and the value, but especially the key – teach people that if you say “owner”, that means “owner”, not “Owner” or “owning party” or whatever else. (A small sketch of one way to normalize tags follows this list.)
  3. Don’t overtag with attributes you can easily see. Instance size, what AZ it’s in, and so on are already part of the cloud metadata, so it’s redundant to add tags for them.
  4. Use standard tags. This is what I’ll cover in the rest of this article.
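
To make rule 2 concrete, here’s a tiny Python sketch (my own throwaway helper, not any standard library) that you could run over a tag map before anything gets tagged, so “Owner” and “owner ” can’t sneak in as two different keys:

# Minimal sketch: normalize tag keys and values so "Owner", "owner ", and
# "OWNER" all collapse to the same thing before anything gets tagged.
def normalize_tags(tags):
    normalized = {}
    for key, value in tags.items():
        # lower case, no spaces, for both key and value
        normalized[key.strip().lower().replace(" ", "-")] = str(value).strip().lower().replace(" ", "-")
    return normalized

print(normalize_tags({"Owner ": "MyTeam@MyCompany.com", "Environment": "Prod"}))
# {'owner': 'myteam@mycompany.com', 'environment': 'prod'}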

At the risk of oversimplifying, you need two things out of your systems environment – compliance and management. And tags are a great way to get both.

Compliance

Attribution! Cost! Security! You need to know where infrastructure came from, who owns it, who’s paying for it, and if it’s even supposed to be there in the first place.

Who owns it?

Tag all cloud assets with an owner (an email address, or whatever is required to uniquely identify who owns an asset). It should be a team email for persistent assets; if it’s a personal email, then the assumption should be that if that person leaves the company, those assets get deleted (good for sandboxes etc.).

The amount of highly paid engineer time I’ve seen wasted over the last decade on people having to go out and do cattle calls of “Hey, who owns these… we need to turn some off for cost or patch them for security or explain them for compliance… No really, who owns these…” is shocking.

owner:myteam@mycompany.com
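
And as a sketch of what that cattle-call cleanup looks like when you automate it – assuming AWS, boto3, and the owner tag key above – here’s a sweep for instances with no owner at all:

import boto3

# Sketch: list EC2 instances in a region that are missing an "owner" tag,
# so you know who to chase (or what to schedule for deletion).
ec2 = boto3.client("ec2", region_name="us-east-1")

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "owner" not in tags:
                print(f'{instance["InstanceId"]} has no owner tag: {tags}')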

Who’s paying for it

This varies but it’s important. “Owner” might not be sufficient in an environment – often some kind of cost allocation code is required based on how your company does finances. Is it a centralized expense or does it get allocated to a client? Is it a production or development expense? Those are often handled differently from a finance perspective. At scale you may need a several-parter – in my current consulting job there’s a contract number but also a specific cost code inside that contract number that we need all expenses divvied up between.

billing:CUCT30001

Where did it come from

Traceability both “up” and “down” the chain. When you go look at a random cloud instance, even if you know who it belongs to you can’t tell how it got there. Was it created by Terraform? If so where’s the state file? Was it created via some other automation system you have? Github? Rundeck? Custom python app #25?

Some tools like CloudFormation do this automatically. Otherwise, consider adding a source tag or set of tags with sufficient information to trace the live system back to the automation. Developers love tagging git commits and branches with versions and JIRA tickets and release dates and such; the same concept applies here. Different things make sense depending on your tech stack – if you GitOps everything then the source might be a specific build, or you might want to say which s3 bucket your tfstate is in… As an example, I’m working with a system that is terraform-instantiated from a GitOps pipeline, so I’ve made a source tag that says github, then the repo name, then the action name. And for the tfstate, I have it saved in an s3 bucket named “mystatebucket.”

source:github/myapp/deploy-action
sourcestate:s3/mystatebucket
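
If your deploys run from GitHub Actions like mine, the workflow already knows the repo and action name via its standard environment variables, so the source tags can get stamped on at deploy time. A rough sketch (the instance ID is a placeholder, and GITHUB_REPOSITORY includes the org name, so adjust to taste):

import os
import boto3

# Sketch: build source/sourcestate tags from GitHub Actions' standard
# environment variables and apply them to a freshly created instance.
# GITHUB_REPOSITORY and GITHUB_WORKFLOW are set by Actions itself;
# the instance ID here is a made-up placeholder.
source = f'github/{os.environ["GITHUB_REPOSITORY"]}/{os.environ["GITHUB_WORKFLOW"]}'

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "source", "Value": source},
        {"Key": "sourcestate", "Value": "s3/mystatebucket"},
    ],
)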

When does it go

OK, I know the last two sound like the lyrics to “Cotton-Eyed Joe”, which is a bonus. But a major source of cost creep is infrastructure that was intended to be there for a short time – a demo, a dev cycle – that ends up just living forever. And sure, you can just send nag-o-grams to the owner list, but it’s better to tag systems with an expires tag in date format (ideally YYYY-MM-DD-HH-MM as God intended). “expires:never” is acceptable for production infrastructure, though I’ve even used dated expirations on autoscaling prod infrastructure to make sure systems get turned over and don’t live too long.

expires:2025-02-01-00-00
or
expires:never
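
Then a nightly job can do the nagging (or the terminating) for you. A rough boto3 sketch that just reports, assuming the YYYY-MM-DD-HH-MM format above:

from datetime import datetime
import boto3

# Sketch: find instances whose "expires" tag is in the past.
# expires:never is skipped; anything unparseable gets flagged for a human.
ec2 = boto3.client("ec2", region_name="us-east-1")
now = datetime.utcnow()

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag-key", "Values": ["expires"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance["Tags"]}
            if tags["expires"] == "never":
                continue
            try:
                expires = datetime.strptime(tags["expires"], "%Y-%m-%d-%H-%M")
            except ValueError:
                print(f'{instance["InstanceId"]}: unparseable expires tag {tags["expires"]!r}')
                continue
            if expires < now:
                print(f'{instance["InstanceId"]} expired {expires}, owner {tags.get("owner", "unknown")}')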

Management

Operations! Incidents! Cost and security again! Keep the entire operational cycle, including “during bad production incidents”, in mind when designing tags. People tear down stacks/clusters, or go into the console and “kill servers”, and accidentally leave other infrastructure – you need to be able to identify and clean up orphaned assets. Hackers get your AWS key and spin up a huge volume of bitcoin miners. Identifying and actioning on infrastructure accurately and efficiently is the goal.

As in any healthy system, the “compliance” tags above aren’t just useful to the beancounters, they’re helpful to you as a cloud engineer or SRE. But beyond that, you want a taxonomy of your systems to use to manage them by grouping operations, monitoring, and so on.

This scheme may differ based on your system’s needs, but I’ve found a general formula that fits in most cases I come across. Again, it assumes virtual systems where servers have one purpose – that’s modern best practice. “Sharing is the devil.”

EARFI

I like to pronounce this “errr-feee.” It’s a hierarchy to group your systems.

  • environment – What environment does this represent to you, e.g. dev, test, production, as this is usually the primary element of concern to an operator. “environment:uat” vs “environment:prod”.
  • application – What application or system is this hosting? The online banking app? The reporting system? The security monitoring server? The mobile game backend? GenAI training? “application:banking”.
  • role – What function does this specific server perform? Webserver, dbserver, appserver, kafka – systems in an identical role should have identical loadouts. “role:apiserver” vs “role:dbserver”. Keep in mind this is a hierarchy and you won’t have guaranteed uniqueness across it – for example, “application:banking,role:dbserver” may be quite different from “application:mobilegame,role:dbserver”, so you would usually never refer to just “role:dbserver.”
  • flavor – Optional, but useful in case you need to differentiate something special in your org that is a primary lever of operation (Windows vs Linux? CPU vs GPU nodes in the same k8s cluster? v1 vs v2?). I usually find there’s only one of these (besides of course region and things you shouldn’t tag because they are in other metadata). For our apiserver example, consider that maybe we have the same code running on all our api servers but via load balancer we send REST queries to one set and SOAP queries to another set for caching and performance reasons. “flavor:rest” vs “flavor:soap”.
  • instance – A unique identifier among identical boxes in a specific EARF set, most commonly just an integer. “instance:2”. You could use a GUID if you really need it but that’s a pain to type for an operator.

This then allows you to target specific groups of your infrastructure, down to a single element or up to entire products. (A code sketch after the examples below shows the mechanics.)

  • “Run this week’s security patches on all the environment:uat, application:banking, role:apiserver, flavor:rest servers. Once you verify, you can do the same on environment:prod.”
  • “The second of the three servers in that autoscaling group is locked up. Terminate environment:uat, application:banking, role:apiserver, flavor:rest, instance:2.”
  • “We seem to be having memory problems on the apiservers. Is it one or all of the boxes? Check the average of environment:prod, application:banking, role:apiserver, flavor:rest and then also show it broken down by instance tag. It’s high on just some of the servers but not all? Try flavor:rest vs flavor:soap to see if it’s dependent on that functionality. Is it load do you think? Compare to the aggregate of environment:uat to see if it’s the same in an idle system.”
  • “Set up an alert for any environment:prod server that goes down. And one for any environment:prod, application:banking, role:apiserver that throws 500 errors.”
  • “Security demands we check all our DB servers for a new vulnerability. Try sending this curl payload to all role:dbservers, doesn’t matter what application. They say it won’t hurt anything but do it to environment:uat before environment:prod for safety.”
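
Mechanically, every one of those boils down to “give me the infrastructure matching this tag set.” Here’s a hedged boto3 sketch of the first example, the patching one – whatever actually does the patching would consume the list:

import boto3

# Sketch: select the environment:uat, application:banking, role:apiserver,
# flavor:rest servers so a patch job (or dashboard, or alert) can target them.
ec2 = boto3.client("ec2", region_name="us-east-1")

filters = [
    {"Name": "tag:environment", "Values": ["uat"]},
    {"Name": "tag:application", "Values": ["banking"]},
    {"Name": "tag:role", "Values": ["apiserver"]},
    {"Name": "tag:flavor", "Values": ["rest"]},
]

instance_ids = []
for page in ec2.get_paginator("describe_instances").paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_ids.append(instance["InstanceId"])

print(f"Patching {len(instance_ids)} servers: {instance_ids}")
# Verify on uat, then flip the environment filter to "prod" and run again.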

So now a random new operator gets an alert about a system outage and logs into the AWS console, and instead of just “i-123456, started 2 days ago,” they see

owner:myteam@mycompany.com
billing:CUCT30001
source:github/myapp/deploy-action
sourcestate:s3/mystatebucket
expires:never
environment:prod
application:mobilegame
role:dbserver
flavor:read-only
instance:2


That operator now has a huge amount of information to contextualize their work, that at best they’d have to go look up in docs or systems and at worst they’d have to just start serially spamming. They know who owns it, what generated it, what it does, and they have hints at how important it is. (prod – probably important. A duplicate read secondary – could be worse.) And then runbooks can be very crisp about what to do in what situation by also using the tags. “If the server is environment:prod then you must initiate an incident <here>… If the server is a role:dbserver and a flavor:read-only it is OK to terminate it and bring up a new one, but then you have to go run runbook <X> and run job <y> to set it up as a read secondary…”

Feel free to let me know how you use tags and what you can’t live without!


Filed under Cloud, DevOps, Monitoring

HashiConf 2017

As part of our embarrassment of conference riches here in Austin this year, I just went to HashiConf 2017 last week (Sept. 19-20).  HashiConf is the company conference for HashiCorp, the guiding hand behind a whole set of cool open source projects used by many newfangled technorati. We use many of these at AlienVault and so I went to see what’s hot and new!  If you’re not familiar, here are the open source tools Hashi runs:

  • Vagrant – Run multi-server development environments on your laptop. We’ve mostly moved on to using Docker Compose for this but if you’re not using docker yet it’s mandatory!
  • Packer – A utility for building VM images, including AWS AMIs, Azure, etc.  At AlienVault we use this extensively to bake images for virtual appliances.
  • Terraform – An infrastructure as code tool, like AWS CloudFormation but cross-cloud. We use this on one of our products; I personally don’t have a lot of experience with it though.
  • Consul – A distributed configuration store and service discovery tool.  I did a consul setup for one of our products.
  • Vault – a secret store – it stores credentials encrypted but can also dynamically provision them. I am very interested in us starting to use this.
  • Nomad – a cluster scheduler – does more than Amazon ECS but less than Kubernetes.

I wanted to go to the Vault training but it was sold out by the time I managed to poke the sullen beast sufficiently to get registered.

The good news is that all of the sessions will allegedly have video posted publicly at some point!  So you won’t have to rely on my notes below, which is good.  Here’s the conference schedule.  All the session videos are now available as a YouTube playlist.

If I were some kind of paid blogger or this was some ad-driven site I’d say “We’re done with part one, come back next post for more!”  But I’m not, so read on to get all my debrief from the conference. If it’s too long, you’re too old!!!

Day One

The Day 1 keynote was packed with info.  It started with Mitchell Hashimoto (the Hashicorp founder, as you might suspect from the name) talking about the mad growth they’ve seen – in 5 years since founding they’re up to 130 employees, conducted 150 releases across their 6 products last year, and had 22 million downloads of these tools in the last year (1.5M in the last month).  Makes sense to me; most people I know use at least one Hashi product even if it’s just vagrant or packer.

They have Terraform, Nomad, Vault, and Consul Enterprise (hosted) offerings.

Let me take an aside to say – while it’s admirable they didn’t spend most of the conference pushing their paid products (I’m looking at you Dockercon), there was so little information on them that it was confusing.  Back a year ago when I started in on a Consul implementation I poked around their site trying to figure out their hosted offering (called Atlas at the time) and was basically thwarted. And it’s not much better now.  What do these give you?  What do they cost?  I had a *lot* of hallway conversations with people who similarly had tried to look into them and been rebuffed.  “It’s like buying a Ferrari,” said one person who I’ll leave anonymous. “There’s no list price, calling them just starts some conversation about whether you should really be a customer of theirs or not.”  Sucks, I like using hosted services rather than rolling my own if it at all makes financial sense, but who has time for that?

Anyway, so they started working their way down the products to make announcements. They’re now marketing these as “The Hashistack!”

Terraform

It’s going gangbusters and they added lots of features including a UI console, imports from external and local variables, stability, remote backends, and more.  I can testify to this: before remote backends you had to keep state in a local file – that sucked.  (For those of you familiar with AWS and not Terraform, let me explain – CloudFormation has two parts to it – the actual templates and then the storage of system state.  The AWS fabric itself keeps the state for you and you get it via API.  Since Terraform is cloud agnostic the tool needs to store state itself; and instead of starting with a database or whatnot they just started with a file and have expanded from there.  Now you can do S3, consul, various other stuff.)

Their big announcement is the Terraform Module Registry.  I like to call it the TerrorHub or even the TerrorDome (with apologies to Public Enemy). Basically, you know, where people share stuff in the community like dockerhub.

I find this super interesting.  Done right, repos like this are a huge force multiplier for modern tools.  On the Open Threat Exchange, I’ve been happy just to use AWS CloudFormation.  We’re not interested in cross-cloud at all, and we use changesets and exports and other modern CF things, so going to Terraform wouldn’t really get us much – maybe some extra modularity, but we do continuously deployed microservices and things are factored out where we don’t need a giant ass template anyway. But, AWS does a crappy job of providing and enabling the community to provide decent CloudFormation templates. They try – but they’re hard to find in the giant rat’s nest of AWS docs, and they are generally somewhat problematic (e.g. the quickstart Mongo CF they provide has a node hardcoded to be the primary and the others to be secondaries – but mongo works by consensus elections of the primary, man!!!). Now, it’s possible to bork this up; the Chef community had fragmentation around this till they finally put together the Chef Supermarket in 2015. We use Puppet at AV currently, and at the time we selected it the deciding factor was easier access to high quality modules for re-use.

They had Corey Sanders from Microsoft show off their Azure console that now has an embedded “Azure Cloud Shell” (just a container they expose via browser) and now it’s coming pre-loaded with Terraform!  Azure’s been Johnny on the spot with docker and Linux and Hashi stuff over the last few years.

Terraform Enterprise apparently has a lovely visual viewer for workspaces and is API everything and will have general signups open by EOY.

Vault

In an automated environment, “don’t keep your passwords in source” is really kind of a laughable lie.  I mean, you can not hardcode them and insert them from something else, something properly also kept in source control…  Hence Vault.  Vault is an internal project they open sourced 2.5 years ago.  Network configurations have become increasingly complex – hey did you know network perimeters aren’t secure any more?
Secret management, encryption as a service, and privileged access management are all critical needs; Vault has added bunches of functionality for them – and its secure plugins open it up to you!

Dan McTeer from Adobe came out to describe how his team makes self service security solutions as an internal platform (using Vault amongst other things) so the many, many other Adobe ops teams don’t have to waste time reinventing the wheel.

Vault 0.8.3 has tight and easy kubernetes integration. Kubernetes has a secret store but… Not a good one. (And before any k8s people get their shorts in a wad, the “take it all or leave it all” attitude from some k8s people is why some folks are sticking with more composable solutions. Don’t say your components are pluggable out one side of your mouth and then give people flak for doing it out the other.)

Consul

Consul can do so many things it’s hard to talk about sometimes. “Service discovery and distributed config store?” Hard for novices to get their heads around. There’s a session I’ll describe later that does a great job of demystifying it.  For now, note that they added dead server cleanup and other new functionality.

Since Consul is built on very academic stuff like SWIM and RAFT, they’ve started a research arm and have published their first paper on Lifeguard: SWIM-ming with Situational Awareness which reduces gossip protocol false positives by 98%.

And, they have just gone 1.0 with Consul!  So it’s battle tested, mother approved.

Nomad

Nomad’s their batch and service scheduler.  I have to admit, I went into the conference with a “bah who cares” attitude about Nomad, but afterwards I think it has its points.

Previous reviews I read compare it unfavorably against Kubernetes and Swarm. To be fair, it’s deliberately smaller and more composable, which is why we are still on ECS and haven’t taken the big ol’ step to Kubernetes ourselves yet. (Kubernetes: The OpenStack of 2017).

Anyway, new stuff.

  • Everything’s API and CLI – so now they have a UI integrated into the binary, like Consul!
  • Role based ACL policies driven by ACL tokens. “IAM for Nomad.”
  • Citadel passed 1M containers under Nomad
  • Nomad Enterprise has native namespacing for larger shared environments.

The Big Secret Announcement

Mitchell gives us a big ol’ windup. In terms of total automation we went from VMs and config management to cloud and IaC to containers and microservices to schedulers… What’s next?
How do you enforce rules? Forbid changing config outside work hours (consul)… Ensure all services have health checks (nomad)… Ensure all TLS certs are 2048 bits (vault)… Ensure all AWS instances are tagged (terraform)…
Announcing Sentinel – a policy as code engine suitable for use in a continuous delivery pipeline.

  • Define and version your compliance rules, test and automate them.
  • Language: Easy to learn and use, mostly one liners, easy logic.
  • Can do active enforcement (block) and passive (check).
  • Levels of fail – advisory, soft, hard.
  • Workflow (“simulator”) – can mock and test locally.
  • Plugins as imports.
  • Terraform Enterprise uses this, so you can make rules that run between the plan and the apply to validate changes
  • Consul Enterprise, use during KV modifies or service registrations
  • Vault Enterprise, role and endpoint governing policies
  • Nomad Enterprise, run on job create/update introspecting on their details (e.g. artifacts only come from this repo)

It wasn’t immediately clear if this only worked with the Enterprise offerings or not, but it appears (after talking to other confused folks over the 2 days) that the answer is yes.  Well, that sucks, as it limits adoption to the 1% who care of the 1% who have managed to get any Enterprise offerings.   Oh well.

I’ve been asked if this is like Chef’s InSpec.  Kinda, but InSpec is for compliance rules “inside the box” and Sentinel is for rules about the system, just like Chef is to Cloudformation. So IMO they’d be super complementary.

Finally, Dave McJannet, the CEO, came on to talk about the Hashicorp Partner Network for resellers and integrators.

Guiding principles:
  • Workflows not technologies
  • Automation through codification
  • Open and extensible

All right, that was the keynote!  Don’t worry, I didn’t take that many notes from other things.

Session One – Sentinel

Having just heard about Sentinel, and not yet being clear on whether I could use it without all the Enterprise offerings, I skedaddled over to a session on it. Again, these will eventually be online so I’m not trying to duplicate whole talks and demos in text; this is to give you some flavor and opinions.

Defining, communicating, implementing, and auditing policy gets complex with scale. And it then becomes a huge source of friction on real operational work, despite attempts to document it. So just like we’ve fixed similar problems, let’s do it as code.

Then he says all the things Mitchell did in the keynote, but slower.

  • Example policy: Terraform can’t execute when consul healthchecks are failing
  • Yay golang!
  • Made their own simplistic DSL.
  • We’re 15 minutes into a 40 minute slot. I kinda want to switch to the terraform session, but I know from the attendance here that it’s standing room only over there.

import "time" <- you can import libs it knows how to get info from (sockaddr, tfplan, job)
main = rule { time.pst.hour is not 3 } <- functioney syntax, rules are the main thing, plain english “or”s and stuff
if an element returns undefined that’s ok
variables, dynamic typing

types of policy – advisory (just tells ya), soft (can be overridden), hard (no screw you)

writing and testing policies, you can use their simulator, e.g. “sentinel test”
Demo!
sentinel apply <file>
make a test folder (test/<policy>/<test>.json)
config, mock, say what rule you want to test, then it’ll run all the tests with “sentinel test <policy>”

It integrates with Nomad Enterprise, and he shows how, if you accidentally change a deployment count too low, it fails the rule and the change doesn’t happen.

Session Two – Vault

Liz Rice of Aqua (@aquasecteam) has a product written around Vault.

How to handle secret attributes and lifecycle in docker? Passing secrets into containers goes from

Bad:

  • in source code
  • in image/dockerfile

To less bad but still bad:

  • Env vars – can exec in and see it, can docker inspect it and see it, can cat /proc/<pid>/environ, leak into logs…
  • mount a volume with a directory with secrets – tmpfs is in memory. can still get it via docker exec and /proc.

Docker orchestrator support for secrets:

  • Nomad + Vault – secrets passed as files, tasks get tokens to retrieve values
  • Docker – swarm has service support but not for pocs. rotation needs restart though, and secrets go into the raft log encrypted – but the key is right there unless you lock your swarm.
  • kubernetes – in a pod yaml, namespaced, as vol or env var, can turn on RBAC with --authorization-mode RBAC. Stored in etcd and you have to make sure it’s encrypted.

A Docker pluggable secret backend is in progress.
In kubernetes – use kubernetes-vault
For others – Aqua Security secrets management!
With aqua you can’t get the secret via inspect or /proc, and it has audit logs n stuff.
tiny.cc/secrets

Session Three – Enterprise Security with Hashistack – Palantir

Ooo this one was good.  Pic!

[Photo: The Hashistack]

So you need to:

  • Secure your infrastructure
  • Secure your configuration
  • Make the secure thing the easy thing to do

It gets pretty complicated fast.

There are 6 pillars of security in your infrastructure:

  1. encryption
  2. access segmentation
  3. patching
  4. centralized logging
  5. mfa
  6. defensive backups

He will skip talking about #1 and #2 because they are obvious and boring.

Patching

MalwareBytes shows: It now takes 4 days on average to weaponize new exploits. Patching needs to be in phases, alert on failures, roll back, run daily
They use immutable AMIs with packer/terraform. They burn a common AMI with vault/consul/nomad/filebeat and just turn on what’s needed on a given server. Versions are kept in a variable.json file.
Their terraform lays out 3 ASGs for nomad/consul, vault, and then nomad workers.

How to bounce a hashistack:

  • delete node/add node for consul/nomad
  • like that with stepdown for vault
  • nomad workers – add new ones, drain old ones
  • Ain’t nobody got time for that.

They wrote “bouncer” to do that easily and automatically, open sourced at github/palantir/bouncer. So you have it rebuild and roll in new versions daily.  Ta da!

Logs

You need them to do incident response. You want <5 min latency, many formats, long retention, source identified – and opt-out not opt-in (in other words you get all the damn logs and not just the 3 you know about).

They started with workers shipping logs to a SIEM. Then they shipped to logstash which sends to multiple locations. They have a custom rsyslog template that puts everything into json, because logging in json is always better.  [If you don’t believe that you will be tied to a chair and Charity Majors will bludgeon you with a whiskey bottle until you are reeducated. -Ed.]
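
To show what “everything into json” buys you at the application level – this is my own sketch, not Palantir’s rsyslog template, just the same principle of one JSON object per log line – a formatter like this means downstream shippers never have to guess at formats:

import json
import logging

# Sketch: emit every log record as a single JSON object so logstash (or
# whatever shipper) can parse it without per-format regexes.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.getLogger("myapp").info("backup job finished")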

Then they go journald to rsyslog for nomad etc. For other things, filebeat watches the files. Telemetry should monitor high risk files and get insights into running procs, kernel versions, etc.

MFA

MFA on everything?  Sure! But… It’s bullshit. The process of getting logged in and the number of logins and MFAs you have to do to go from “sitting down at your desk” to “logged into the necessary box” is ridiculous.

[Photo: MFA is Bullshit]

Need to simplify finding hosts, ssh’ing.
They wrote a go wrapper around sesh, aws, and vault commands called vault-ssh-helper.  They also use a duo pam module and a yubikey (way faster than phone mfa).
From the laptop, get a vault token, which is then used to look up aws creds, then used to set up ssh; one MFA later you’re in.  ssh/mfa becomes the easy path.

Defensive Backups

Use a separate backup account that you’re not the admin of. Back up storage by default, not opt in.  You need those backups available for testing too.
So, RDS. We have a job that makes a snapshot, shared with a second account. A lambda in the other account reads it, makes a copy, and shares it back.
It’s run from a nomad batch job, assumes all RDS and S3 needs to be backed up.

Securing nomad jobs and vault policies

First, gate on checkin, and second, stop bad code.
gpg sign commits – sign with yubikey
or, duobot (https://github.com/palantir/duo-bot) calls the duo api and sends you a push to your mobile.
They built some bots to automate all this – approver, duo-bot (MFA-based deploys), and bulldozer (auto-merge if status is green) – all on github.
Unit tests for jobs – naming schemes, folder structure, health checks
CI converges to what’s in the repo hourly

Experimenting in prod with immutable infrastructure – Tim Perrett

Slides here (slideshare login required): https://www.slideshare.net/timperrett/online-experimentation-with-immutable-infrastructure

Converged infrastructure – you know, scheduling workloads.
New table stakes – fast, observable, self service, auto cleanup.
Testing in prod is a reality.  Here, he avoids using any of the common “test in prod” memes, for which I salute him.
Emergent behaviors of large systems – Conway’s law and Hyrum’s law – and microservice complexity mean that local testing is still a fable. Scaling peaks and human error can’t be locally simulated.

On their site, user calls get segmented at the edge (opentracing, zipkin), routed to the right backend, and publish their telemetry for analysis. A front end envoy proxy splits to proxies that go to the edges.

Feature flags are inferior!
Data plane for Routing – envoy, linkerd, nginx+. Integration with the control plane is important.

Options: Envoy in container? Sidecar as proxy? Host based envoy?

Control plane – Istio or something…  I lost some of it here.

Instrument with segment identifiers, make apps experiment aware, interesting data is usually wrong.

That was all the notes I took from Day 1.  I also spent time talking with people I knew, some vendors (Amazon, Hashi, Datadog, a new APM vendor called Instana…)

Day Two

Well, the scheduled keynoter dropped, so we got a quick sub:

The Ecological Impact Of Compute – Seth Vargo

Wholesale colos are hideously less energy efficient – once adjusted by volume, they use 49% vs cloud’s 5% of the energy picture.

Due to overprovisioning, invalid IT procurement focus (bulk), unused “zombie” servers, lack of standard utilization metric
IT managers can’t identify owners for 15-30% of the servers but are reluctant to decomm them!
Electric costs are paid by someone else.
Priority of efficiency low compared to deploy, avail, security, reliability
Schedulers help with this.  And use the cloud you poor benighted momos.  (I added that last part)

Keynote by Kelsey Hightower! On Hashinetes!

Kelsey is obviously huge into Kubernetes but he loves the Hashistack too.  And sometimes he uses both nomad and kubernetes at the same time?  CRAZY EH???

He presents some info from circleci on why they use kubernetes and nomad both (kubernetes good for containers, nomad does non-containers)

Demo time!

  • dynamic credential provisioning via vault – vault makes a mysql username, the app uses it, then vault deletes it! (a rough sketch of what this looks like from the app’s side follows this list)
  • he tells his android phone “create nomad” and it deploys it in k8s
  • make your nomad worker pool in the cloud, not on K8S
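
For flavor, here’s roughly what that first demo looks like from the app’s side, using the hvac Python client against Vault’s database secrets engine – the mount path, role name, and token here are all placeholders I made up:

import hvac

# Sketch of Vault dynamic database credentials: read from the database
# secrets engine and get a short-lived username/password that Vault revokes
# when the lease expires. "database/creds/my-app-role" assumes a database
# secrets engine mounted at "database" with a role named "my-app-role".
client = hvac.Client(url="http://127.0.0.1:8200", token="s.placeholder")

secret = client.read("database/creds/my-app-role")
username = secret["data"]["username"]
password = secret["data"]["password"]
print(f"Got dynamic creds {username} (lease {secret['lease_duration']}s)")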

Elasticsearch as a Service Using the Hashistack by Yorick Gersie

eBay Classifieds runs a buttload of single tenant Elasticsearches.  So do we, which made me very interested in his talk.

They spin it up with terraform, nomad, etc.
challenges
– discover master nodes
– direct traffic to right cluster
– port conflicts
– data persistency
– control resources
– deal with hardware outages automatically
– separate auth
– enforce tls
– enforce allocation placement

Very quick notes, really need the video or slides:

  • Use LB, fabio, consul, nomad, docker
  • consul based discovery plugin for elasticsearch
  • nomad 0.5 has “sticky” ephemeral disks
  • use saltstack, trying to reduce amount of CM. just set up nomad, consul, fabio;
  • render job files per cluster; deploys/renews ssl certs
  • immutable infra still hard/slow to deal with

Jeff Mitchell – Vault and Sentinel

This was a bait and switch from “Vault at scale,” which made me sad.

How Sentinel works with Vault.

ACL paths are put into a policy
A policy is attached to a token
at request, policies are combined into an ACL used to check access

ACLs are HCL (or JSON) but not very expressive.
ACLs are grants/restrictions on paths… which is restrictive

Sentinel extends ACLs with role governing policies and endpoint governing policies
Don’t use root tokens

Building a Privileged Access System In Vault – Lee Briggs – Apptio (jaxxstorm on github)

accessing dbs – root password, app passwords. tracking, sharing, rotating and distribution…
vault can make dynamic mysql creds!
they already had consul and use puppet, so they used jsok/puppet-vault and the consul HA backend
so they deployed vault onto the consul servers
initializing vault – GPG support built in and then init vaults in each DC with the API.
what about vault unsealing?
jaxxstorm/unseal
add your vault servers to a config file, add your encrypted unseal key, it prompts for your GPG keyring password and unseals all the vaults
so unseal every morning

to config all the DCs
UKHomeOffice/vaultctl, also new terraform vault provider

for mysql config, they run puppet to install db and vault user and roles, then API call to the region’s vault to add as a backend.

to make logins easy, ldap auth with policies… but manual. apptio/breakglass is like vault ssh in that it gets your AD password and then does all the stuff to get you into mysql, ssh, docker

if you’re using consul as a backend then turn on acls and block ports 8500/1
consul snapshot to take backups, test weekly by restoring to a vault on a different port, connect to the consul, unseal and verify

lessons
– pick 1 thing and vault it
– consul+vault gives you HA for free
– automation has tradeoffs (secret zero problem)
– engineers love the http api

Consul Infrastructure Recipes – Preetha Athan

OK so this went by fast and I didn’t take extensive notes because I knew most of it – but if you are using consul it is absolutely mandatory that you seek this out.  It’ll explain in quick form a dozen different things to do with consul from using consul-template to the one I didn’t know about…

Consul supports prepared queries with complex logic; you can use them, e.g., for automated failover as part of service discovery.
So reviews.service.consul -> reviews.query.consul and it executes a query that can have failover servers and all.
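
If you haven’t seen prepared queries, here’s a hedged sketch of creating one against Consul’s HTTP API (/v1/query) from Python – the service name and failover settings are just for illustration:

import requests

# Sketch: register a Consul prepared query named "reviews" that falls back
# to the nearest 2 other datacenters if no healthy local instances exist.
# After this, reviews.query.consul resolves via DNS with failover built in.
query = {
    "Name": "reviews",
    "Service": {
        "Service": "reviews",
        "OnlyPassing": True,
        "Failover": {"NearestN": 2},
    },
}

resp = requests.post("http://127.0.0.1:8500/v1/query", json=query)
resp.raise_for_status()
print("Created prepared query", resp.json()["ID"])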

That’s my notes!  I also caught up with some folks from ScaleFT, who are implementing the Google BeyondCorp zero trust model as a product, we are very very interested in it.  Network perimeters and ssh keys?  Busted, like in 1990 busted.  Do something better.

Well I hope that’s enough for ya… I was definitely impressed, found a reason to maybe use terraform, found a reason to maybe use nomad, definitely want to use vault and more consul. I wish sentinel was usable by civilians.  Good crowd, good conference, make sure and watch the videos when they emerge!


Filed under Conferences, DevOps