After a lovely lunch of sammiches, we kick into the second half of Workshop Day at Velocity 2010. Peco and I (and Jeff and Robert, also from NI) went to Infrastructure Automation with Chef, presented by Adam Jacob, Christopher Brown, and Joshua Timberman of Opscode. My comments in italics.
Chef is a library for configuration management, and a system built on top of that library. It’s also a systems integration platform, as we’ll see later, and an API for your infrastructure.
In the beginning there was cfengine. Then came puppet. Then came chef. It’s the latest in open source UNIXey config management automation.
- Chef is idempotent, which means you can rerun it and get the same result, and it does minimal work to get there.
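To sketch what idempotence looks like in practice, here’s a minimal recipe fragment (resource names are just illustrative): rerunning it converges to the same state every time, and if ntp is already installed and running, Chef does nothing.

```ruby
# Declarative resources: Chef only acts when current state differs from desired state.
package "ntp" do
  action :install
end

service "ntp" do
  action [:enable, :start]
end
```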
- Chef is reasonable, and has sane defaults, which you can easily change. You can change its mind about anything.
- Chef is open source and you can hack it easily. “There’s more than one way to do it” is its mantra.
A lot of the tools out there (meaning HP/IBM/CA kinds of things) are heavy and don’t understand how quickly the world changes, so they end up being artifacts of “how I should have built my system 10 years ago.”
It’s based on Ruby. You really need a third gen language to do this effectively; if they created their own config structure it would grow into an even less standard third gen language. If you’re a sysadmin, you do indeed program, and people that say you’re not are lying to you. Apache config is a programming language. Chef uses small composable primitives.
You manage configuration as idempotent resources, which are put together in recipes, and tracked like source code with the end goal of configuring your servers.
Infrastructure as Code
The devops mantra. Infrastructure is code and should be managed with the same rigor. Source control, etc. Chef enables this approach. Can you reconstruct your business from source code, data backup, and bare metal? Well, you can get there.
When you talk about constraints that affect design, one of the largest and almost unstated assumptions nowadays is that it’s really hard to recover from failure. Many aspects of technology, and much of the thinking of technologists, are built around that assumption. Infrastructure as code makes that not so true, and is extremely disruptive to existing thought in the field.
Your automation can only be measured by the final solution. No one cares about your tools, they care about what you make with them.
There is a chef client that runs on each server, using recipes to configure stuff. There’s a chef server they can talk to – or not; clients can also run standalone. They call each system a “node.”
They get a bunch of data points, or attributes, off the nodes and you can search them on the server, like “what version of Perl are you running.” “knife” is the command line tool you use to do that.
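As a sketch of what that search looks like (the attribute names below are my guesses at the flattened ohai keys, not something from the session):

```
knife search node "platform:ubuntu"
knife search node "languages_perl_version:5.10*"
```

Both return the matching nodes from the Chef server’s index, which is how you answer questions like “what version of Perl are you running” across the fleet.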
Nodes have a “run list.” That’s what roles or recipes to apply to a node, in order.
Nodes have “roles.” A role is a description of what a node should be, like “you’re a Web server.” A role has a run list of its own, and attributes to modify it – like “base, apache2, modssl” and “maxchildren=50”.
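A hypothetical role file, using Chef’s role DSL, might look like this (the cookbook and attribute names are illustrative, mirroring the example above):

```ruby
# roles/web.rb -- a run list plus attribute overrides
name "web"
description "You're a Web server"
run_list "recipe[base]", "recipe[apache2]", "recipe[apache2::mod_ssl]"
default_attributes(
  "apache" => { "maxchildren" => 50 }  # illustrative key, not a stock cookbook attribute
)
```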
Chef manages resources on nodes. Resources are declarative descriptions of state. Resources come in types like package and service – basically “install this software” and “keep this software running.” Install software at a given version; run a service that supports certain commands. There’s also a template resource for config files.
Resources take action through providers. A provider is what knows how to actually do the thing (like install a package, it knows to use apt-get or yum or whatever).
Think about it as resources go through a platform to pick a provider.
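To make that concrete, here’s a sketch: the same declarative resource picks apt-get or yum depending on the node’s platform, without the recipe ever saying which.

```ruby
# The recipe declares what, not how. On Debian/Ubuntu the package
# provider uses apt; on RedHat/CentOS it uses yum.
package "httpd" do
  action :install
end

# When the package *name* itself differs by platform, you can branch in Ruby:
apache_pkg = platform?("ubuntu", "debian") ? "apache2" : "httpd"
package apache_pkg
```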
Recipes apply resources in order. Order of execution is determined by the order they’re listed, which is pretty intuitive. Also, systems that fail within a recipe should generally fail in the same state. Hooray, structured programming!
Recipes can include other recipes. They’re just Ruby. (Everything in Chef is Ruby or JSON). No support for asynchronous actions – you can figure out a way to do it (for file transfers, for example) but that’s really bad for system packages etc.
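Since recipes are just Ruby, composition is an ordinary call. A sketch, with a made-up “webapp” cookbook:

```ruby
# recipes/default.rb in a hypothetical "webapp" cookbook
include_recipe "apache2"
include_recipe "webapp::database"

# Resources here run after the included recipes' resources, in order.
template "/etc/webapp.conf" do
  source "webapp.conf.erb"
end
```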
Cookbooks are packages for recipes. Like “Apache.” They have recipes, assets (like the software itself), and attributes. Assets include files, templates (evaluated with a templating language called ERB), and attributes files (config or properties files). They try to do some sane smart config defaults (like in nginx, workers = number of cores in the box). Cookbooks also have definitions, libraries, resources, providers…
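Chef’s templates are plain ERB from Ruby’s standard library, so you can see the mechanism outside Chef entirely. In this sketch the nginx-ish config and the variable names are made up, and the locals stand in for the node attributes Chef would normally supply:

```ruby
require "erb"

# Stand-ins for node attributes (e.g. "workers = number of cores").
workers = 4
user = "www-data"

conf = ERB.new("worker_processes <%= workers %>;\nuser <%= user %>;\n")
rendered = conf.result(binding)
puts rendered
# worker_processes 4;
# user www-data;
```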
Data bags store arbitrary data. It’s kinda like S3 keyed with JSON objects. “Who all plays D&D? It’s like a Bag of Holding!” They’re searchable. You can e.g. put a mess of users in one. Then you can execute stuff on them. And say use it instead of Active Directory to send users out to all your systems. “That’s bad ass!” yells a guy from the crowd.
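A sketch of the users example (the bag name, item fields, and JSON are all hypothetical): given data bag items like the one in the comment, a recipe can search the bag and stamp out a user resource per item.

```ruby
# Given hypothetical data bag items like data_bags/users/alice.json:
#   { "id": "alice", "shell": "/bin/zsh", "groups": ["sysadmin"] }
# a recipe can search the bag and create each user on the node:
search(:users, "*:*").each do |u|
  user u["id"] do
    shell u["shell"]
  end
end
```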
Working with Chef
- Install it.
- Create a chef repo. Like by git cloning their stock one.
- Configure knife with a .chef/knife.rb file. There’s a Web UI too but it’s for feebs.
- Download some cookbooks. “knife cookbook site vendor rails -d” gets the rails cookbook and makes a “vendor branch” for it and merges it in.
- Read the recipes. It runs as root, don’t be a fool with your life.
- Upload them to the server.
- Build a role (knife role create rails).
- Add cloud credentials to knife – it knows AWS, Rackspace, Terremark.
- Launch a new rails server (knife ec2 server create 'role[rails]') – can also bootstrap
- Run it!
- Verify it! knife ssh runs a command over ssh in parallel across nodes – or can even drop you into screen/tmux/macterm
- Change it by altering your recipe and running again.
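The steps above, condensed into one shell sketch (the repo URL and cookbook/role names are illustrative; the ec2 commands assume you’ve added your AWS credentials to knife):

```
git clone git://github.com/opscode/chef-repo.git
cd chef-repo
knife cookbook site vendor rails -d       # fetch the cookbook onto a vendor branch
knife cookbook upload rails               # push it to the Chef server
knife role create rails                   # define the role's run list in $EDITOR
knife ec2 server create 'role[rails]'     # boot and bootstrap a node in EC2
knife ssh 'role:rails' 'uptime'           # verify, in parallel, on all matching nodes
```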
This was a little confusing. He started out with a data bag, and it has a bunch of stuff configured in it, but a lot of the stuff in it I thought would be in a recipe or something. I thought I was staying with the presentation well, but apparently not.
The demo goal is good – configure nagios and put in all the hosts without doing manual config.
Well, this workshop was excellent up to here – though I could have used them taking a little more time in “Working with Chef” – but now he’s just flipping from chef file to chef file, all full of stuff I can’t identify immediately because I’m, you know, not super familiar with Chef. They really could have used a more “hello world”-y demo, or at least stepped through all the pieces and explained them (ideally in the same order as the “Working with Chef” spiel).
Chef 0.8 introduced the “chef shell,” shef. You can run recipes line by line in it.
And then there was a fire alarm! We all evacuate. End of session.
Afterwards, in the gaggle, Adam mentioned some interesting bits, like there is Windows support in the new version. And it does cloud stuff automatically by using the “fog” library. And Unicorn, an HTTP server for people who know about 200% more about Rails than me. That’s the biggest thing about chef – if you don’t do any other Ruby work it’s a pretty high adoption bar.
One more workshop left for Day 1!