Amazon CloudFormation: Model Driven Automation For The Cloud

You may have heard about Amazon’s newest offering they announced today, CloudFormation.  It’s the new hotness, but I see a lot of confusion in the Twitterverse about what it is and how it fits into the landscape of IaaS/PaaS/Elastic Beanstalk/etc. Read what Werner Vogels says about CloudFormation and its uses first, but then come back here!

Allow me to break it down for you and explain why this is such a huge leverage point for cloud developers.

What Has Come Before

Up till now on Amazon you could configure a single virtual image the way you wanted it, with an AMI. You could even kind of construct a scalable tier of similar systems using Auto Scaling, by defining Launch Configurations. But if you wanted to construct an entire multitier system it was a lot harder.  There are automated configuration management tools like Chef and Puppet out there, but their recipes/models tend to be oriented around getting a software loadout onto an existing system, not the actual system provisioning – in general they come from the older assumption that you have someone doing that provisioning, on probably-physical systems, using bcfg2 or Cobbler or Vagrant or something.

So what were you to do if you wanted to bring up a simple three tier system, with a Web tier, app server tier, and database tier?  Either you had to set them up and start them manually, or you had to write code against the Amazon APIs to explicitly pull up what you wanted. Or you had to use a third party provisioning provider like RightScale or EngineYard that would let you define that kind of model in their Web consoles but not construct your own model programmatically and upload it. (I’d like my product functionality in my own source control and not your GUI, thanks.)

Now, recently Amazon launched Elastic Beanstalk, which is way over on the PaaS side of things, similar to Google App Engine.  “Just upload your application and we’ll run it and scale it, you don’t have to worry about the plumbing.” Of course this sharply limits what you can do, and doesn’t address the question of “what if my overall system consists of more than just one Java app running in Beanstalk?”

If your goal is full model driven automation to achieve “infrastructure as code,” none of these solutions is entirely satisfactory. I understand CloudFormation deeply because we went down that same path and developed our own system model as a response!

I’ll also note that this is very similar to what Microsoft Azure does.  Azure is a hybrid IaaS/PaaS solution – their marketing tries to say it’s more like Beanstalk or Google App Engine, but in reality it’s more like CloudFormation – you have an XML file that describes the different roles (tiers) in the system, defines what software should go on each, and lets you control the entire system as a unit.

So What Is CloudFormation?

Basically CloudFormation lets you model your Amazon cloud-based system in JSON and then provision and control it as a unit.  So in our use case of a three tier system, you would model it up in their JSON markup and then CloudFormation would understand that the whole thing is a unit.  See their sample template for a WordPress setup. (A mess more sample templates are here.)

Review the WordPress template; it lets you define the AMIs and instance types, what the security group and ELB setups should be, the RDS database back end, and feed in variables that’ll be used in the consuming software (like WordPress username/password in this case).
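
To give you a feel for the shape of it, here’s a bare-bones sketch of the template anatomy (illustrative names and values, nowhere near the full WordPress sample) – Parameters feed values in, Resources declare the pieces, Outputs hand results back:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal sketch: one parameterized instance",
  "Parameters" : {
    "WebInstanceType" : { "Type" : "String", "Default" : "m1.small" }
  },
  "Resources" : {
    "WebServer" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-12345678",
        "InstanceType" : { "Ref" : "WebInstanceType" }
      }
    }
  },
  "Outputs" : {
    "WebHost" : { "Value" : { "Fn::GetAtt" : [ "WebServer", "PublicDnsName" ] } }
  }
}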

Once you have your template you can tell Amazon to start your “stack” in the console! It’ll even let you hook it up to an SNS notification that’ll let you know when it’s done. You name the whole stack, so you can distinguish between your “dev” environment and your “prod” environment, for example, as opposed to the current state of the Amazon EC2 console where you get to see a big list of instance IDs – they added tagging that you can try to use for this, but it’s kinda wonky.
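
If the console isn’t your thing, the same operation is scriptable; a hedged sketch with the CloudFormation command line tools (flag spellings from memory – check the tool docs):

cfn-create-stack dev-wordpress --template-file wordpress.template --parameters "DBUsername=admin;DBPassword=changeme" # name the stack, point at the template, feed in parameters
cfn-describe-stack-events dev-wordpress # the event history for the stack as it comes up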

Why Do I Want This Again?

Because a system model lets you do a number of clever automation things.

Standard Definition

If you’ve been doing Amazon yourself, you’re used to there being a lot of stuff you have to do manually.  Even you do it differently from one system build to the next, and God forbid you have multiple techies working on the same Amazon system. The basic value proposition of “don’t do things manually” is huge.  You configure the security groups ONCE and put it into the template, and then you’re not going to forget to open port 23 AGAIN next time you start a system. A core part of what DevOps is realizing as its value proposition is treating system configuration as code that you can source control, fix bugs in and have them stay fixed, etc.
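
For example, a security group resource lives in the template roughly like this (a hedged sketch; declare port 23 here once and it gets opened identically on every launch):

"AppSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Properties" : {
    "GroupDescription" : "app tier - defined once, in source control",
    "SecurityGroupIngress" : [
      { "IpProtocol" : "tcp", "FromPort" : "23", "ToPort" : "23", "CidrIp" : "10.0.0.0/8" }
    ]
  }
}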

And if you’ve been trying to automate your infrastructure with tools like Chef, Puppet, and ControlTier, you may have been frustrated in that they address single systems well, but do not really model “systems of systems” worth a damn.  Via new cloud support in knife and stuff you can execute raw “start me a cloud server” commands but all that nice recipe stuff stops at the box level and doesn’t really extend up to provisioning and tracking parts of your system.

With the CloudFormation template, you have an actual asset that defines your overall system.  This definition:

  • Can be controlled in source control
  • Can be reviewed by others
  • Is authoritative, not documentation that could differ from the reality
  • Can be automatically parsed/generated by your own tools (this is huge)
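
Since the definition is plain JSON, that last point can start as small as a validation step in your build; a hedged sketch with the bundled tools (hypothetical file name):

cfn-validate-template --template-file my-stack.template # catch malformed markup before you try to launch it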

It’s also nicely transparent; when you go to the console and look at the stack it shows you the history of events, the template used to start it, the startup parameters it used… Moving away from the “mystery meat” style of system config.

Coordinated Control

With CloudFormation, you can start and stop an entire environment with one operation. You can say “this is the dev environment” and be able to control it as a unit. I assume at some point you’ll be able to visualize it as a unit too; right now all the bits are still stashed in their own tabs (and I notice they don’t make any default use of their own tagging, which makes it annoying to pick out which parts are from that stack).

This is handy for not missing stuff on startup and teardown… A couple weeks ago I spent an hour deleting a couple hundred rogue EBSes we had left over after a load test.
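
The whole environment becomes one command in either direction; a hedged sketch (tool flags from memory):

cfn-create-stack dev --template-file my-stack.template # everything comes up together
cfn-delete-stack dev # ...and every resource in the stack goes away together - no leftover EBSes to hunt down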

And you get some status eventing – one of the most painful parts of trying to automate against Amazon is the whole “I started an instance, I guess I’ll sit around and poll and try to figure out when the damn thing has come up right.”  In CloudFormation you get events that tell you when each part, and then the whole, is up and ready for use.
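
So instead of writing your own poll-the-instances loop, you can watch the stack status; a rough sketch (assuming the tools’ text output includes the status string, which you should verify):

until cfn-describe-stacks dev | grep -q CREATE_COMPLETE; do sleep 15; done # block until the whole stack reports ready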

What It Doesn’t Do

It’s not a config management tool like Chef or Puppet. Except for what you bake onto your AMI it has zero software config capabilities, and no command dispatch capabilities like Rundeck or mcollective or Fabric. It should be a good integration point with those tools, though.

It’s not a PaaS solution like Beanstalk or GAE; you use those when you just have an app you want to deploy to something that’ll run it.  Now, it does erode some use cases – it makes a middle point between “run it all yourself and love the complexity” and “forget configurable system bits, just use PaaS.”  It allows easy reusability, say having a systems guy develop the template and then a dev use it over and over again to host their app, but with more customization than the pure-play PaaSes provide.

It’s not quite like OVF, which is fiddlier and more about defining the guts of a single virtual machine than about defining a set of systems with roles and connections.

Competitive Analysis

It’s very similar to Microsoft Azure’s approach with their .cscfg and .csdef files which are an analogous XML model – you really could fairly call this feature “Amazon implements Azure on Amazon” (just as you could fairly call Elastic Beanstalk “Amazon implements Google App Engine on Amazon”.) In fact, the Azure Fabric has a lot more functionality than the primitive Amazon events in this first release. Of course, CloudFormation doesn’t just work on Windows, so that’s a pretty good width vs depth tradeoff.

And it’s similar to something like a RightScale, and ideally will encourage them to let customers actually submit their own definition instead of the current clunky combo of ServerArrays and ServerTemplates (curl or Web console?  Really? Why not a model like this?). RightScale must be in a tizzy right now, though really just integrating with this model should be easy enough.

Where To From Here?

As I alluded to, we actually wrote our own tool like this internally, called PIE, that we’re looking to open source because we were feeling this whole problem space keenly.  XML model of the whole system, Apache Zookeeper-based registry, kinda like CloudFormation and Azure. Does CloudFormation obsolete what we were doing?  No – we built it because we wanted a model that could describe cloud systems on multiple clouds and even on premise systems. The Amazon model will only help you define Amazon bits; if you are running cross-cloud or hybrid it is of limited value. And I’m sure model visualization tools will come, and a better registry/eventing system will come, but we’re way farther down that path, at least at the moment.

Also, the differentiation between “provisioning tools” that control and start systems, like CloudFormation and bcfg2, and “configuration” tools that control and start software, like Chef and Puppet (and some people even differentiate between those and “deploy” tools that control and start applications, like Capistrano), is a false dichotomy. I’m all about the “toolchain” approach, but at some point you need a toolbelt. This tool differentiation is one of the more harmful “Dev vs Ops” differentiations.

I hope that this move shows the value of system modeling and helps people understand we need an overarching model that can be used to define it all, not just “Amazon” vs “Azure” or “system packages” vs “developed applications” or “UNIX vs Windows…” True system automation will come from a UNIVERSAL model that can be used to reason about and program to your on premise systems, your Amazon systems, your Azure systems, your software, your apps, your data, your images and files…

Conclusion

You need to understand CloudFormation, because it is one of the most foundational, highest-leverage changes AWS has come out with in some time. I don’t bother to blog about most of the cool new AWS features – they’re cool and I enjoy them – but this one is part of a more revolutionary change in the way systems are managed: the whole DevOps thing.

Application Performance Management in the Cloud

Cloud computing has been the subject of buzz and hype lately, and everybody is trying to understand what it is and how to use it. In this post, I wanted to explore some of the properties of “the cloud” as they pertain to Application Performance Management. If you are new to cloud offerings, here are a few good materials to get started, but I will assume the reader of this post is somewhat familiar with cloud technology.

As Spiderman would put it, “With great power comes great responsibility” … The cloud abstracts away some of the infrastructure parts of your system and gives you the ability to scale up resources on demand. This doesn’t mean that your applications will magically work better or understand when they need to scale; you still need to worry about measuring and managing the performance of your applications and providing quality service to your customers. In fact, I would argue APM is several times more important to nail down in a cloud environment, for a few reasons:

  1. The infrastructure your apps live in is more dynamic and volatile. For example, cloud servers are VMs, and the resources given to them by the hypervisor are not constant, which will impact your application. Also, VMs may crash or lock up and not leave much trace of what caused your application’s issues.
  2. Your applications can grow bigger and more complex as they have more resources available in the cloud. This is great but it will expose bottlenecks that you didn’t anticipate. If you can’t narrow down the root cause, you can’t fix it. For this, you need scalable APM instrumentation and precise analytics.
  3. On-demand scaling is a feature touted by all cloud providers, but it all hinges on one really hard problem that they won’t solve for you: understanding how your application performance and workload behave so that you can properly instruct the cloud scaling APIs what to do. If my performance drops, do I need more machines? How many? APM instrumentation and analytics will play a key part in how you solve the on-demand scaling problem (a sketch of that wiring follows this list).
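
To sketch what that wiring can look like once your analysis has told you which metric matters (CPU below is just a placeholder for whatever your APM data says predicts saturation; flag spellings for the old as-/mon- command line tools are from memory):

as-put-scaling-policy app-scale-up --auto-scaling-group app-asg --adjustment=2 --type ChangeInCapacity --cooldown 300 # prints a policy ARN
mon-put-metric-alarm --alarm-name app-high-cpu --metric-name CPUUtilization --namespace "AWS/EC2" --statistic Average --period 300 --threshold 70 --comparison-operator GreaterThanThreshold --evaluation-periods 2 --alarm-actions POLICY_ARN_FROM_ABOVE # fire the policy when the metric crosses the line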

So where is APM for the cloud, you ask? It is in its very humble beginnings as APM solution providers have to solve the same gnarly problems application developers and operations teams are struggling with:

  1. Cloud is dynamic and there are no fixed resources. IP addresses, hostnames, and MAC addresses change, among other things. Properly addressing, naming and connecting the machines is a challenge – both for deep instrumentation agent technology and for the apps themselves.
  2. Most vendors’ licensing agreements are based on billing for a fixed count of resources rather than a utility model, and are therefore difficult to use in a cloud environment. Charging a markup on the utility bill seems to be a common approach among the cloud-literate, but it introduces a kink in the regular budgeting process, and no one wants to write a variable but sizable blank check.
  3. Well, the cloud scales… a lot. The instrumentation/monitoring technology has to have low enough overhead and be able to support hundreds of machines.

On the synthetic monitoring end there are plenty of options. We selected AlertSite, a distributed synthetic monitoring SaaS provider similar to Keynote and Gomez, for simplicity, but it only provides high-level performance and availability numbers for SLA management and “keep the lights on” alerting. We also rely on Cloudkick, another SaaS provider, for the system monitoring part. Here is a more detailed use case on the Cloudkick implementation.

For deeper instrumentation we have worked with OPNET to deploy their AppInternals Xpert (formerly OPNET Panorama) solution in the Amazon EC2 cloud environment. We have already successfully deployed AppInternals Xpert on our own server infrastructure to provide deep instrumentation and analysis of our applications. The cloud version looks very promising for tackling the technical challenges introduced by the cloud, and once fully deployed, we will have a lot of capability to harness cloud scale and performance. More on this as it unfolds…

In summary: vendors will sell you the cloud, but be prepared to tackle the traditional infrastructure concerns and stick to your guns. No, the cloud is not going to fix your performance problems, and it is not going to magically scale for you. You will need some form of APM to help you there. Be on the lookout for providers, and do challenge your existing partners to think about how they can help you. The APM space in the cloud is where traditional infrastructure was in 2005, but things are getting better.

To the cloud!!! (And do keep your performance engineers on staff, they will still be saving your bacon.)

Peco

JClouds “State of DevOps”

Hey, looks like I got quoted in Adrian Cole’s new “State of DevOps” presentation.  Some quick thoughts on current state and futures of DevOps from a bunch of important DevOps people and then little old me.

salesforce.com – Time To Start Caring?

So this week, salesforce.com (the world’s #4 fastest growing company, says Fortune Magazine) bought Ruby PaaS shop Heroku, announced database.com as a cloud database solution, and announced remedyForce, IT config management from BMC.

That’s quite the hat trick. salesforce.com has been the 900 lb gorilla in the closet for a while now; they’ve been hugely successful and have put a lot of good innovation on their systems but so far force.com, their PaaS solution, has been militantly “for existing CRM customers.” This seems like an indication of preparation to move into the general PaaS market and if they do I think they’ll be a force to reckon with – experience, money, and a proven track record of innovation. NI doesn’t use salesforce.com (“Too expensive” I’m told) so I’ve kept them on the back burner in terms of my attention but I’m guessing in 2011 they will come into the pretty meager PaaS space and really kick some ass.

Because for PaaS – what do we have really? Google App Engine, and in traditional Google fashion they pooped it out, put some minimum functionality on it, and wandered off. (We tried it once, determined that it disallowed half the libraries we needed to use, and gave up.) Microsoft Azure, which is really a hybrid IaaS/PaaS – you don’t get to ignore the virtual server aspect, and you have to do infrastructure stuff like monitoring and scaling instances yourself. And of course Heroku. And VMware’s Cloud Foundry thing for Java, but VMware is having a bizarrely hard time doing anything right in the cloud – talk about parlaying leadership in a nearby sector unsuccessfully. I have no idea why they’re executing so slowly, but they are.  Even that, Salesforce seems to be doing better, with VMForce (a salesforce + VMware collaboration on Java PaaS).

In the end, most of us want PaaS – managing the plumbing is uninteresting, as long as it can manage itself well – but it’s hard, and no one has it nailed yet.  I hope salesforce.com does move into the general-use space; I’d hate for them to be buying up good players and only using them to advance their traditional business.

Austin Cloud User Group Nov 17 Meeting Notes

This month’s ACUG meeting was cooler than usual – instead of having one speaker talk on a cloud-related topic, we had multiple group members do short presentations on what they’re actually doing in the cloud.  I love talks like that; it’s where you get real rubber-meets-the-road takeaways.

I thought I’d share my notes on the presentations.  I’ll write up the one we did separately, but I got a lot out of these:

  1. OData to the Cloud, by Craig Vyal from Pervasive Software
  2. Moving your SaaS from Colo to Cloud, by Josh Arnold from Arnold Marziani (previously of PeopleAdmin)
  3. DevOps and the Cloud, by Chris Hilton from Thoughtworks
  4. Moving Software from On Premise to SaaS, by John Mikula from Pervasive Software
  5. The Programmable Infrastructure Environment, by Peco Karayanev and Ernest Mueller from National Instruments (see next post!)

My editorial comments are in italics.  Slides are linked into the headers where available.

OData to the Cloud

OData was started by Microsoft (“but don’t hold that against it”) under the Open Specification Promise.  Craig did an implementation of it at Pervasive.

It’s a RESTful protocol for CRUDdy GET/POST/DELETE of data.  Uses AtomPub-based feeds and returns XML or JSON.  You get the schema and the data in the result.
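
In practice, consuming it looks like plain HTTP with standard query options; a hedged sketch against a hypothetical producer:

curl 'http://example.com/data.svc/Products?$top=5' # first five entries, AtomPub XML by default
curl 'http://example.com/data.svc/Products?$top=5&$format=json' # the same data as JSON
curl 'http://example.com/data.svc/Products?$filter=Price%20gt%2010' # server-side filtering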

You can create an OData producer of a data source, consume OData from places that support it, and view it via stuff like iPhone/Android apps.

Current producers – Sharepoint, SQL Azure, Netflix, eBay, twitpic, Open Gov’t Data Initiative, Stack Overflow

Current consumers – PowerPivot in Excel, Sesame, Tableau.  Libraries for Java (OData4J), .NET 4.0/Silverlight 4, OData SDK for PHP

It is easier for a “business user” to consume than SOAP or raw REST.  Craig used OData4J to create a producer for the Pervasive product.

Questions from the crowd:

Compression/caching?  Nothing built in.  Though normal HTTP level compression would work I’d think. It does “page” long lists of results and can send a section of n results at a time.

Auth? Your problem.  Some people use OAuth.  He wrote a custom Glassfish basic HTTP auth portal.

Competition?  Gdata is kinda like this.

Seems to me it’s one part REST, one part “making you have a DTD for your XML”.  Which is good!  We’re very interested in OData for our data centric services coming up.

Moving your SaaS from Colo to Cloud

Josh Arnold was from PeopleAdmin; now he’s a tech recruiter, but he can speak to what they did before he left.  PeopleAdmin was a Sungard-type colo setup.  Had a “rotting” out-of-country DR site.

They were rewriting their stack from java/mssql to ruby/linux.

At the time they were spending $15k/mo on the colo (not including the cost of their HW).  The estimated Amazon cost was 1/3 of that, but after moving they found it’s really 1/2.  What was the surprise cost?  Lower than expected performance (disk I/O) forced more instances than physical boxes of equivalent “size.”

Flexible provisioning and autoscaling were great; the colo couldn’t scale fast enough.

The cloud made having an out-of-country DR site easy, without it rotting and getting old.

Question: What did you lose in the move?  We were prepared for mental “control issues” so didn’t have those.  There’s definitely advanced functionality (e.g. with firewalls) and native hardware performance you lose, but that wasn’t much.

They evalled Rackspace and Amazon (cursory eval).  They had some F5s they wanted to use and the ability to mix in real hardware was tempting but they mainly went straight to Amazon.  Drivers were the community around it and their leadership in the space.

Timeline was 2 years (rewrite app, slowly migrate customers).  It’ll be more like 3-4 before it’s done.  There were issues where they were glad they didn’t mass migrate everyone at once.

Technical challenges:

Performance was a little lax (disk performance, they think) and they ended up needing more servers.  They used tricks like RAIDed EBSes to try to get the most I/O they could (mainly for the databases).
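
The RAID trick is just software RAID0 across several attached volumes; a minimal sketch (device names and mount point are examples – attach the EBS volumes first):

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi # stripe four EBS volumes
mkfs.xfs /dev/md0 # or your filesystem of choice
mount /dev/md0 /var/lib/mysql # point the database at the striped I/O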

Every customer had an SSL cert, and they had 600 of them to mess with.  That was a problem because of the 5 Elastic IP limit.  They went to certs that allow subsidiary domains – Digicert allowed 100 per cert (other CAs limit you to much less) so they could get 100 per IP.

App servers made outbound LDAP connections to customer premises for auth integration, and customers usually allowed those in via IP rules in their corporate firewalls – but on Amazon outbound IPs are dynamic.  They set up a proxy with a static (Elastic) IP to route all that through.

Rightscale – they used it.  They like it.

They used nginx for the load balancing, SSL termination.  Was a single point of failure though.

Remember that many of the implementations you are hearing about now were started back before Rackspace had an API, before Amazon had load balancers, etc.

In discussion about hybrid clouds, the point was brought up that a lot of providers talk about it – GoGrid, OpSource, Rackspace – but often there are gotchas.

DevOps and the Cloud

Chris Hilton from Thoughtworks is all about the DevOps, and works on stuff like continuous deployment for a living.

DevOps is:

  • collaboration between devs and operations staff
  • agile sysadmin, using agile dev tools
  • dev/ops/qa integration to achieve business goals

Why DevOps?

Silos.  agile dev broke down the wall between dev/qa (and biz).

devs are usually incentivized for change, and ops are incentivized for stability, which creates an innate conflict.

but if both are incentivized to deliver business value instead…

DevOps Practices

  • version control!
  • automated provisioning and deployment (Puppet/chef/rPath)
  • self healing
  • monitoring infra and apps
  • identical environments dev/test/prod
  • automated db mgmt

Why DevOps In The Cloud?

cloud requires automation, devops provides automation

References

  • “Continuous Delivery” Humble and Farley
  • Rapid and Reliable Releases InfoQ
  • Refactoring Databases by Ambler and Sadalage

Another tidbit: they’re writing a Puppet-lite in PowerShell to fill the tool gap – some tool suppliers are starting to, but the general degree of tool support for people who use both Windows and Linux is shameful.

Moving Software from On Premise to SaaS

John Mikula of Pervasive tells us about the Pervasive Data Cloud.  They wanted to take their on premise “Data Integrator” product, basically a command line tool ($, devs needed to implement), to a wider audience.

Started 4 years ago.  They realized that the data sources they’re connecting to and pumping to, like QuickBooks Online, Salesforce, etc., are all SaaS from the get-go.   “Well, let’s make our middle part the same!”

They wrote a Java EE wrapper and put it on Rackspace colo initially.

It gets a customer’s metadata, puts it on a queue, and another system takes it off and processes it.  A very scaling-friendly architecture.  And Rackspace (colo) wasn’t scaling fast enough, so they moved it to Amazon.

Their initial system had 2 Glassfish front ends and 25 workers.

For queuing, they tried Amazon SQS but it was limited, so they went to Apache Zookeeper.

First effort was about “deploy a single app” – namely salesforce/quickbooks integration.  Then they made a domain specific model and refactored and made an API to manage the domain specific entities so new apps could be created easily.

Recommended approach – solve easy problems and work from there.  That’s more than enough for people to buy in.

Their core engine’s not designed for multitenancy – they have batches of workers for one customer’s code – so that code can be unsafe, but it’s in its own bucket and doesn’t mess up anyone else.

Changing internal business processes in a mature company was a challenge – moving from a permanent license model to per-month billing, just with accounting and whatnot, was a big long hairy deal.

Making the API was rough.  His estimate of a couple months grew to 6.  Requirements gathering was a problem, very iterative.  They weren’t agile enough – they only had one interim release and it wasn’t really usable; if they did it again they’d do the agile ‘right thing’ of putting out usable milestones more frequently to see what worked and what people really needed.

In Closing

Whew!  I found all the presentations really engaging and thank everyone for sharing the nuts and bolts of how they did it!

Our First Cloud Product Released!

Hey all, I just wanted to take a moment to share with you that our first cloud-based product just went live!  LabVIEW Web UI Builder is National Instruments’ first SaaS application.  It’s actually free to use – go to ni.com and “Try It Now”; all you have to do is make an account.  It’s a freemium model, so you can use it, save your code, run it, etc. all you want; we charge for the “Build & Deploy” functionality that enables you to compile, download, and deploy the bundled app to an embedded device or whatnot.

Essentially it’s a Silverlight app (it can be installed out of browser on your box or just launched off the site) that lets you graphically program test & measurement, control, and simulation types of programs.  You can save your programs to the cloud or locally to your own machine.  The programs can interact via Web services with anything, but in our case it’s especially interesting when they interact with data acquisition devices.  There are some sample programs on that page that show what can be done, though those are definitely tuned to engineers…  We have apps internally that let you play Frogger and Duck Hunt, or do the usual Web mashup kinds of things calling Google Maps APIs.  So feel free and try out some graphical programming!

Cool technology we used to do this:

And it’s 100% DevOps powered.  Our implementation team consists of developers and sysadmins, and we built the whole thing using an agile development methodology.  All our systems are created by model-driven automation from assets and definitions in source control.  We’ll post more about the specifics now that we’ve gotten version 1 done!  (Of course, the next product is just about ready too…)

Cloud Security: a chicken in every pot and a DMZ for every service

There are a couple military concepts that have bled into technology, and in particular into IT security; a DMZ is one of them. A Demilitarized Zone (DMZ) is a concept where there is established control over what comes in and what goes out between two parties; in military terms, you establish a “line of control” by using a DMZ. In a human-based DMZ, controllers of the DMZ make ingress (incoming) and egress (outgoing) decisions based on an approved list – no one is allowed to pass unless they are on the list and have proper identification and approval.

In the technology world, the same thing is done based on traffic between computers. Decisions to allow or disallow the traffic can be made based on where the traffic came from (origination), where it is going (destination), or dimensions of the traffic like size, length, or even time of day. The basic idea is that all traffic is analyzed and is either allowed or disallowed based on determined rules, and just like in a military DMZ, there is a line of control where only approved traffic is allowed to ingress or egress. In many instances a DMZ will protect you from malicious activity like hackers and viruses, but it also protects you from configuration and developer errors and can guarantee that your production systems are not talking to test or development tiers.

Let’s look at a basic web tiered architecture. A corporation that hosts its own website will more than likely have the following four components: incoming internet traffic, a web server, a database server, and an internal network. To create a DMZ, or multiple DMZ instances, to handle their web traffic, they would want to make sure that someone from the internet could only talk to the web server, that only the web server can talk to the database server, and that their internal network is inaccessible to the web server, the database server, and the internet traffic.

Using firewalls, you would need to set up at least the three firewalls below to adequately control the DMZ instances:

1. A firewall between the external internet and the web server
2. A firewall in front of the internal network
3. A firewall between your web servers and database server

Of course these firewalls need to be set so that they allow (or disallow) only certain types of traffic. Only traffic that meets certain rules based on its origination, destination, and dimensions will be allowed.

Sounds great, right? The problem is that firewalls have become quite complicated and now sometimes aren’t even advertised as firewalls, but instead are billed as a network-device-that-does-anything-you-want-that-can-also-be-a-firewall-too. This is due in part to the hardware getting faster, IT budgets shrinking and scope creep. The “firewall” now handles VPN, traffic acceleration, IDS duties, deep packet inspection and making sure your employees aren’t watching YouTube videos when they should be working. All of those are great, but it causes firewalls to be expensive, fragile and difficult to configure.  And to have the firewall watching all ingress and egress points across your network, you usually have to buy several devices to scatter throughout your topology.

Additionally, another recurring problem is that most firewall analysis and implementation is done with an intent to secure the perimeter. That makes sense, but it often stops there and doesn’t protect the interior parts of your network. Most IT security firms that do consulting and penetration tests don’t generally come through the “front door” – by that I mean they don’t generally try to get in through the front-facing web servers, but instead go through other channels such as dial-in, wireless, partner, third-party services, social engineering, FTP servers, or that demo system that was set up five years ago and still hasn’t been taken down – you know the one I am talking about. Once inside, if there are no well-defined DMZs, then it is pretty much game over, because at that point there are no additional security controls. A DMZ will not fix all your problems, but it will provide an extra layer of protection against malicious activity. And like I mentioned earlier, it can also help prevent configuration errors crossing from dev to test to prod.

In short, a DMZ is a really good idea and should be implemented for every system that you have. The optimal DMZ would be a firewall in front of each service that applies rules to determine what traffic is allowed in and out. This, however, is expensive to set up and very rarely gets implemented. That was the old days; this is now, and the good news is the cloud has an answer.

I am most familiar with Amazon Web Services, so here is an example of how to do this with security groups from an AWS perspective. The following code creates a web server group and a database server group and allows the web server to talk to the database on port 3306 only.

ec2-add-group web-server-sec-group -d "this is the group for web servers" # creates the web server group with a description
ec2-add-group db-server-sec-group -d "this is the group for db servers" # creates the db server group with a description
ec2-authorize web-server-sec-group -P tcp -p 80 -s 0.0.0.0/0 # allows external internet users to talk to the web servers on port 80 only
ec2-authorize db-server-sec-group --source-group web-server-sec-group --source-group-user AWS_ACCOUNT_NUMBER -P tcp -p 3306 # allows only traffic from the web server group to ingress on port 3306 (mysql)

Under the above example the database server is in a DMZ: only traffic from the web servers is allowed to ingress into it. Additionally, the web server is in a DMZ in that it is protected from the internet on all ports except port 80. If you were to implement this for every role in your system, you would in effect be implementing a DMZ between each layer, which would provide excellent security protection.

The cloud seems to get a bad rap in terms of security, but I would counter that in some ways the cloud is more secure, since it lets you actually implement a DMZ for each service. Sure, security groups won’t do deep packet analysis or replace an intrusion detection system, but they will allow you to specifically define what ingresses and egresses are allowed on each instance.  We may never get a chicken in every pot, but with the cloud you can now put a DMZ on every service.

Austin Cloud Computing Users Group Meeting Sep 21

The next meeting of Austin’s cloud computing trailblazers is next Tuesday, Sep. 21.  Event details and signup are here.  Some gentlemen from Opscode will be talking about cloud security, and then we’ll have our usual unconference-style discussions.  If you haven’t already, join the group mailing list!  It’s free, you get fed, and you get to talk with other people actually working with cloud technologies.

Austin Cloud Computing Users Group!

The first meeting of the Austin Cloud Computing Users Group just happened this Tuesday, and it was a good time!  This new effort is kindly hosted by Pervasive Software.  We had folks from all over attend – Pervasive (of course), NI (4 of us went), Dell, ServiceMesh, BazaarVoice, Redmonk, and Zenoss just to name a few.  There are a lot of heavy hitters here in Austin because our town is so lovely!

We basically just introduced ourselves (there were like 50 people there so that took a while) and talked about organization and what we wanted to do.

The next meeting is planned already; it will be at Pervasive from 6:00 to 8:00 PM on Tuesday, August 24.  Michael Coté of Redmonk will be speaking on cloud computing trends.  Meeting format will be a presentation followed by lightning talks and self-forming unconference sessions.  Companies will be buying food and drink for the group in return for a 5 minute “pimp yourself” slot.  Mmm, free dinner.

There is a Google group/mailing list you can join – austin-cug@googlegroups.com.  There’s already some good discussion underway, so join in, and come to the next meeting!

Velocity 2010 – Performance Indicators In The Cloud

Common Sense Performance Indicators in the Cloud by Nick Gerner (SEOmoz)

SEOmoz has been EC2/S3 based since 2008.  They scaled from 50 to 500 nodes.  Nick is a developer who wanted him some operational statistics!

Their architecture has many tiers – S3, memcache, app, lighttpd, ELB.  They needed to visualize it.

This will not be about waterfalls and DNS and stuff.  He’s going to talk specifically about system (Linux system) and app metrics.

/proc is the place to get all the stats.  Go “man proc” and understand it.

What 5 things does he watch?

  • Load average – like from top.  It combines a lot of things and is a good place to start but explains nothing.
  • CPU – useful when broken out by process, user vs system time.  It tells you who’s doing work, if the CPU is maxed, and if it’s blocked on IO.
  • Memory – useful when broken out by process.  Free, cached, and used.  Cached + free = available, and if you have spare memory, let the app or memcache or db cache use it.
  • Disk – read and write bytes/sec, utilization.  Basically is the disk busy, and who is using it and when?  Oh, and look at it per process too!
  • Network – read and write bytes/sec, and also the number of established connections.  1024 is often a magic limit.  Bandwidth costs money – keep it flat!  And watch SOA connections.
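
All five are a quick read straight out of /proc (or the usual wrappers around it); for reference:

cat /proc/loadavg # load average
top -b -n 1 | head -20 # CPU, broken out by process
egrep 'MemFree|^Cached' /proc/meminfo # free vs cached memory
cat /proc/diskstats # per-device read/write activity
cat /proc/net/dev # per-interface bytes in/out
netstat -tn | grep -c ESTABLISHED # established connection count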

Perf Monitoring For Free

  1. data collection – collectd
  2. data storage – rrdtool
  3. dashboard management – drraw

They put those together into a dashboard.  They didn’t want to pay anyone or spend time managing it.  The dynamic nature of the cloud means stuff like Nagios has problems.

They’d install collectd agents all over the cluster.  New nodes get a generic config, and node names follow a convention according to role.

Then there’s a dedicated perf server with the collectd server, a Web server, and drraw.cgi, in a security group everyone can connect in to.
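
A sketch of what that generic config might look like, using collectd’s stock plugins (hostnames and paths are examples):

# On every node - generic config; the hostname defaults to the node's own name, which follows the role convention
LoadPlugin cpu
LoadPlugin memory
LoadPlugin disk
LoadPlugin interface
LoadPlugin processes
LoadPlugin network
<Plugin network>
  Server "perf.internal.example.com" "25826"
</Plugin>

# On the dedicated perf server - receive everything and write RRDs for drraw to graph
LoadPlugin network
LoadPlugin rrdtool
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>
<Plugin rrdtool>
  DataDir "/var/lib/collectd/rrd"
</Plugin>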

Back up your performance data – it’s critical to have history.

CloudWatch gives you stuff – but not the insight you have when breaking out by process.  And Keynote/Gomez stuff is fine but doesn’t give you the (server-side) nitty gritty.

More about the dashboard. Key requirements:

  • Summarize nodes and systems
  • Visualize data over time
  • Stack measurements per process and per node
  • Handle new nodes dynamically w/o config change

He showed their batch mode dashboard.  Just a row per node, a metric graph per column.  CPU broken out by process with load average superimposed on top.  You see things like “high load average but there’s CPU to spare.”  Then you realize that disk is your bottleneck in real workloads.  Switch instance types.

Memory broken out by process too.  Yay for kernel caching.

Disk chart in bytes and ops.  The steady state, spikes, and sustained spikes are all important.

  • Network – overlay the 95th percentile, ’cause that’s how you get billed.

Web Server dashboard from an API server is a little different.

Add Web requests by app/request type.  app1, app2, 302, 500, 503…  You want to see requests per second by type.

mod_status gives connections and children idleness.

System wide dashboard.  Each graph is a request type, then broken out by node.  And aggregate totals.

And you want median latency per request.  And any app specific stuff you want to know about.

So get the basic stats, over time, per node, per process.

Understand your baseline so you know what’s ‘really’ a spike.

Ad hoc tools – try ’em!

  • dstat -cdnml for system characteristics
  • iotop for per process disk IO
  • iostat -x 3 for detailed disk stats
  • netstat -tnp for per process TCP connection stats

His slides and other informative blog posts are at nickgerner.com.

A good bootstrap method… You may want to use more/better tools, but it’s a good point that you can certainly do this much for free with very basic tooling, so something you pay for had best be better! I think the “per process” intuition is the best takeaway; a lot of otherwise fancy crap doesn’t do that.

But in the end I want more – baselines, alerting, etc.
