Monthly Archives: February 2011

Amazon CloudFormation: Model Driven Automation For The Cloud

You may have heard about Amazon’s newest offering, announced today: CloudFormation.  It’s the new hotness, but I see a lot of confusion in the Twitterverse about what it is and how it fits into the landscape of IaaS/PaaS/Elastic Beanstalk/etc. Read what Werner Vogels says about CloudFormation and its uses first, but then come back here!

Allow me to break it down for you and explain why this is such a huge leverage point for cloud developers.

What Has Come Before

Up till now on Amazon you could configure a single virtual image the way you wanted it, with an AMI. You could even kind of construct a scalable tier of similar systems using Auto Scaling, by defining Launch Configurations. But if you wanted to construct an entire multitier system it was a lot harder.  There are automated configuration management tools like Chef and Puppet out there, but their recipes/models tend to be oriented around getting a software loadout onto an existing system, not the actual system provisioning – in general they come from the older assumption that you have someone doing that on probably-physical systems using bcfg2 or Cobbler or Vagrant or something.

So what were you to do if you wanted to bring up a simple three tier system, with a Web tier, app server tier, and database tier?  Either you had to set them up and start them manually, or you had to write code against the Amazon APIs to explicitly pull up what you wanted. Or you had to use a third party provisioning provider like RightScale or EngineYard that would let you define that kind of model in their Web consoles but not construct your own model programmatically and upload it. (I’d like my product functionality in my own source control and not your GUI, thanks.)

Now, recently Amazon launched Elastic Beanstalk, which is way over on the PaaS side of things, similar to Google App Engine.  “Just upload your application and we’ll run it and scale it; you don’t have to worry about the plumbing.” Of course this sharply limits what you can do, and it doesn’t address the question of “what if my overall system consists of more than just one Java app running in Beanstalk?”

If your goal is full model driven automation to achieve “infrastructure as code,” none of these solutions are entirely satisfactory. I understand CloudFormation deeply because we went down that same path and developed our own system model ourselves as a response!

I’ll also note that this is very similar to what Microsoft Azure does.  Azure is a hybrid IaaS/PaaS solution – their marketing tries to say it’s more like Beanstalk or Google App Engine, but in reality it’s more like CloudFormation – you have an XML file that describes the different roles (tiers) in the system, defines what software should go on each, and lets you control the entire system as a unit.

So What Is CloudFormation?

Basically CloudFormation lets you model your Amazon cloud-based system in JSON and then provision and control it as a unit.  So in our use case of a three tier system, you would model it up in their JSON markup and then CloudFormation would understand that the whole thing is a unit.  See their sample template for a WordPress setup. (A mess more sample templates are here.)

Review the WordPress template; it lets you define the AMIs and instance types, the security group and ELB setup, and the RDS database back end, and it lets you feed in variables that’ll be used by the consuming software (like the WordPress username/password in this case).
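
To make that concrete, here’s a minimal sketch in Python of the shape such a template takes, dumped out as JSON – illustrative only, not the real WordPress sample; the property lists are trimmed way down and the AMI ID is a placeholder:

    import json

    # A radically trimmed-down model in the CloudFormation style: Parameters
    # feed values in, Resources declare the AWS pieces, and "Ref" ties them
    # together. See Amazon's WordPress sample for the full property set.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            "DBPassword": {"Type": "String", "NoEcho": "true"}
        },
        "Resources": {
            "WebSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Allow HTTP in",
                    "SecurityGroupIngress": [{"IpProtocol": "tcp",
                                              "FromPort": "80", "ToPort": "80",
                                              "CidrIp": "0.0.0.0/0"}]
                }
            },
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": "ami-xxxxxxxx",   # placeholder AMI
                    "InstanceType": "m1.small",
                    "SecurityGroups": [{"Ref": "WebSecurityGroup"}]
                }
            },
            "Database": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "Engine": "MySQL",
                    "DBInstanceClass": "db.m1.small",
                    "AllocatedStorage": "5",
                    "MasterUsername": "wordpress",
                    "MasterUserPassword": {"Ref": "DBPassword"}
                }
            }
        }
    }

    print(json.dumps(template, indent=2))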

Once you have your template you can tell Amazon to start your “stack” in the console! It’ll even let you hook it up to an SNS notification that’ll let you know when it’s done. You name the whole stack, so you can distinguish between your “dev” environment and your “prod” environment for example, as opposed to the current state of the Amazon EC2 console where you get to see a big list of instance IDs – they added tagging that you can try to use for this, but it’s kinda wonky.
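
You can also kick the stack off through the API instead of the console. Here’s a minimal sketch using boto’s CloudFormation support in Python – treat the filename, the parameter names, and the exact call signature as assumptions to check against your own template and boto version:

    import boto

    # Assumes your AWS credentials are in the usual boto config / environment.
    cfn = boto.connect_cloudformation()

    # "wordpress-template.json" is a hypothetical file kept in source control.
    with open("wordpress-template.json") as f:
        template_body = f.read()

    # Name the stack per environment, pass template parameters as (key, value)
    # pairs, and hook up an SNS topic so you get told when it's done. The
    # parameter names and the ARN below are illustrative placeholders.
    stack_id = cfn.create_stack(
        "wordpress-dev",
        template_body=template_body,
        parameters=[("WordPressUser", "admin"), ("WordPressPwd", "s3kr1t")],
        notification_arns=["arn:aws:sns:us-east-1:123456789012:stack-events"],
    )
    print(stack_id)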

Why Do I Want This Again?

Because a system model lets you do a number of clever automation things.

Standard Definition

If you’ve been doing Amazon yourself, you’re used to there being a lot of stuff you have to do manually.  Even you do it differently from one system build to the next, and God forbid you have multiple techies working on the same Amazon system. The basic value proposition of “don’t do things manually” is huge.  You configure the security groups ONCE and put them into the template, and then you’re not going to forget to open port 23 AGAIN next time you start a system. A core part of DevOps’ value proposition is treating system configuration as code that you can source control, fix bugs in and have them stay fixed, etc.

And if you’ve been trying to automate your infrastructure with tools like Chef, Puppet, and ControlTier, you may have been frustrated in that they address single systems well, but do not really model “systems of systems” worth a damn.  Via new cloud support in knife and stuff you can execute raw “start me a cloud server” commands but all that nice recipe stuff stops at the box level and doesn’t really extend up to provisioning and tracking parts of your system.

With the CloudFormation template, you have an actual asset that defines your overall system.  This definition:

  • Can be controlled in source control
  • Can be reviewed by others
  • Is authoritative, not documentation that could differ from the reality
  • Can be automatically parsed/generated by your own tools (this is huge – see the sketch below)
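
Here’s what that last point can look like in practice – a minimal sketch in Python, assuming your base template keeps the instance size as a parameter with a default (adjust to whatever your real template actually looks like):

    import copy
    import json

    # Load the stack template your team keeps in source control
    # ("base-template.json" is a hypothetical filename).
    with open("base-template.json") as f:
        base = json.load(f)

    # Stamp out a small dev stack and a beefier prod stack from the same model.
    sizes = {"dev": "m1.small", "prod": "m1.large"}
    for env, instance_type in sizes.items():
        variant = copy.deepcopy(base)
        variant["Parameters"]["InstanceType"]["Default"] = instance_type
        with open("%s-template.json" % env, "w") as out:
            json.dump(variant, out, indent=2)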

It’s also nicely transparent; when you go to the console and look at the stack it shows you the history of events, the template used to start it, the startup parameters it used… Moving away from the “mystery meat” style of system config.

Coordinated Control

With CloudFormation, you can start and stop an entire environment with one operation. You can say “this is the dev environment” and be able to control it as a unit. I assume at some point you’ll be able to visualize it as a unit; right now all the bits are still stashed in their own tabs (and I notice they don’t make any default use of their own tagging, which makes it annoying to pick out which parts came from that stack).

This is handy for not missing stuff on startup and teardown… A couple weeks ago I spent an hour deleting a couple hundred rogue EBSes we had left over after a load test.

And you get some status eventing – one of the most painful parts of trying to automate against Amazon is the whole “I started an instance, I guess I’ll sit around and poll and try to figure out when the damn thing has come up right.”  In CloudFormation you get events that tell you when each part, and then the whole, is up and ready for use.
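
Here’s a rough sketch of consuming that from the API side, again with boto – the stack status strings are real CloudFormation states, but treat the attribute names as assumptions to verify against your boto version:

    import time
    import boto

    cfn = boto.connect_cloudformation()

    # One event stream and one overall status to watch, instead of hand-rolled
    # "is my instance up yet?" polling against each individual EC2 resource.
    while True:
        stack = cfn.describe_stacks("wordpress-dev")[0]
        for event in cfn.describe_stack_events("wordpress-dev"):
            print("%s %s %s %s" % (event.timestamp, event.resource_type,
                                   event.logical_resource_id,
                                   event.resource_status))
        if stack.stack_status in ("CREATE_COMPLETE", "CREATE_FAILED",
                                  "ROLLBACK_COMPLETE"):
            break
        time.sleep(15)

    print("Final status: %s" % stack.stack_status)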

What It Doesn’t Do

It’s not a config management tool like Chef or Puppet. Except for what you bake onto your AMI it has zero software config capabilities, or command dispatch capabilities like Rundeck or mcollective or Fabric. Although it should be a good integration point with those tools.

It’s not a PaaS solution like Beanstalk or GAE; you use those when you just have an app you want to deploy to something that’ll run it.  Now, it does erode some use cases – it makes a middle point between “run it all yourself and love the complexity” and “forget configurable system bits, just use PaaS.”  It allows easy reusability, say having a systems guy develop the template and then a dev use it over and over again to host their app, but with more customization than the pure-play PaaSes provide.

It’s not quite like OVF, which is more fiddly and is more about virtually defining the guts of a single machine than about defining a set of systems with roles and connections.

Competitive Analysis

It’s very similar to Microsoft Azure’s approach with their .cscfg and .csdef files which are an analogous XML model – you really could fairly call this feature “Amazon implements Azure on Amazon” (just as you could fairly call Elastic Beanstalk “Amazon implements Google App Engine on Amazon”.) In fact, the Azure Fabric has a lot more functionality than the primitive Amazon events in this first release. Of course, CloudFormation doesn’t just work on Windows, so that’s a pretty good width vs depth tradeoff.

And it’s similar to something like a RightScale, and ideally will encourage them to let customers actually submit their own definition instead of the current clunky combo of ServerArrays and ServerTemplates (curl or Web console?  Really? Why not a model like this?). RightScale must be in a tizzy right now, though really just integrating with this model should be easy enough.

Where To From Here?

As I alluded, we actually wrote our own tool like this internally called PIE that we’re looking to open source because we were feeling this whole problem space keenly.  XML model of the whole system, Apache Zookeeper-based registry, kinda like CloudFormation and Azure. Does CloudFormation obsolete what we were doing?  No – we built it because we wanted a model that could describe cloud systems on multiple clouds and even on premise systems. The Amazon model will only help you define Amazon bits, but if you are running cross-cloud or hybrid it is of limited value. And I’m sure model visualization tools will come, and a better registry/eventing system will come, but we’re way farther down that path at least at the moment.

Also, the differentiation between “provisioning tools” that control and start systems like CloudFormation and bcfg2 and “configuration” tools that control and start software like Chef and Puppet (and some people even differentiate between those and “deploy” tools that control and start applications like Capistrano) is a false dichotomy. I’m all about the “toolchain” approach but at some point you need a toolbelt. This tool differentiation is one of the more harmful “Dev vs Ops” differentiations.

I hope that this move shows the value of system modeling and helps people understand we need an overarching model that can be used to define it all, not just “Amazon” vs “Azure” or “system packages” vs “developed applications” or “UNIX vs Windows…” True system automation will come from a UNIVERSAL model that can be used to reason about and program to your on premise systems, your Amazon systems, your Azure systems, your software, your apps, your data, your images and files…

Conclusion

You need to understand CloudFormation, because it is one of the most foundational, high-leverage changes AWS has come out with in some time. I don’t bother to blog about most of the cool new AWS features; they are cool and I enjoy them, but this one is part of a more revolutionary change in the way systems are managed: the whole DevOps thing.

7 Comments

Filed under Cloud, DevOps

Scrum for Operations: Order from Chaos

Welcome to the second installment in Scrum for Operations, a series where I talk about (and go through) the process of doing systems work as part of a DevOps team according to the Scrum methodology. Last time, I introduced the basics of Scrum as it generally is used, and its key benefit of frequently delivering useful functionality. But I already hear the objections – “How can that turn out all right?” It is so light on process that one’s initial inclination is to dismiss it as “cowboy coding”, and we already know not to be “cowboy sysadmins,” right? One’s intuition might be (and mine was in the beginning, I’ll be honest) that this would lead to an unstable process that could not be sustained, with fundamental, fatal flaws eventually overtaking it.

Well, as I learned after digging in and kicking the tires with our dev team here, there are several core disciplines that are agile’s saving graces.

Testing

We ops guys are used to testing being a neglected afterthought in the development process, often tossed over the wall to a QA team that isn’t well integrated into the product. Therefore we have a hard time trusting code that’s being handed over to us given our experience – we get it handed to us and it doesn’t work!

Well, agile pretty much understands that without pervasive testing, this kind of fast-cycle process is doomed. At its extreme, some practitioners use Test Driven (aka Test First) development, where failing tests must be written first and then code filled in behind them till the tests pass. This builds up a large test suite as an inherent part of development.

Even agile groups that don’t do this almost always have metrics on unit test coverage and a required bar devs must hit.  Here, our desktop software group that’s newly using Scrum has the mandate that “there must be XX% unit test code coverage or you’re not ready to ship.”

Similarly, acceptance testing (automated continuous testing of user stories vs the code) is a common part of agile. Continuous ongoing testing ensures quality through the dev cycle and reduces the need for time-intensive, and mysteriously always insufficient, big-cycle regression testing.

This is a great culture. And there’s all kinds of different tests – unit test, integration/functional/regression testing, performance testing, fault testing… Starting to get interesting to you?  How about monitoring? In reality application monitoring is a special case of testing – it’s a “lightweight integration regression test.” Our initial approach to DevOps includes making test coverage goals for things like monitors and performance testing, because that plugs into the existing agile mindset well.

Bonus new terminology thing – the quick acceptance test you do upon release, which we always called “critical path testing,” is now being called “smoke testing” by the hip. Update your dictionaries!

A side note on formal QA groups. Just as we are working on DevOps, there has been previous work on how QA teams interact with agile dev teams, and there are a variety of different doctrines on how to split the work – often, it’s devs that are responsible for a lot of the testing. It’s a hard balance – you want the devs to be responsible for some of the testing because the best testing is “close to the code,” but just like with Ops, a real QA team has expertise beyond what a developer can just bolt in with 10% of their attention. Here, we have a dev team and also a remote QA team; devs test their own code on the daily build and then there’s a weekly push to a more stable environment where the QA team does acceptance testing and is moving into performance testing and the like.

Anyway, this endemic focus on testing and automation of testing and testing metrics is the pin that makes this agile flywheel actually turn without just flying off. (You are correct, some agile teams don’t do this – we call those “the unsuccessful ones.”)

And this is for you to do as well! There’s a whole post, or series of posts, on the topic of “what does a unit test mean for something infrastructurey” – it is incumbent on you to figure that out and to have high test coverage for your own work.
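
As a starting point, here’s a minimal sketch of what a “unit test” for something infrastructurey might look like – really just a lightweight availability check, written with nothing but the Python standard library; the hostname is a placeholder:

    import socket
    import unittest
    import urllib.request


    class TestWebTier(unittest.TestCase):
        """Infrastructure 'unit tests': cheap assertions about a running tier."""

        HOST = "web.example.com"  # placeholder hostname

        def test_http_port_is_listening(self):
            # Fails fast if nothing is listening on port 80.
            conn = socket.create_connection((self.HOST, 80), timeout=5)
            conn.close()

        def test_homepage_returns_200(self):
            # The same check a monitor would run on a schedule.
            resp = urllib.request.urlopen("http://%s/" % self.HOST, timeout=5)
            self.assertEqual(resp.getcode(), 200)


    if __name__ == "__main__":
        unittest.main()

Wire something like that into both your build and your monitoring schedule and the same check serves both purposes.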

Refactoring

In general, agile dev is the epitome of horizon planning. You know you can’t get all the requirements ahead of time (or if you do get them all ahead of time, what you come out with won’t serve any real human’s need) and similarly preplanned architecture and design often doesn’t survive contact with the scrum. So it’s not “don’t plan or design,” but it’s “plan and design in an ongoing manner.”

This is one of the scariest parts for an ops person – we assume that we get one “bite at the apple”, and once we’ve set up the systems and let in the developers, we’ll never be allowed to change anything without a fight.  But developers have this problem internally all the time – one dev is working on a core library or API that other developers are using, and they don’t wait for the core guy to get done before they start. Instead, they have adopted a concept they call refactoring. Refactoring just means that each sprint, you are open to redoing fundamental stuff that needs to change (or that you realize you did kinda ghetto in the first place).

Because this is an accepted part of the iterative approach, you get to leverage this as well. In the first iteration they get the basic Tomcat and MySQL install out of the repo, and they can get started – and then in the second iteration you front it with Apache, or tune the DB for security, or whatnot, and they have to make some changes to fit. I’m not promising no one will ever cry about this, but it’s a part of what makes the culture successful, so it’s there for you to leverage.

And for you to adhere to! Be open to refactoring your infrastructure based on the emerging project needs.

Source Control

A developer might not even mention this, and most books on agile don’t, because to them it’s so fundamental a discipline that it’s like breathing air. Sadly the same cannot be said of Ops folks, so I’m mentioning it. When code is changed, it goes into a shared source control repository – which gives other people in the group visibility into it (a collaboration touchpoint), provides a common place to source it from (a deployment touchpoint), and can be used to easily manage multiple versions, even experimental ones, and to merge or roll back changes.

This is the most fundamental empowering technology of modern software development (not just agile) and you must adopt it immediately or you have lost.  Fair warning. It is the stepping stone that will allow subsequent Cool DevOps Automation to happen.

Conclusion

These three disciplines convert agile/Scrum from a dangerous free-for-all into a technique that gets your product done both more quickly and with higher quality than a waterfall method. I’ll talk further next time about how Ops slots into all this, and how you can fit your systems admin work into a Scrum mindset.

Also note, there are some other agile disciplines surrounding agile design and encapsulation and patterns and whatnot, which I don’t understand well enough yet to speak authoritatively on. If you do, feel free to chime in with other core disciplines!

Leave a comment

Filed under DevOps

GeekAustin DevOps #1 – Puppet, Chef, bcfg2, but no Dev

All three Agile Admins were at this Austin event last Saturday; GeekAustin had a set of back to back presentations on puppet, chef, and bcfg2 downtown at Elysium. A good crowd was there, maybe 50 people.

Matt Ray presented on Chef and has posted his slides. Jeff McCune spoke on Puppet and Sol Jerome spoke on bcfg2; I’ll link their slides here whenever I become aware of them being posted.

I had to leave before bcfg2, but the chef and puppet presentations were interesting.  One of the main audience comments was “these are pretty similar,” which is true. Both are real good.

Probably the one problem with the event was that it wasn’t really DevOps.  It was just tool presentations for sysadmins.  Our developers took one look and didn’t come; we met one or two developers at the event that weren’t getting a lot out of it.

It’s fine to have an event looking at these tools – but I want to caution the DevOps community that the value of DevOps is the different approach to life. I think the excellent dev2ops post on “DevOps is not a technology problem, DevOps is a business problem” explains this perfectly. PEOPLE over PROCESS over TOOLS. If you don’t have engagement from the developers, you don’t have DevOps, no matter what whizbang gadgets you have.  Frankly I’d like to see the tool vendors address this in their presentations. A little bit of “you’re a sysadmin, this saves you work, isn’t it cool” is fine but how about talking about how this can be used to collaborate with the developers?  How does that model work?

At DevOpsDays US I warned about adopting new tools without adopting new tactics, using the example of the deployment of the Minié ball to muskets used in the American Civil War.  Greater accuracy and range, but everyone still just stood there in lines and charged in formation, and the slaughter was profound.  You may have a shiny new weapon, but if you are just using it back in your dark sysadmin cave, the problems that beset you will never go away. You’ll still be at the bottom of the food chain, only taken seriously when someone can’t get their email.

I know the guys at Puppet Labs, Opscode, etc. are all big into DevOps – but I guess I’d like to see the tool presentations and value props speak to developers as well somehow. What would that look like?  Ideas?

6 Comments

Filed under DevOps

Scrum for Operations: What Is Scrum

Agile

It’s not a mandatory part of DevOps, but I believe that DevOps works a lot better if operations teams adopt Agile.  But all that most systems teams know about Agile is that “it’s that thing that makes the development teams not plan worth a damn any more.”  Well, though there may be some truth to that, a well run agile process is very effective and not uncontrolled at all. Constructing and maintaining infrastructure in an agile manner is very possible – we’ve done it. In the beginning it seems daunting, just as it did initially to the legions of software developers who were steeped in waterfall based thinking, but once you adopt it I think you’ll see a lot of traditional pain points get a lot better – that was our experience. This is the first in a series of posts about using Scrum for Web operations, and I thought I’d start with explaining what Scrum is from an ops guy’s point of view.

Scrum

Scrum is one of the more popular agile methodologies.  Agile was initially defined by the Agile Manifesto and from there got turned into more specific implementations of its principles, methods, and practices; Scrum is one of those more specific prescriptions on how to “do” agile. Other methodologies have been used for DevOps – for example, check out this great presentation on how Stephen Nelson-Smith (@LordCope) did an XP DevOps implementation at a British government agency.

While developing our first two products that we delivered on the cloud using DevOps, we used an “agile-ish” methodology, which is to say not a formal agile approach.  This time, we’ve decided to run Scrum by the book, not just for the feature developers but for our systems engineers, operations staff, security engineers, and system automation developers. Some folks are talking about hybrid models like Scrumban (Scrum + Kanban/Lean) to better incorporate ops work, but we’re going to start with Scrum first and see where we get to.

To a system administrator, that’s scary, because we don’t know Scrum. But Scrum is pretty darn simple.

Here are two short videos that give a pretty good intro to Scrum.

Scrum in Under 10 Minutes, courtesy Axosoft:

Scrum Basics:

(P. S. Scrum master lady from this video… Call me!)

But if like me you think videos are not an efficient method of conveying information, here’s the short form.

The Team

In Scrum, you form a small cross-functional team to all work together on a product instead of having to cross organizational boundaries and fill out forms to get any work done.  The roles consist of:

  • product owner – the “business guy” who has say over what features get greenlit and when
  • scrum master – the project manager who keeps everyone on track
  • team – ideally 5 to 7 developers writing code (this is where Ops will plug in, in DevOps)
  • testers, security, other folks – probably should be included too, but adoption of that varies

Of course in a mid to large sized organization there are other stakeholders, like customers and management and legal and whatnot. But you form a team that has everything it needs to complete its work.

The Backlog

All needed features are brainstormed and put into a master list of features called a “product backlog.” This backlog contains everything – including your systems tasks. The backlog is then broken up into smaller chunks, like release backlogs (features targeted for a specific release) and sprint backlogs (specific tasks for a sprint). The sprint backlog is basically your work breakdown structure, if you’re more comfortable with that terminology, except that the tasks are worded more in terms of what feature they provide instead of in terms of what specific things you need to do; this fosters communication with the product owner.

The developers (and, ideally, operations staff)  on the team help generate the backlog and provide time estimates in hours for each task. The product owner owns the prioritization and ordering (as constrained by things that are actual dependencies).

The Sprint

Work is performed in month-long iterations called “sprints.” Requirements are frozen for the sprint and the development is time-boxed – it must end at the termination of the sprint. At the end of the sprint, whatever features were put in that sprint should be complete and “ready to ship.”

The Standup

As the scrum, or sprint, progresses, there are daily “standups” – 15 minute meetings where everyone stands up, reports what they have done since the last standup, what they plan to accomplish by the next standup, and any roadblocks they are encountering. By keeping this meeting short it doesn’t waste time, but by having an actual face to face meeting you get very rapid and effective collaboration that cannot be achieved via managing project plans, sending emails, generating status reports, or the like.  It cuts out all the busywork and keeps the kernel of coordination that lets a team keep up velocity.

The Burndown

The progress of the team against the sprint backlog is tracked by a “burndown chart.” As each team member completes their tasks for the sprint you can easily see whether you are on track for successful completion or not.

And that’s Scrum in a nutshell.  Five things. Small integrated team, backlog, sprints, standups, burndown.

Next Time

I’ll talk about each of these areas more in depth later in the series, as we go through them ourselves as we develop a real product, and explain how a system administrator (aka systems engineer, infrastructure admin, operations ninja) can fit their work into this structure. But first, I will explain why Agile/Scrum is not just “crazy talk.” To a hardened system admin, or really to anyone used to working in a waterfall environment, it is very counterintuitive that this approach doesn’t just degenerate into the IT equivalent of orcs pillaging a city. But agile has several interesting practices that make it work, and should be very interesting to an operations person.

4 Comments

Filed under DevOps

Dev vs Ops vs Sec vs Mgmt

I was just reading this interesting InfoWorld post on The Most Common Turf Wars in IT – very relevant to what we talk about around here. It’s mainly about stakeholders not properly integrated into the product/project/release planning process. Their most common turf wars are so true:

“Ops vs. Dev” – A cry for DevOps. One of the reasons we have this blog.

“Admin on Admin” – And a good bit of the challenge with that is that admins themselves are a bit grumpy by nature and need them a little “OpsOps.” Being chronically marginalized and given a vision of “contain costs!” is to blame, but it means we sysadmins have a little more personal-skills work to do in order to go from BOFH to team collaborator.

“Security vs Everyone” – Bringing security to the DevOps table can be challenging; if there’s one group more elitist and cynical than sysadmins, it’s security professionals. The job obviously requires a little bit of that mindset. But there’s some movement in the security community to understand the need for collaboration in a DevOps style.

“Management vs. Staff” – Well, that’s more of a question for the ages.

There are others, but these are definitely the “big four” in my experience.  What do y’all think?

Leave a comment

Filed under DevOps

Inside Microsoft Azure

Recently, I delivered a presentation at the Austin Cloud User Group introducing them to Microsoft Azure.  I’m a UNIX bigot and have been doing the Amazon Cloud and open source thing, but we are delivering a product via Azure next so our team is learning it.  It’s actually quite interesting and has a number of good points; it’s mainly hindered by the Microsoft marketing message trying to pretend it’s all magical fairy dust instead of clearly explaining what it is and what it can do. So if you want to hear what Azure is in straight shooting UNIX admin speak, check it out!

1 Comment

Filed under Cloud

Application Performance Management in the Cloud

Cloud computing has been the buzz and hype lately and everybody is trying to understand what it is and how to use it. In this post, I wanted to explore some of the properties of “the cloud” as they pertain to Application Performance Management. If you are new to cloud offerings, here are a few good materials to get started, but I will assume the reader of this post is somewhat familiar with cloud technology.

As Spiderman would put it, “With great power comes great responsibility” … The cloud abstracts away some of the infrastructure parts of your system and gives you the ability to scale up resources on demand. This doesn’t mean that your applications will magically work better or understand when they need to scale; you still need to worry about measuring and managing the performance of your applications and providing quality service to your customers. In fact, I would argue APM is several times more important to nail down in a cloud environment, for several reasons:

  1. The infrastructure your apps live in is more dynamic and volatile. For example, cloud servers are VMs, and the resources given to them by the hypervisor are not constant, which will impact your application. Also, VMs may crash or lock up and not leave much trace of what caused the issues with your application.
  2. Your applications can grow bigger and more complex as they have more resources available in the cloud. This is great but it will expose bottlenecks that you didn’t anticipate. If you can’t narrow down the root cause, you can’t fix it. For this, you need scalable APM instrumentation and precise analytics.
  3. On-demand scaling is a feature that is touted by all cloud providers, but it all hinges on one really hard problem that they won’t solve for you: understanding how your application performance and workload behave so that you can properly instruct the cloud scaling APIs what to do. If my performance drops, do I need more machines? How many? APM instrumentation and analytics will play a key part in how you solve the on-demand scaling problem – see the sketch after this list.
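
To make that last point concrete, here’s a deliberately naive sketch of the kind of decision loop you end up owning – the metric and scaling functions are hypothetical placeholders for whatever your APM tooling and cloud APIs actually provide:

    import time


    def get_p95_latency_ms():
        """Placeholder: pull a percentile response time from your APM analytics."""
        raise NotImplementedError("wire this up to your APM system")


    def set_app_server_count(count):
        """Placeholder: call your cloud provider's scaling API."""
        raise NotImplementedError("wire this up to your cloud's scaling API")


    # The hard part isn't calling the scaling API; it's knowing, from real
    # performance data, when and by how much to scale.
    TARGET_MS = 500
    servers = 4

    while True:
        latency = get_p95_latency_ms()
        if latency > TARGET_MS and servers < 20:
            servers += 2            # scale out in steps, not all at once
        elif latency < TARGET_MS / 2 and servers > 2:
            servers -= 1            # scale back in cautiously
        set_app_server_count(servers)
        time.sleep(300)             # re-evaluate every five minutes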

So where is APM for the cloud, you ask? It is in its very humble beginnings as APM solution providers have to solve the same gnarly problems application developers and operations teams are struggling with:

  1. Cloud is dynamic and there are no fixed resources. IP addresses, hostnames, and MAC addresses change, among other things. Properly addressing, naming and connecting the machines is a challenge – both for deep instrumentation agent technology and for the apps themselves.
  2. Most vendors’ licensing agreements are based on a fixed count of resources rather than a utility model, and are therefore difficult to use in a cloud environment. Charging a markup on the utility bill seems to be a common approach among the cloud-literate, but it introduces a kink in the regular budgeting process, and no one wants to write a variable but sizable blank check.
  3. Well, the cloud scales… a lot. The instrumentation/monitoring technology has to have low enough overhead and be able to support hundreds of machines.

On the synthetic monitoring end there are plenty of options. We selected AlertSite, a distributed synthetic monitoring SaaS provider similar to Keynote and Gomez, for simplicity, but it only provides high-level performance and availability numbers for SLA management and “keep the lights on” alerting. We also rely on Cloudkick, another SaaS provider, for the system monitoring part. Here is a more detailed use case on the Cloudkick implementation.

For deeper instrumentation we have worked with OPNET to deploy their AppInternals Xpert (formerly OPNET Panorama) solution in the Amazon EC2 cloud environment. We have already successfully deployed AppInternals Xpert on our own server infrastructure to provide deep instrumentation and analysis of our applications. The cloud version looks very promising for tackling the technical challenges introduced by the cloud, and once it is fully deployed, we will have a lot of capability to harness cloud scale and performance. More on this as it unfolds…

In summary: vendors will sell you the cloud, but be prepared to tackle the traditional infrastructure concerns and stick to your guns. No, the cloud is not going to fix your performance problems and it is not going to magically scale for you; you will need some form of APM to help you there. Be on the lookout for providers, and do challenge your existing partners to think about how they can help you. The APM space in the cloud is where traditional infrastructure was in 2005, but things are getting better.

To the cloud!!! (And do keep your performance engineers on staff, they will still be saving your bacon.)

Peco

Leave a comment

Filed under Cloud

Innotech Austin 2010

I went to the local Austin annual IT convention, Innotech, a while back.   No, it’s not a coincidence that it sounds like the company from Office Space.

It was pretty good, at least for a couple-hour visit.  It’s somewhat disappointing that more of the Austin-based tech companies don’t show up, to recruit if nothing else… The show floor is all little consulting companies and printer vendors, no Zenoss/BazaarVoice/HomeAway/etc. There were an interestingly large number of booths around “helping startups” in general, though.

I went to two sessions.  The first was the Beta Summit, where you get 10 minute pitches from some of the hot new Austin startups about what they’re doing.

First up was Matt Curtain of Socialsmack. Yelp/fb/five-star ratings are pointless for brands, so they’ve come up with a “props/drops” rating system people can use for brands, as well as ask questions and rate answers. It’s kinda Stack Exchange-y, as if there were a “Random Consumer Brands Stack Exchange.” You can think of it as “Bazaarvoice lite.” They did one for Kona Grill in the Domain that got onto the news. Seems like a fine concept; the question is “why would I want to go use it?” Seems not quite focused enough.  Like Stack Exchange, maybe a “cars Socialsmack” et al. would have enough focus to bring people?

Chad Ferrell of Recyclematch talked about their site, which matches up things people have and want to recycle with people that want them.  It’s “Homeaway for trash.” Or more so than Craigslist, anyway.  Seems like a good play into the green space.

Next up was Ricochet Labs! Who hasn’t played Qrank on the iPhone? It’s a sweet game.  Fascinatingly, they are not a game company.  Rodney Gibbs says they are developing a location-based social platform to target verticals, and Qrank was just a demo proof of concept.
They expect that the OS will own “location checkin” eventually, instead of it being something 200 apps all provide. They run a cloud-based SaaS model using a distributed SOA deployment. Next on their plate is Yelp integration, and then they want to add:

  • Content channels
  • Offers/redemptions
  • Platforms

I have to say I love Qrank and these guys seem like they know what they’re doing.

Eric Katerman introduced Hurricane Party, another iPhone app that lets people define ongoing parties for people to come to; it makes little hurricane icons on the map that show the magnitude of the party.  They hope to parlay it into locations providing group deals.  So it’s like a flash mob for partyin’. I put the app on my phone but haven’t gone to a party yet – they only really happen in Austin (I was bored in Houston one day but no luck).

Next up was Workstreamer. They collect/analyze/deliver info on businesses from social media and whatnot to perform “many to many brand analysis.” Seems like there’s a metric assload of these “evaluate your brand by grepping Twitter” plays; we’ll see which ones excel and survive.

Finally we had the HBMG Vector. I am torn on this.  It’s supposed to be a private cloud-in-a-box.  The presentation was very 1980s though and it seemed like an old school consulting company that has some frankly not very aligned products.

Then I went to a presentation on “IBM Smarter Planet,” as it seems relevant to what we do at NI. The premise is that the world is becoming “instrumented, interconnected, intelligent.” He talked about partners like Johnson Controls, Eaton, and Siemens doing this, and noted that even the average building nowadays is kicking out a lot of data.  I agree with all this, but there weren’t many good takeaways or new insights.

Leave a comment

Filed under Conferences

Hello from Strata!

Two of the Agile Admins, Peco and Ernest, are at the new Strata conference in San Jose this week. It’s about “Data Science” and “Big Data” – the confluence of the NoSQL movement, cloud computing, and the Petabyte Age.  We now have the ability to gather more data than ever before, and even process it effectively, and this will be transformative to business and society.

We’ll be bringing you interesting things we find out from the conference, inasmuch as the shaky wireless allows.

Yesterday, we attended a variety of tutorials, and I’m sitting in the keynotes right now on the first day of the “main” conference.  You can follow along with the keynotes at strataconf.com/live and most presenters are getting their slides and materials up on the site as well. It’s been good so far, stand by for more!

Leave a comment

Filed under Conferences