
Velocity 2010 – Drizzle

Monty Taylor from Rackspace talked about Drizzle, a MySQL variant “built for operations”.  My thoughts will be in italics so you can be enraged at the right party.

Drizzle is “a database for the cloud”.  What does that even mean?  It’s “the next Web 2.0”, which is another way of saying “it’s the new hotness, beeyotch” (my translation).

MySQL scaling to multiple machines brings you sadness.  And MySQL deployment is crufty as hell.  So step 1 to Drizzle recovery is that they realized “Hey, we’re not the be-all and end-all of the infrastructure – we’re just one piece people will be putting into their own structure.”  If only other software folks would figure that out…

Oracle-style vertical scaling is lovely, by a different and lesser definition of scaling.  Cloud scaling is extreme!  <Play early 1990s music>  It requires multiple machines.

They shard.  People complain about sharding, but that’s how the Internet works – the Internet is a bunch of sites sharded by functionality.  QED.
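
To make “sharded by functionality” concrete, here’s a minimal sketch of the two flavors in play – splitting by feature and splitting by key.  All the names and the hashing scheme are invented for illustration; this is not anything Drizzle itself prescribes.

    import hashlib

    # Functional shards: whole features live on dedicated servers.
    FUNCTIONAL_SHARDS = {"users": "db-users.example.com",
                         "orders": "db-orders.example.com"}
    # Key shards: everything else is spread out by hashing a shard key.
    KEY_SHARDS = ["db0.example.com", "db1.example.com", "db2.example.com"]

    def route(table, shard_key=None):
        """Pick the database server responsible for this query."""
        if table in FUNCTIONAL_SHARDS:
            return FUNCTIONAL_SHARDS[table]
        digest = hashlib.md5(str(shard_key).encode()).hexdigest()
        return KEY_SHARDS[int(digest, 16) % len(KEY_SHARDS)]

    print(route("users"))          # db-users.example.com
    print(route("events", 12345))  # one of db0/db1/db2, stable per key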

“Those who don’t know UNIX are doomed to repeat it.”  The goal (read about the previous session on toolchains) is to compose stuff easily, string them together like pipes in UNIX.  But most of the databases still think of themselves as a big black box in the corner, whose jealous priests guard it from the unwashed heathen.

So what makes Drizzle different? In summary:

  • Fewer features
  • Ops driven
  • Sane config
  • Plugins

Fewer features means fewer ways for developers to kill you.  Oracle’s “run Java within the database” is an example of totally retarded functionality whose main job is to ruin your life.  No stored procedures, no triggers, no prepared statements.  This avoids developer sloppiness.  “Insert a bunch of stuff, then do a select and the database will sort it!” is not appropriate thinking for scale.

Ops driven means not marketing driven, which means driven by lies.  For example, there are no marketdroids pushing them to add a MySQL-style event scheduler when cron exists.  Or “we could sell more if we had ANSI compliant stored procedures!”  There’s no company behind them letting the nasty money affect their priorities.

They don’t do competitive benchmarks, as they are all lies.  That’s for impartial third parties to do.  They do publish their regression tests vs themselves for transparency.

You get Drizzle via distros.  There are no magic “gold” binaries – people that do that are evil.  But distros sometimes get behind.  (See pandora-build, their build system.)

They have sane defaults.  If most people are going to set something a certain way (like FRICKING INNODB), that’s how it installs by default.  To install Drizzle, the only mandatory thing to specify is the data directory.

Install from apt/yum works.  Or configure/make/make install and run drizzled.  No bootstrap, no system tables, whatever.

They use plugins.  MySQL plugins are a pain – more of a patch, really.  You can just add them at startup time, no SQL from a sysadmin.  And no loading during runtime – see “fewer features” above.  This is still in progress, especially config file snippets.  But plugins are the new black.

They have pluggable protocols.  It ships with MySQL and Drizzle protocols, but you can plug in console, HTTP/REST, or whatever.  Maybe dbus…  Their in-progress Drizzle protocol removes the potential for SQL injection by only delivering one query, has a sharding key in the packet header, supports HTTP-like redirects…
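
Why a sharding key in the packet header is a big deal: a proxy can route the query without ever parsing SQL.  Here’s a toy Python illustration of the idea – the wire layout is made up for demonstration, not the actual Drizzle protocol.

    import struct

    def pack_query(shard_key, sql):
        key, body = shard_key.encode(), sql.encode()
        # header: 2-byte key length + 4-byte body length, then both payloads
        return struct.pack("!HI", len(key), len(body)) + key + body

    def peek_shard_key(packet):
        key_len, _body_len = struct.unpack_from("!HI", packet)
        return packet[6:6 + key_len].decode()  # 6 = fixed header size

    pkt = pack_query("customer:42", "SELECT * FROM orders WHERE cust_id = 42")
    print(peek_shard_key(pkt))  # 'customer:42' - routable without reading SQL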

libdrizzle has both client and server ends, and talks both MySQL and Drizzle.

So what about my app that always auths to the database with its one embedded common username/password?  Well, you can do none, PAM, LDAP (done well), or HTTP.  You just say authenticate(user, pass) and it does it.  It has pluggable authorization too: none, LDAP, or hard-coded.
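
Here’s roughly what that pluggable shape implies, sketched in Python (the real plugins are C++, and these class names are hypothetical): the server calls one authenticate(user, pass) entry point, and whichever plugin you chose at startup decides how to answer it.

    class NoneAuth:
        """Wide open - for dev boxes only."""
        def authenticate(self, user, password):
            return True

    class HardcodedAuth:
        """The 'one embedded username/password' pattern, as a plugin."""
        def __init__(self, table):
            self.table = table  # {user: password}
        def authenticate(self, user, password):
            return self.table.get(user) == password

    class Server:
        def __init__(self, auth_plugin):
            self.auth = auth_plugin  # picked at startup, not loaded at runtime
        def login(self, user, password):
            return self.auth.authenticate(user, password)

    server = Server(HardcodedAuth({"app": "s3cret"}))
    print(server.login("app", "s3cret"))  # True
    print(server.login("app", "nope"))    # False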

There is a pluggable query filter that can detect and stop dumb queries – without requiring a proxy.
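
A sketch of what such a filter might look like – the heuristic here (refuse unbounded SELECTs on known-big tables) is my own invention, just to show the plugin point:

    import re

    BIG_TABLES = {"events", "logs"}

    def allow_query(sql):
        s = sql.strip().lower()
        m = re.match(r"select\s.+?\sfrom\s+(\w+)", s)
        if m and m.group(1) in BIG_TABLES:
            # a full scan of a big table, no WHERE and no LIMIT: refuse it
            if "where" not in s and "limit" not in s:
                return False
        return True

    print(allow_query("SELECT * FROM logs"))            # False - dumb query
    print(allow_query("SELECT * FROM logs LIMIT 100"))  # True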

It has pluggable logging – none, syslog, gearman, etc. – and errors too.

Pluggable replication!  A new scheme based on Google protocol buffers, readable in Java, Python, and C++.  It’s logical-change based (not quite row-based).  Combined with protocol redirects, it’s db migration made easy!
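
To picture “logical-change based”: each replication event describes what changed, not the SQL statement that changed it.  A stand-in sketch in plain Python – the real thing uses generated protocol buffer classes, and these fields are representative rather than Drizzle’s actual message definition:

    import json
    import time

    class ChangeEvent:
        def __init__(self, schema, table, op, key, columns):
            self.schema, self.table, self.op = schema, table, op
            self.key, self.columns = key, columns
            self.timestamp = time.time()

        def serialize(self):
            # stand-in for protobuf's SerializeToString(); any consumer that
            # knows the schema (Java, Python, C++) can read the stream
            return json.dumps(self.__dict__).encode()

    ev = ChangeEvent("shop", "orders", "UPDATE",
                     key={"id": 42}, columns={"status": "shipped"})
    print(ev.serialize())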

Boots!  A new command line client, on launchpad.net/boots.  It’s pluggable, scriptable, pipes SQL queries, etc.

P.S. MySQL can lick me! (I’m paraphrasing, but only a little.)


Velocity 2010: Cloud Security: It Ain’t All Fluffy and Blue Sky Out There!

Cloud security, bugbear of the masses.  For my last workshop of Velocity Day 1 I went to a talk on that topic.  I read some good stuff on it in Cloud Application Architectures on the plane in and could stand some more.  I “minor” in security, being involved in OWASP and all, and if there’s one area full of more FUD right now than cloud computing, it is cloud security.  Let’s see if they can dispel confusion!  (I hope it’s not a fluffy presentation that’s nothing but cloud pictures and puns; so many of these devolve into that.)

Anyway, Ward Spangenberg is Director of Security Operations for Zynga Game Networks, which does Farmville and Mafia Wars.  He gets to handle things like death threats.  He is a founding member of the Cloud Security Alliance™.

Gratuitous Definition of Cloud Computing time!  If you don’t know it, then you don’t need to worry about it, and should not be reading this right now.

Cloud security is “a nightmare,” says a Cisco guy who wants to sell you network gear.  Why?  Well, it’s so complicated.  Security, performance, and availability are the top 3 rated challenges (read: fears) about the cloud model.

In general the main security fuss is because it’s something new.  Whenever there is anything new and uncharted all the risk averse types flip out.

With the lower level stuff (like IaaS), you can build in security, but with SaaS you have to “RFP” it in because you don’t have direct control.

Top threats to cloud computing:

  • Abuse/nefarious use
  • Insecure APIs
  • And more, but the slide is gone.  We’ll go over it later, I hope.  Oh wait – here’s the list:

Multitenancy

The “process next door” may be acting badly, and with IPs being passed around and reused you can inherit blacklisted ones, or get DoSsed by traffic headed for the previous tenant.  No one likes to share.  You could get germs.  Anyway, they have to manage 13,000 IPs, and whitelisting them is arduous.
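
At least checking whether a recycled IP you’ve inherited is already blacklisted is automatable.  One common mechanism is a DNS blacklist lookup on the reversed octets – the zone below is just one well-known example, and whether any given DNSBL is appropriate for your traffic is your call:

    import socket

    def is_blacklisted(ip, zone="zen.spamhaus.org"):
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname("%s.%s" % (reversed_ip, zone))
            return True   # an A record back means "listed"
        except socket.gaierror:
            return False  # NXDOMAIN means "not listed"

    print(is_blacklisted("127.0.0.2"))  # DNSBL test address; should be True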

Not Hosted Here Syndrome

You don’t have insight into locations and other “data center level” stuff.  Even if they have something good, like a SAS 70 certification, you still don’t have insight into who exactly is touching your stuff.  Azure is nice, but have you tried to get your logs?  You can’t see them.  Sad.

Management tools and development frameworks don’t have all the security features they should.  Toolsets are immature and stuff like forensics is nonexistent.  And PaaS environments that don’t upgrade quickly end up being a large attack surface for “known vulnerabilities.”  You can reprovision “quickly” but it’s not instantaneous.

DoS

Stuff like DDoS and botnets are classic abuse.  He says there’s “always something behind it” – people don’t just DoS you for no profit!  And only IaaS and PaaS should be concerned about it!  I think that’s quite an overstatement, especially for those of us who don’t run 13,000 servers – people do DoS for kicks and for someone with 100 or fewer servers, they can be effective at it.

Note “Clobbering the Cloud” from DefCon 17.

Insecure Coding

XSS, injection, CSRF, all the usual… Use the tools.  Validate input.  Review code.  And insecure crypto, because doing real crypto is hard.
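
“Use the tools, validate input” in two lines of Python: never interpolate user input into SQL; let the driver parameterize it.  (sqlite3 is used here purely for the demo.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    evil = "alice' OR '1'='1"
    # BAD:  conn.execute("SELECT * FROM users WHERE name = '%s'" % evil)
    rows = conn.execute("SELECT * FROM users WHERE name = ?",
                        (evil,)).fetchall()
    print(rows)  # [] - the injection attempt is just a weird literal string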

Malicious insiders/Pissy outsiders

Devs, consultants, and the cloud company itself.  You need redundant checks and transparent review.

Shared Technology Issues

With a virtualization layer underneath, you can always potentially attack through it.  Check out Cloudburst and Red Pill/Blue Pill.

Data Loss and Leakage

Can happen.  Do what you would normally do to control it.  Encrypt some stuff.

Account or Service Hijacking

Users aren’t getting brighter.  Phishing etc. works great.  There’s companies like Damballa that work against this.  Malware is very smart in lots of cases, using metrics, self-improving.

Public deployment security impacts

Advantages – anonymizing effect, large security investments, pre-certification, multisite redundancy, fault tolerance.

Disadvantages – collateral damage, data & AAA security requirements, regulatory, multi-jurisdictional data stores, known vulnerabilities are global.

Going hybrid public/private helps some but increases complexity and adds data and credential exchange issues.

IaaS issues

Advantages: Control of encryption, minimized privileged user attacks, familiar AAA mechanisms, standardized and cross-vendor deployment, full control at VM level.

Disadvantages: Account hijacking, credential management, API security risks, lack of role based auth, full responsibility for ops, and dependence on the security of the virtualization layer.

PaaS Issues

Advantages: Less operational responsibility, multi-site business continuity, massive scale and resiliency, simpler compliance analysis, framework security features.

Disadvantages: Less operational control, vendor lockin, lack of security tools, increased likelihood of privileged user attack, cloud provider viability.

SaaS Issues

Advantages: Clearly defined access controls, vendor’s responsible for data center and app security, predictable scope of account compromise, integration with directory services, simplified user ACD.

Disadvantages: Inflexible reporting and features, lack of version control, inability to layer security controls, increased vulnerability to privileged user attacks, no control over legal discovery.

Q&A

If you are using something like Flash that goes in the client, how do you protect your IP?  You don’t.  Can’t.  It’ll get reverse engineered.  You can do some mitigations.  Try to detect it.  Sic lawyers on them.  Fingerprint code.

Yes, he plays all their games.

In the end, it’s about risk management.  You can encrypt all the data you put in the cloud, but what if they compromise the boxes you do the encryption on, or what if they try to crack your encryption with a whole wad of cloud boxes?  Yep.  It brings the real nature of security into clearer relief – it’s a continuum between stopping attacks by garden-variety goons and being vulnerable to attacks by Chinese-government and organized-crime funded ninja Illuminati.

Can you make a cloud PCI compliant?  Sure.  Especially if you know how to “work” your QSA, because in the end there’s a lot of judgment calls in the audit process.  Lots of encryption, even on top of SSL; public-key crypt it from the browser up using JS or something, then recrypt with an internal-only key.  Use your payment provider’s facilities for hashing or 30-day authorizations and re-auth.  Throw the card number away ASAP and you’re good!  Protecting your keys is the main problem in the all-public cloud.  (Could you ssh-agent it, inject it right into memory of the cloud boxes from on premise?)
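
The decrypt-then-recrypt step of that, sketched in Python.  This uses the third-party cryptography package’s Fernet (symmetric) purely as a stand-in; the real browser-to-server leg would be public-key as described above.

    from cryptography.fernet import Fernet

    browser_key = Fernet.generate_key()   # stand-in for the browser-leg key
    internal_key = Fernet.generate_key()  # never leaves your trusted side

    incoming = Fernet(browser_key).encrypt(b"4111111111111111")

    pan = Fernet(browser_key).decrypt(incoming)  # decrypt at the edge...
    stored = Fernet(internal_key).encrypt(pan)   # ...recrypt, internal key only
    del pan                                      # throw the number away ASAP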

Private cloud vs public cloud?  Well, with private you own the infrastructure.

This session was OK; I suspect most Velocity people expect something a little more technical.  There weren’t a lot of takeaways for an ops person – it was more of an ISSA or OWASP “technology decisionmaker” focused presentation.  If he had just put in a couple hardcore techie things it would have helped.  As it was, it was a long list of security threats that are all existing system security threats too.  How’s this different?  What are some specific mitigations?  Many of these were offered as “be careful!”  Towards the end, with the specific IaaS/PaaS/SaaS implications, it got better though.


CloudCamp Austin Is Soon!

Mark your calendars; Thursday of next week (June 10) is CloudCamp here in Austin!  It’s in North Austin at Pervasive’s offices (Riata Trace) from 5:30-10:00 PM.  Get details and sign up here.


Amazon Web Services – Convert To/From VMs?

In the recent Amazon AWS Newsletter, they asked the following:

Some customers have asked us about ways to easily convert virtual machines from VMware vSphere, Citrix Xen Server, and Microsoft Hyper-V to Amazon EC2 instances – and vice versa. If this is something that you’re interested in, we would like to hear from you. Please send an email to aws-vm@amazon.com describing your needs and use case.

I’ll share my reply here for comment!

This is a killer feature that allows a number of important activities.

1.  Product VMs.  Many suppliers are starting to provide third-party products in the form of VMs instead of software, to ease install complexity or in an attempt to move from a hardware appliance approach to a more software-based approach.  This pretty much prevents their use in EC2.  <cue sad music>  As opposed to “Hey, if you can VM-ize your stuff then you’re pretty close to being able to offer it as an Amazon AMI or even SaaS offering.”  <schwing!>

2.  Leveraging VM Investments.  For any organization that already has a VM infrastructure, being able to manage images the same way reduces cost and complexity.  It also enables the much-promised but under-delivered “cloud bursting” model, where you run the same systems locally and use Amazon for excess capacity.  In the current scheme I could make some AMIs “mostly” like my local VMs – but “close” is not good enough to use in production.

3.  Local testing.  I’d love to be able to bring my AMIs “down to me” for rapid redeploy.  I often find myself having to transfer 2.5 gigs of software up to the cloud, install it, find a problem, have our devs fix it and cut another release, transfer it up again (2 hour wait time again, plus paying $$ for the transfer)…

4.  Local troubleshooting. We get an app installed up in the cloud and it’s not acting quite right and we need to instrument it somehow to debug.  This process is much easier on a local LAN with the developers’ PCs with all their stuff installed.

5.  Local development. A lot of our development exercises the Amazon APIs.  This is one area where Azure has a distinct advantage and can be a threat; in Visual Studio there is a “local Azure fabric” and a dev can write their app and have it running “in Azure” but on their machine, and then when they’re ready deploy it up.  This is slightly more than VM consumption, it’s VMs plus Eucalyptus or similar porting of the Amazon API to the client side, but it’s a killer feature.

Xen or VMware would be fine – frankly, this would be big enough for us that I’d change virtualization solutions to the one that worked with EC2.

I just asked one of our developers for his take on value for being able to transition between VMs and EC2 to include in this email, and his response is “Well, it’s just a no-brainer, right?”  Right.


Microsoft Azure for Dummies – or for Smarties?

What Is Microsoft Azure?

I’m going to attempt to explain Microsoft Azure in “normal Web person” language.  Like many of you, I am more familiar with Linux/open source type solutions, and like many of you, my first forays into cloud computing have been with Amazon Web Services.  It can often be hard for people not steeped in Redmondese to understand exactly what the heck they’re talking about when Microsoft people try to explain their offerings.  (I remember a time some years ago I was trying to get a guy to explain some new Microsoft data access thing with the usual three letter acronym name.  I asked, “Is it a library?  A language?  A protocol?  A daemon?  Branding?  What exactly is this thing you’re trying to get me to uptake?”  The reply was invariably “It’s an innovative new way to access data!”  Sigh.  I never did get an answer and concluded “Never mind.”)

Microsoft has released their new cloud offering, Azure.  Our company is a close Microsoft partner, since we use a lot of their technologies in developing our desktop software products, so as “cloud guy” I’ve gotten some in-depth briefings and even went to PDC this year to learn more (some of my friends who have known me over the course of my 15 years of UNIX administration were horrified).  “Cloud computing” is an overloaded enough term that it’s not highly descriptive, and it took a while to cut through the explanations to understand what Azure really is.  Let me break it down for you and explain the deal.

Point of Comparison: Amazon (IaaS)

In Amazon EC2, as hopefully everyone knows by now, you are basically given entire dynamically-provisioned, hourly-billed virtual machines that you load OSes on and install software and all that.  “Like servers, but somewhere out in the ether.”  Those kinds of cloud offerings (e.g. Amazon, Rackspace, most of them really) are called Infrastructure As A Service (IaaS).  You’re responsible for everything you normally would be, except for the data center work.  Azure is not an IaaS offering but still bears a lot of similarities to Amazon; I’ll get into details later.

Point of Comparison: Google App Engine (PaaS)

Take Google’s App Engine as another point of comparison.  There, you just upload your Python or Java application to their portal and “it runs on the Web.”  You don’t have access to the server or OS or disk or anything.  And it “magically” scales for you.  This approach is called Platform as a Service (PaaS).   They provide the full platform stack, you only provide the end application.  On the one hand, you don’t have to mess with OS level stuff – if you are just a Java programmer, you don’t have to know a single UNIX (or Windows) command to transition your app from “But it works in Eclipse!” to running on a Web server on the Internet.  On the other hand, that comes with a lot of limitations that the PaaS providers have to establish to make everything play together nicely.  One of our early App Engine experiences was sad – one of our developers wrote a Java app that used a free XML library to parse some XML.  Well, that library had functionality in it (that we weren’t using) that could write XML to disk.  You can’t write to disk in App Engine, so its response was to disallow the entire library.  The app didn’t work and had to be heavily rewritten.  So it’s pretty good for code that you are writing EVERY SINGLE LINE OF YOURSELF.  Azure isn’t quite as restrictive as App Engine, but it has some of that flavor.

Azure’s Model

Windows Azure falls between the two.  First of all, Azure is a real “hosted cloud” like Amazon Web Services, like most of us really think about when we think cloud computing; it’s not one of these on premise things that companies are branding as “cloud” just for kicks. That’s important to say because it seems like nowadays the larger the company, the more they are deliberately diluting the term “cloud” to stick their products under its aegis.  Microsoft isn’t doing that, this is a “cloud offering” in the classical (where classical means 2008, I guess) sense.

However, in a number of important ways it’s not like Amazon.  I’d definitely classify it as a PaaS offering.  You upload your code to “Roles,” which are basically containers that run your application in a Windows 2008(ish) environment.  (There are two types – a “Web role” has a stripped-down IIS provided on it, a “Worker role” doesn’t – that’s the only real difference between the two.)  You do not have raw OS access, and cannot do things like write to the registry.  But it is less restrictive than App Engine.  You can bundle up other stuff to run in Azure – even run Java apps using Apache Tomcat.  Whatever you want to run has to install “xcopy only” – in other words, no fancy installers; it needs to be something where you can just copy the files to a Windows PC, without administrative privilege, run a command from the command line, and have it work.  Luckily, Tomcat/Java fits that description.  They have helper packs to facilitate doing this with Tomcat, memcached, and Apache/PHP/MediaWiki.  At PDC they demoed Domino’s Pizza running their Java order app on it and a WordPress blog running on it.  So it’s not only for .NET programmers.  Managed code is easier to deploy, but you can deploy and run about anything that fits the “copy and run a command line” model.

I find this approach a little ironic actually.  It’s been a lot easier for us to get the Java and open source (well, the ones with Windows ports) parts of our infrastructure running on Azure than Windows parts!  Everybody provides Windows stuff with an installer, of course, and you can’t run installers on Azure.  Anyway, in its core computing model it’s like Google App Engine – it’s more flexible than that (good) but it doesn’t do automatic scaling (bad).  If it did autoscaling I’d be willing to say “It’s better than App Engine in every way.”

In other ways, it’s a lot like Amazon.  They offer a variety of storage options – blobs (like S3), tables (like SimpleDB), queues (like SQS), drives (like EBS), SQL Azure (like RDS).  They have an integral CDN.  They do hourly billing.  Pricing is pretty similar to Amazon – it’s hard to totally equate apples to apples, but Azure compute is $0.12/hr and an Amazon small Windows image compute is $0.12/hr (Coincidence?  I think not.).  And you have to figure out scaling and provisioning yourself on Amazon too – or pay a lot of scratch to one of the provisioning companies like RightScale.
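
Back-of-envelope on those identical rates, for one instance running flat out:

    hours_per_month = 24 * 30
    for offering, rate in [("Azure compute", 0.12),
                           ("EC2 small Windows", 0.12)]:
        print("%s: $%.2f/month" % (offering, rate * hours_per_month))
    # both print $86.40/month - which is why the match looks deliberate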

What’s Unique and Different

Well, the largest thing that I’ve already mentioned is the PaaS approach.  If you need OS level access, you’re out of luck;  if you don’t want to have to mess with OS management, you’re in luck!  So to the first order of magnitude, you can think of Azure as “like Amazon Web Services, but the compute uses more of a Google App Engine model.”

But wait, there’s more!

One of the biggest things that Azure brings to the table is that, using Visual Studio, you can run a local Azure “fabric” on your PC, which means you can develop, test, and run cloud apps locally without having to upload to the cloud and incur usage charges.  This is HUGE.  One of the biggest pains about programming for Amazon, for instance, is that if you want to exercise any of their APIs, you have to do it “up there.”  Also, you can’t move images back and forth between Amazon and on premise.  Now, there are efforts like EUCALYPTUS that try to overcome some of this problem but in the end you pretty much just have to throw in the towel and do all dev and test up in the cloud.  Amazon and Eclipse (and maybe Xen) – get together and make it happen!!!!
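
For the Amazon side, the Eucalyptus approach boils down to “same API, different endpoint.”  A sketch with boto – the host, port, and path here are illustrative of a typical Eucalyptus install, and credentials come from your boto config:

    import boto

    def ec2_connection(local=False):
        if local:
            # point the same client library at an on-premise cloud
            return boto.connect_ec2(host="euca.example.internal", port=8773,
                                    path="/services/Eucalyptus",
                                    is_secure=False)
        return boto.connect_ec2()  # the real AWS endpoint

    # the same code exercises the API locally or "up there"
    for image in ec2_connection(local=True).get_all_images():
        print(image.id)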

Here’s something else interesting.  In a move that seems more like a decision from a typical cranky cult-of-personality open source project, they have decided that proper Web apps need to be asynchronous and message-driven, and by God that’s what you’re going to do.  Their load balancers won’t do sticky sessions (only round robin) and time out all connections between all tiers after 60 seconds without exception.  If you need more than that, tough – rewrite your app to use a multi-tier message queue/event listener model.  Now on the one hand, it’s hard for me to disagree with that – I’ve been sweating our developers, telling them that’s the correct best-practice model for scalability on the Web.  But again you’re faced with the “Well what if I’m using some preexisting software and that’s not how it’s architected?” problem.  This is the typical PaaS pattern of “it’s great, if you’re writing every line of code yourself.”
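
The architecture they’re forcing, in miniature: the front tier enqueues and returns immediately, and a worker drains the queue on its own schedule, so no connection ever has to outlive the 60-second guillotine.  Sketched with a local in-process queue standing in for Azure Queues or the service bus:

    import queue
    import threading

    work = queue.Queue()

    def worker():
        while True:
            job = work.get()
            if job is None:
                break                # shutdown signal
            print("processed", job)  # the long-running part lives here
            work.task_done()

    threading.Thread(target=worker).start()

    work.put({"order": 42})  # the web tier hands off and returns immediately
    work.put({"order": 43})
    work.join()              # demo only - real tiers never block like this
    work.put(None)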

In many ways, Azure is meant to be very developer friendly.  In a lot of ways that’s good.  As a system admin, however, I wince every time they go on about “You can deploy your app to Azure just by right clicking in Visual Studio!!!”  Of course, that’s not how anyone with a responsibly controlled production environment would do it, but it certainly does make for fast easy adoption in development.   The curve for a developer who is “just” a C++/Java/.NET/whatever wrangler to get up and going on an IaaS solution like Amazon is pretty large comparatively; here, it’s “go sign up for an account and then click to deploy from your IDE, and voila it’s running on the Intertubes.”  So it’s a qualified good – it puts more pressure on you as an ops person to go get the developers to understand why they need to utilize your services.  (In a traditional server environment, they have to go through you to get their code deployed.)  Often, for good or ill, we use the release process as a touchstone to also engage developers on other aspects of their code that need to be systems engineered better.

Now, that’s my view of the major differences.  I think the usual Azure sales pitch would say something different – I’ve forgotten two of their huge differentiators, their service bus and access control components.  They are branded under the name “AppFabric,” which as usual is a name Microsoft is also using for something else completely different (a new true app server for Windows Server, including projects formerly code named Dublin and Velocity – think of it as a real WebLogic/WebSphere type app server plus memcache.)

Their service bus is an ESB.  As alluded to above, you’re going to want to use it to do messaging.   You can also use Azure Queues, which is a little confusing because the ESB is also a message queue – I’m not clear on their intended differentiation really.  You can of course just load up an ESB yourself in any other IaaS cloud solution too, so if you really want one you could do e.g. Apache ServiceMix hosted on Amazon.  But, they are managing this one for you which is a plus.  You will need to use it to do many of the common things you’d want to do.

Their access control – is a mess.  Sorry, Microsoft guys.  The whole rest of the thing, I’ve managed to cut through the “Microsoft acronyms versus the rest of the world’s terms and definitions” factor, but not here.   “You see, you use ACS’s WIF STS to generate a SWT,” says our Microsoft rep with a straight face.   They seem to be excited that it will use people’s Microsoft Live IDs, so if you want people to have logins to your site and you don’t want to manage any of that, it is probably nice.  It takes SAML tokens too, I think, though I’m not sure if the caveats around that end up equating to “Well, not really.”  Anyway, their explanations have been incoherent so far and I’m not smelling anything I’m really interested in behind it.  But there’s nothing to prevent you from just using LDAP and your own Internet SSO/federation solution.  I don’t count this against Microsoft because no one else provides anything like this, so even if I ignore the Azure one it doesn’t put it behind any other solution.

The Future

Microsoft has said they plan to add on some kind of VM/IaaS offering eventually because of the demand.  For us, the PaaS approach is a bit of a drawback – we want to do all kinds of things like “virus scan uploaded files,” “run a good load balancer,” “run an LDAP server”, and other things that basically require more full OS access.  I think we may have an LDAP direction with the all-Java OpenDS, but it’s a pain point in general.

I think a lot of their decisions that are a short term pain in the ass (no installers, nothing synchronous) are actually good in the long term.  If all developers knew how to develop async and did it by default, and if all software vendors, even Windows-based ones, provided their product in a form that could just be “copy and run without admin privs” to install, the world would be a better place.  That’s interesting in that “Sure it’s hard to use now but it’ll make the world better eventually” is usually heard from the other side of the aisle.

Conclusion

Azure’s a pretty legit offering!  And I’m very impressed by their velocity.  I think it’s fair to say that overall Azure isn’t quite as good as Amazon except for specific use cases (you’re writing it all in .NET by hand in Visual Studio) – but no one else is as good as Amazon either (believe me, I evaluated them) and Amazon has years of head start; Azure is brand new but already at about 80%! That puts them into the top 5 out of the gate.

Without an IaaS component, you still can’t do everything under the sun in Azure.  But if you’re not depending on much in the way of big third party software chunks, it’s feasible; if you’re doing .NET programming, it’s very compelling.

Do note that I haven’t focused too much on the attributes and limitations of cloud computing in general here – that’s another topic – this article is meant to compare and contrast Azure to other cloud offerings so that people can understand its architecture.

I hope that was clear.  Feel free to ask questions in the comments and I’ll try to clarify!


A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article “Golden Image or Foil Ball?”, I was thinking pretty hard about the use of images in our new automated infrastructure.  He’s pretty against them.  After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed – Starting a prebuilt image is faster than reinstalling everything on an empty one.  In the world of dynamic scaling, there’s a meaningful difference between a “couple minute spinup” and a “fifteen minute spinup.”
  2. Reliability – The more work you are doing at runtime, the more there is to go wrong.  I bet I’m not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of ’em.  And the more stuff you’re pulling to build your image, the more failure points you have.
  3. Flexibility – Dynamically building from stem cell kinda makes sense if you’re using 100% free open source and have everything automated.  What if, however, you have something that you need to install that just hasn’t been scripted – or is very hard to script?  Like an install of some half-baked Windows software that doesn’t have a command line installer and you don’t have a tool that can do it?  In that case, you really need to do the manual install in non-realtime as part of an image build.  And of course many suppliers are providing software as images themselves nowadays.
  4. Traceability – What happens if you need to replicate a past environment?  Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons.  “I keep a bunch of old software repo versions so I can mostly build a machine like it” – somewhat less so.

In the end, it’s a question of using intermediate deliverables.  Do you recompile all the code and every third party package every time you build a server?  No, you often use binaries – it’s faster and more reliable.  Binaries are the app guys’ equivalent of “images.”

To address Luke’s three concerns from his article specifically:

  1. Image sprawl – if you use images, you eventually have a large library of images you have to manage.  This is very true – but you have to manage a lot of artifacts all up and down the chain anyway.  Given the “manual install” and “vendor supplied image” scenarios noted above, if you can’t manage images as part of your CM system then it’s just not a complete CM system.
  2. Updating your images – Here, I think Luke makes some not entirely valid assumptions.  He notes that once you’re done building your images, you’re still going to have to make changes in the operational environment (“bootstrapping”).  True.  But he thinks you’re not going to use the same tool to do it.  I’m not sure why not – our approach is to use automated tooling to build the images – you don’t *want* to do it manually for sure – and Puppet/Chef/etc. works just fine to do that.  So if you have to update something at the OS level, you do that and let your CM system blow everything on top – and then burn the image (see the sketch after this list).  Image creation and automated CM aren’t mutually exclusive – the only reason people don’t use automation to build their images is the same reason they don’t always use automation on their live servers, which is “it takes work.”  But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it’s a good conservation of work to use the same package for both.  (Besides bootstrapping, there’s other stuff like moving content that shouldn’t go on images.)
  3. Image state vs running state – This one puzzles me.  With images, you do need to do restarts to pull in image-based changes.  But with virtually all software and app changes you have to as well – maybe not a “reboot,” but a “service restart,” which is virtually as disruptive.  Whether you “reboot your database server” or “stop and start your database server, which still takes a couple minutes,” you are planning for downtime or have redundancy in place.  And in general you need to orchestrate the changes (rolling restarts, etc.) in a manner that “oh, pull that change whenever you want to, Mr. Application Server” doesn’t really work for.
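
Here’s the bake cycle from point 2 above, sketched with boto against EC2.  The calls are from memory of boto’s EC2 API (run_instances, create_image), the AMI ID is a placeholder, and the converge step is whichever CM run you actually use – treat it as approximate:

    import boto

    ec2 = boto.connect_ec2()

    # 1. boot a stem cell from your base AMI
    reservation = ec2.run_instances("ami-XXXXXXXX", instance_type="m1.small")
    instance = reservation.instances[0]

    # 2. wait for it to come up, then let your CM tool converge it, e.g.
    #      ssh $host "puppet agent --test"   (or chef-client, etc.)

    # 3. burn the converged box into an image - the intermediate deliverable
    ami_id = ec2.create_image(instance.id, "appserver-baseline",
                              description="built by CM, not by hand")
    instance.terminate()
    print("burned", ami_id)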

In closing, I think images are useful.  You shouldn’t treat them as a replacement for automated CM – they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they’re more like Russian nesting dolls, and have advantages over starting from scratch with every box.


OpsCamp Debrief

I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focusing on the cloud, and it was a great time!  Here’s my after action report.

The event invite said it was in the Spider House, a cool local coffee bar/normal bar.  I hadn’t been there before, but other people who had been there said “That’s insane!  They’ll never fit that many people!  There’s outside seating but it’s freezing out!”  That gave me some degree of trepidation, but I still rolled out of bed in time to get downtown by 8 AM on a Saturday (sigh!).  Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free!  There were a lot of people there; we weren’t overfilling the place, but it was definitely at capacity – nearly 100 people in attendance.

I had just heard of OpsCamp through word of mouth, and figured it was just going to be a gathering of local Austin Web ops types.  Which would be entertaining enough, certainly.  But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows – CEOs and other high-ranking folks from various Web ops related tool companies.  Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies (creator of Puppet) from Reductive Labs, Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud.  Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).

You can read all the tweets about the event if you swing that way.

OpsCamp kinda grew out of an earlier thing, BarCampESM, also in Austin two years ago.  I never heard about that, wish I had.

How It Went

I had never been to an “unconference” before.  Basically there’s no set agenda, it’s self-emergent.  It worked pretty well.  I’ll describe the process a bit for other noobs.

First, there was a round of lightning talks.  Brett from Rackspace noted that “size matters,” Bill from Zenoss said “monitoring is important,” and Luke from Reductive claimed that “in 2-4 years ‘cloud’ won’t be a big deal, it’ll just be how people are doing things – unless you’re a jackass.”

Then it was time for sessions.  People got up, wrote a proposed session name on a piece of paper, went in front of the group, and pitched it; a hand-count of “how many people find this interesting” was taken.

Candidates included:

  • service level to resolution
  • physical access to your cloud assets
  • autodiscovery of systems
  • decompose monitoring into tool chain
  • tool chain for automatic provisioning
  • monitoring from the cloud
  • monitoring in the cloud – widely dispersed components
  • agent based monitoring evolution
  • devops is the debil – change to the role of sysadmins
  • And more

We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions.  They were:

  • monitoring in the cloud
  • config mgmt in the cloud

This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.

Sadly, the whole-group discussions, especially the monitoring one, were unfruitful.  For a long-ass time people threw out brilliant quips about “Why would you bother monitoring a server anyway” and other such high-theory wonkery.  I got zero value out of these, which was sad because the topics were crucially interesting – they were just too unfocused; you had people coming at the problem 100 different ways in sound bites.  The only note I bothered to write down was that “monitoring porn” (too many metrics) makes it hard to do correlation.  We had that problem here, and invested in a (horrors) non-open-source tool, OPNET Panorama, that has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.
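
Why “monitoring porn” breaks correlation, in one loop: pairwise comparisons grow with the square of the metric count, which is exactly the regime where you need an analytics engine rather than eyeballs.

    for n in (50, 1000, 10000):
        pairs = n * (n - 1) // 2
        print("%5d metrics -> %10d pairs to correlate" % (n, pairs))
    #    50 metrics ->       1225 pairs
    #  1000 metrics ->     499500 pairs
    # 10000 metrics ->   49995000 pairs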

Sessions

There were three sessions.  I didn’t take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛


Come To OpsCamp!

Next weekend, Jan 30 2010, there’s a Web Ops get-together here in Austin called OpsCamp!  It’ll be a Web ops “unconference” with a cloud focus.  Right up our alley!  We hope to see you there.


Cloud Headaches?

The industry is abuzz with people who are freaked out about the outages that Amazon and other cloud vendors have had.  “Amazon S3 Crash Raises Doubts Among Cloud Customers,” says InformationWeek!

This is because people are going into cloud computing with retardedly high expectations.  This year at Velocity, Interop, etc. I’ve seen people just totally in love with cloud computing – Amazon’s specifically but in general as well.  And it’s a good concept for certain applications.  However, it is a computing system just like every other computing system devised previously by man.  And it has, and will have, problems.

Whether you are using in-house systems, or a SaaS vendor, or building “in the cloud,” you have the same general concerns.  Am I monitoring my systems?  What is my SLA?  What is my recourse if my system is not hitting it?  What’s my DR plan?
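
“What is my SLA?” only becomes a useful question once you translate it into allowed downtime.  The arithmetic, for a 30-day month:

    month_minutes = 30 * 24 * 60  # 43,200 minutes
    for sla in (99.0, 99.9, 99.95, 99.99):
        allowed = month_minutes * (1 - sla / 100)
        print("%.2f%% uptime -> %.1f min/month of downtime" % (sla, allowed))
    # 99.00% -> 432.0   99.90% -> 43.2   99.95% -> 21.6   99.99% -> 4.3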

SaaS is a special case of cloud computing in general.  And if you’re a company relying on it, when you contract with a SaaS vendor you get SLAs established and figure out what the remedy is if they breach it.  If you are going into a relationship where you are just paying money for a cloud VM, storage, etc. and there is no enforceable SLA in the relationship, then you need to build the risk of likely and unremediable outages into your business plan.

I hate to break it to you, but the IT people working at Amazon, Google, etc. are not all that much smarter than the IT people working with you.  So an unjustified faith in a SaaS or cloud vendor – “Oh, it’s Amazon, I’m sure they’ll never have an outage of any sort – either across their entire system or localized to my part of it – and if they do I’m sure the $100/month I’m paying them will cause them to give a damn about me” – is an unreasonable expectation on its face.

Clouds and cloud vendors are a good innovation.  But they’re like every other computing innovation and vendor selling it to you.  They’ll have bugs and failures.  But treating them as if they won’t is a failure on your part, not theirs.
