Author Archives: Ernest Mueller

About Ernest Mueller

Ernest is the VP of Engineering at the cloud and DevOps consulting firm Nextira in Austin, TX.

Velocity and DevOpsDays

Two of the three agile admins – myself and Peco – will be in Santa Clara for Velocity next week, and DevOpsDays following in Mountain View.  If you’re going to be there (or lurk in the Silicon Valley area), feel free to ping us to meet up!

We’ll be blogging up all the fun from both events, though often we start out liveblogging, fall behind, and the final parts don’t come out till somewhat after the fact.  But, that’s life in the big city.

Leave a comment

Filed under Conferences, DevOps

Another CloudCamp Austin Wrapup

James already posted, but I took notes too, so here are my thoughts!

CloudCamp was a great time.  Dave Nielsen did a great job facilitating it.  Pervasive Software hosted the shindig.  It started with Mike Hoskins, Pervasive CTO, telling us about how they started an “innovation lab” to reinvigorate Pervasive after being in business for 25 years, and that led to their DataCloud2 product hosted on EC2.

Then there were three lightning talks.

Barton James, Dell cloud evangelist, talked about the continuum from traditional compute to private cloud to public cloud, and how the midsection of that curve will shift over time to center solidly over private cloud.  I think that’s accurate; all the data center nonsense of the last several years is certainly starting to convince us that you only want to manage hardware if there’s no other choice…   He talked about paths to the cloud – either starting with virtualization and then adding on capabilities until something is really cloud-ready, or just greenfielding something new (that’s what we’re doing!).  It was good; apparently Dell has thought more about the cloud since their original ill-conceived attempt to trademark it as a server name.

Oscar Padilla, a senior engineer with Pervasive, spoke about their path moving their existing software to the cloud (very interesting to us, since we’re doing the same) and the duality of being both a cloud consumer (Amazon IaaS) and a cloud provider (Pervasive’s SaaS product).  This is an increasingly common pattern; I’d say that being a SaaS provider and not using IaaS (unless you’re really huge) is likely a mistake on your part.  He also talked about the importance of adding an API so others can leverage your software – this is a huge point, and it’s bizarre to me that other people still aren’t getting this.

Finally, Walter Falk of IBM spoke about how the hybrid cloud is the bomb.  Hybrid cloud, or “cloud bursting,” is where you run your own nice and cheap local hardware for minimum loads and scale into the cloud for extra capacity.  He also showed a diagram indicating what kinds of workloads are low hanging fruit for cloudification (information intensive, isolated workloads, mature processes…  You’ve probably all seen the slide by now).  And he talked about how ecosystem is very important even for IBM – other people doing good stuff in the space.  “Go to ibm.com/cloud!”
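The “cloud bursting” arithmetic is simple enough to sketch.  Here’s a toy Python illustration (the capacity number is made up, and `placement` is a hypothetical name, not anything from IBM’s talk) of splitting load between cheap baseline hardware and burst capacity in the cloud:

```python
# Hypothetical numbers: a toy "cloud bursting" decision -- serve the
# baseline load on owned hardware, spill the overflow into the cloud.
LOCAL_CAPACITY = 40  # requests/sec our cheap local boxes can handle

def placement(load):
    """Split incoming load between local hardware and cloud burst capacity."""
    local = min(load, LOCAL_CAPACITY)
    burst = max(0, load - LOCAL_CAPACITY)
    return local, burst
```

The whole pitch is that you only pay cloud rates for the `burst` portion, which is zero most of the time.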

Then we did a little impromptu panel thing, where I and some other folks were drafted up to answer questions.  This revealed something interesting, which is that a LOT of the people there were apparently coming from the cloud provider point of view, and had questions about power consumption and what hypervisor options there are.  As an IaaS consumer/SaaS provider, my main input there is “I don’t want to care about all that nonsense, thus I use IaaS!”   I answered a question about “how to define PaaS,” but my response was not thrilling enough to relate here.

Next came the conference sessions – we did the normal unconference thing of random people writing down topics and doing shows of hands on who cares about that.  The ones that got the largest response were Application Architecture for the Cloud and Systems Management for Cloud Consumers (the latter was mine; the panel gave me the heads up that I’d best add “consumers” to the end of that to not get stuck in storage-container-datacenter hell).

I didn’t go to Application Architecture for the Cloud, but I spoke to our guys who did, and they did something that IMO should have been done in the larger group – some quick demographic voting!  Bill, one of our devs, tells me that the responses were:

  • What language are you using?  2/3 Java, 1/3 .NET.
  • What cloud are you using?  Vast majority Amazon (even among the .NETters), notable minority Azure, trace amounts of others.
  • Are you internal IT or product focused?  50/50 split.
  • Are you using noSQL stuff?  A small number.
  • Are you using Rails?  No.
  • Are you using SOA/SOAP stuff?  No.
  • Are you using memcache?  A couple are, but more are doing app level caching with JPA or whatnot.

James covered the goings-on in Systems Management for the Cloud well; besides the specific tool takeaways, I enjoyed the quote from one of the ServiceMesh guys: the practice of taking your traditional static infrastructure and just implementing it on the cloud, without rearchitecting to take advantage of its dynamic nature, is called “moving shit to shit.”  I was very impressed with the guys from ServiceMesh and from Pervasive that we met there; we’ve all already hooked up and done lunch to talk more.  All great guys doing some cutting edge stuff.

The last session was on Software to SaaS – taking existing software you sell for on-premise use and turning it into a cloud offering.  Phil Fritz from IBM broke a lot of it down very accurately – there are some challenges from the customer side (trust, opex vs. capex), but the vast majority of problems you face are internal.  And only a few of those internal issues are really technical in the “make it work in the cloud” sense; the rest are about metering, billing, the sales force not selling it because they don’t understand it or because it’s against their usual commission model, and forking of code and testing inefficiency (IBM has a strict rule that there’s not a separate SaaS branch of the software – you have to fold fixes into trunk – which is extremely wise).  This is all very good stuff – our main issues with bringing SaaS to market similarly haven’t been on the technical side; they’ve been the product marketers’ doubt, the “it’s not supported” in our ERP/billing system, sales and support staff education…

Then there was a wrapup, but it was like 10 at night on a weeknight so most of the norms had cleared out already.

In closing, it was an awesome event and we made some great contacts for further discussion.  Thanks to Dave and Pervasive for bringing CloudCamp to Austin, and I hope to see another soon!

Filed under Cloud

F5 On DevOps and WordPress Outages

Lori MacVittie has written a very interesting post on the F5 blog entitled “Devops: Controlling Application Release Cycles to Avoid the WordPress Effect.”

In it, she analyzes a recent WordPress outage and how “feathered” releases can help mitigate impact in multitenant environments.  She specifically talks about how DevOps is one of the keys to accomplishing these kinds of schemes, which require both apps and systems to honor them.

Organizations that encourage the development of a devops role and discipline will inevitably realize greater benefits from virtualization and cloud computing because the discipline encourages a broader view of applications, extending the demesne of application architecture to include the application delivery tier.

Nice!  In my previous shop we didn’t use F5s, we used Netscalers, but there was the same interesting divide in that though they were an integral part of the application, they were seen as “Infrastructure’s thing.”  Apps weren’t cognizant of them and whenever functionality needed to be written against them (like cache invalidation when new content was published) it fell to us, the ops team.  And to be honest we discouraged devs from messing with them, because they always wanted some ill-advised new configuration applied when they did. “Can’t we just set all the timeouts to 30 minutes?”

But in the newer, friendlier world of DevOps coordination, traditionally “infrastructure” tools like app delivery stuff, monitoring, provisioning, etc. need to be collaboration areas, where code needs to touch them (though in a way Ops can support…).  Anyway, it’s a great article – go check it out.

Filed under DevOps

Why A HTTP Sniffer Is Awesome

While looking at Petit for my post on log management tools, I was thrilled to see it link to a sniffer that generates Web-type logs, called Justniffer.  “Why,” you might ask, “isn’t that a pretty fringe thing?”  Well, settle in while I tell you why it’s bad ass.

We used to run a Web analytics product here called NetGenesis.  Like all very old Web analytics products, it relied on you to gather together all your log files for it to parse, resulting in error-prone nightly cronjob kinds of nonsense.  So they came out with a network sniffer that logged in Apache format, like this does, apparently.  It worked great and got the info in realtime (as long as the network admins didn’t mess up our network taps, which did happen from time to time).

I quickly realized this sniffer was way better than log aggregation, especially because my environment had all kinds of weird crap, like Domino Web servers and IIS5, that don’t log in a civilized manner.  And since it sat between the Web servers and the client, it could log “client time” and “server time”, and had a special “900” error code for client aborts/timeouts.  I self-implemented what would be a predecessor to today’s RUM tools like Tealeaf and Coradiant on it.  We used it to do realtime traffic analysis and cross-site reporting, and even used it for load testing, as we’d transform and replay the captured logs against test servers.  Using it also helped us understand the value of the Steve Souders front end performance stuff when he came around.

Eventually our BI folks moved to a JavaScript page tag based system, which is the modern preference in Web analytics systems.  Besides the fact that these schemes only get pages that can execute JS, and not all the images and other assets, we discovered that they were noticeably flawed and were losing about 10% of the traffic that we were seeing in the network sniffer log.  After a long and painful couple of months, we determined that the lost traffic was from no known source and happened with other page tag based systems (Google Analytics, etc.), not just this supplier’s tool, and the BI folks finally just said “Well…  It gives us pretty clickstreams and stuff, let’s go ahead with it.”  Sadly that sunset our use of the NetGenesis network sniffer, and there wasn’t another like it in the open source realm (I looked).  Eventually we bought a Coradiant to do RUM (the sales rep kept trying to explain this “new network RUM concept” to us and kept being taken aback at how advanced the questions we asked were), but I missed the accessibility of my sniffer log…  Big log aggregators like Splunk help fill that gap somewhat, but sometimes you really want to grep|cut|sort|uniq the raw stuff.
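To show what that grep|cut|sort|uniq workflow looks like against raw Apache-format sniffer output, here’s a minimal Python sketch that breaks down status codes and surfaces the most-hit 404s (the sample log lines and function names are illustrative, not any particular tool’s output):

```python
import re
from collections import Counter

# Apache common log format: host ident user [time] "request" status bytes
LOG_RE = re.compile(r'\S+ \S+ \S+ \[.*?\] "(\S+) (\S+) \S+" (\d{3}) \S+')

def summarize(lines):
    """Count status codes and collect the most-requested 404 paths."""
    statuses, missing = Counter(), Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip lines that aren't in the expected format
        method, path, status = m.groups()
        statuses[status] += 1
        if status == "404":
            missing[path] += 1
    return statuses, missing

if __name__ == "__main__":
    sample = [
        '10.0.0.1 - - [01/Jun/2010:12:00:00 -0500] "GET /index.html HTTP/1.1" 200 512',
        '10.0.0.2 - - [01/Jun/2010:12:00:01 -0500] "GET /logo.gif HTTP/1.1" 404 0',
    ]
    statuses, missing = summarize(sample)
    print(statuses.most_common())   # status code breakdown
    print(missing.most_common(10))  # worst broken links
```

Feed it a day of sniffer log and you have a broken-link report without touching a Web analytics product.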

On the related topic of log replayers, we have really wanted one for a long time.  No one has anything decent.  We’ve bugged every supplier that we deal with on any related product, from RUM to load testing to whatever.  Recording a specific transaction and using that is fine, but nothing compares to the demented diversity of real Internet traffic.  We wrote a custom replayer for our sniffer log, although it didn’t do POST (it didn’t capture payloads – looks like Justniffer can, though!), and got a lot of mileage out of it.  Found a lot of app bugs before going to production with that baby.  Anyway, none of the suppliers can figure it out (Oracle just put together a DB traffic version of this in their new version 12, though).  Now that there’s a sniffer we can use, and we already have a decent replayer, we’re back in business!  So I’m excited; it’s a blast from the past, but also one of those core little things that you can’t believe there isn’t one of, and that empowers someone to do a whole lot of cool stuff.
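To make the replayer idea concrete, here’s a minimal GET/HEAD-only sketch in Python, roughly along the lines of our custom one (the names and target URL are hypothetical; like ours, it skips POSTs because the plain log format doesn’t capture payloads):

```python
import re
import urllib.request
from urllib.error import URLError

# Pull "METHOD /path" out of Apache-format access log lines.
REQUEST_RE = re.compile(r'"(GET|HEAD) (\S+) HTTP/[\d.]+"')

def requests_from_log(lines):
    """Yield (method, path) for each replayable request in the log."""
    for line in lines:
        m = REQUEST_RE.search(line)
        if m:
            yield m.group(1), m.group(2)

def replay(base_url, lines):
    """Re-issue each logged GET/HEAD against a test server.
    POSTs are skipped since this log format doesn't capture payloads."""
    for method, path in requests_from_log(lines):
        req = urllib.request.Request(base_url + path, method=method)
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(method, path, resp.status)
        except URLError as e:
            print(method, path, "FAILED:", e)

# e.g. replay("http://testserver.example.com", open("access.log"))
```

Transforming the log first (rewriting hosts, scaling timestamps) is where most of the real value came from for us.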

Filed under DevOps

Good DevOps Discussions

An interesting point and great discussion on “what is DevOps”, including a critique about it not including other traditional Infrastructure roles well, on Rational Survivability (heh, we’re using the same blog theme.  I feel like a girl in the same dress as another at a party).  It seems to me that some of those complaining about DevOps – only a little here, but a lot more from Andi Mann, Ubergeek – seem to think DevOps is some kind of developer power play to take over operations.  At least from my point of view (an ops guy driving a devops implementation in a large organization), that is absolutely not the case.  It seems to me to be a case of over-touchiness based on the explicit and implicit critique of existing Infrastructure processes that DevOps represents.  Which is natural; agile development had/has the exact same challenge.

Note that DevOps is starting to get more press; here’s a cnet article talking about DevOps and the cloud (two great tastes that taste great together…).

And here’s a bonus slideshare presentation on “From Agile Development to Agile Operations” that is really good.

2 Comments

Filed under DevOps

Log Management Tools

We’re researching all kinds of tools as we set up our new cloud environment, I figure I may as well share for the benefit of the public…

Most recently, we’re looking at log management.  That is, a tool to aggregate and analyze your log files from across your infrastructure.  We love Splunk and it’s been our tool of choice in the past, but it has two major drawbacks.  One, it’s  quite expensive. In our new environment where we’re using a lot of open source and other new-format vendors, Splunk is a comparatively big line item for a comparatively small part of an overall systems management portfolio.

Two, which is somewhat related: it’s licensed by the amount of logs it processes per day.  Which is a problem, because when something goes wrong in our systems, it tends to cause logging levels to spike up.  In our old environment, we kept having to play this game where an app would get rolled to production with debug on (accidentally or deliberately), or just be logging too much, or be having a problem causing it to log too much, and then we’d have to blacklist it in Splunk so it didn’t run us over our license and cause the whole damn installation to shut off.  It took an annoying amount of micromanagement for this reason.
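That micromanagement can at least be partly automated.  Here’s a hedged sketch of the idea – the quota numbers, function names, and file layout are all made up (this is not Splunk’s actual license mechanism, just a watchdog you might bolt on beside it) – that flags the noisiest log sources before they blow the daily quota:

```python
import os
from collections import Counter

DAILY_QUOTA_BYTES = 10 * 1024**3  # hypothetical 10 GB/day license
WARN_FRACTION = 0.8               # start worrying at 80% of quota

def bytes_logged_today(paths):
    """Size of each log file (assumes files are rotated daily)."""
    sizes = Counter()
    for path in paths:
        if os.path.exists(path):
            sizes[path] = os.path.getsize(path)
    return sizes

def check_quota(sizes, quota=DAILY_QUOTA_BYTES, warn=WARN_FRACTION):
    """Return candidate sources to blacklist before the license trips."""
    total = sum(sizes.values())
    if total < quota * warn:
        return []  # still comfortably under quota
    # Largest sources first -- the app someone left in debug mode
    # is usually sitting at the top of this list.
    return [path for path, _ in sizes.most_common(3)]
```

Run it from cron and page someone with the offender list instead of discovering the shutoff after the fact.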

Other than that, Splunk is the gold standard; it pulls anything in, graphs it, has Google-like search, dashboards, reports, alerts, and even crazier capabilities.

Now on the “low end” there are really simple log watchers like swatch or logwatch.  But we’d really like something that will aggregate ALL our logs (not just syslog stuff using syslog-ng – app server logs, application logs, etc.), ideally from both UNIX and Windows systems, and make them usefully searchable.  Trying to make everything and everyone log using syslog is an ever-receding goal.  It’s a fool’s errand.
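For a concrete picture of the “aggregate ALL our logs” goal, here’s a toy Python sketch of tailing several files into one tagged stream – roughly what a real log forwarder does, minus rotation handling, checkpointing, and Windows event log support (all names here are illustrative):

```python
import time
from pathlib import Path

def follow(paths, poll=1.0):
    """Tail several log files at once, yielding (source, line) tuples --
    a toy version of what a real forwarder agent does."""
    offsets = {p: 0 for p in paths}  # bytes already emitted per file
    while True:
        emitted = False
        for p in paths:
            f = Path(p)
            if not f.exists():
                continue
            data = f.read_bytes()[offsets[p]:]  # only the new bytes
            if data:
                offsets[p] += len(data)
                for line in data.decode(errors="replace").splitlines():
                    yield p, line
                emitted = True
        if not emitted:
            time.sleep(poll)  # nothing new anywhere; back off

# e.g.: for src, line in follow(["/var/log/app.log", "/var/log/httpd/access_log"]):
#           print(src, line)
```

Tagging every line with its source is the part that makes the aggregate stream searchable later.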

There are the big appliance vendors on the “high end,” like LogLogic and LogRhythm, but we looked at them when we looked at Splunk, and they are not only expensive but also seem to be “write-only solutions” – they aggregate your logs to meet compliance requirements and do some limited pattern matching, but they don’t put your logs to work to help you in your actual work of application administration the dozen ways Splunk does.  At best they are “SIEMs” – security information and event managers – that alert on naughty intruders.  But with Splunk I can do everything from generating a report of 404s to send to our designers to fix their bad links/missing images, to graphing site traffic, to making dashboards for specific applications for their developers to review.  Plus, as we’re doing this in the cloud, appliances need not apply.  (Ooo, that’s a catchy phrase, I’ll have to use that for a separate post!)

I came across three other tools that seem promising:

  • Logscape from Liquidlabs – does graphing and dashboards like Splunk does.  And “live tail” – Splunk mysteriously took this out when they revved from version 3 to 4!  Internet rumor is that it’s a lot cheaper.  Seems like a smaller, less expensive Splunk, which is a nice thing to be, all considered.
  • Octopussy – open source and Perl based (might work on Windows but I wouldn’t put money on it).  Does alerting and reporting.  Much more basic, but you can’t beat the price.  Don’t think it’ll meet our needs though.
  • Xpolog – seems nice, and kinda like Splunk.  Most of the info I can find on it, though, is “What about xpolog, is good!” comments appended to every forum thread/blog post about Splunk I can find, which is usually a warning sign – that kind of guerrilla marketing gets old quick, IMO.  One article mentions looking into it and finding it more expensive, but with some nice features like autodiscovery, though not as open as Splunk.

Anyone have anything to add?  Used any of these?  We’ve gotten kind of addicted to having our logs be immediately accessible, converted into metrics, etc.  I probably wouldn’t even begrudge Splunk the money if it weren’t for all the micromanagement you have to put into running it.  It’s like telling the fire department “you’re licensed for a maximum of three fires at a time” – it verges on irresponsible.

21 Comments

Filed under DevOps

Automated Testing Basics

Everyone thinks they know how to test… Most people don’t.  And most people think they know what automated testing is, it’s just that they never have time for it.

Into this gap steps an interesting white paper, The Automated Testing Handbook, from Software Test Professionals, an online SW test community.  You have to register for a free community account to download it, but I did, and it’s a great 100-page conceptual introduction to automated testing.  We’re trying to figure out testing right now – it’s for our first SaaS product, so that’s a challenge, and also because of the devops angle we’re trying to figure out “what does ‘unit test’ mean for an OS build or a Tomcat server before it gets apps deployed?”  So I was interested in reading a basic theory paper to give me some grounding; I like doing that when engaging in a new field, rather than just hopping from specific to specific.

Some takeaways from the paper:

  • Automated testing isn’t just capture/replay, that’s unmaintainable.
  • Automated testing isn’t writing a program to test a program, that’s unscalable.
  • What you want is a framework to automate the process around the testing.  Some specific tests can/should be automated and others can’t/shouldn’t.  But a framework helps you with both, and lets you make more reusable tests.

Your tests should be version controlled, change controlled, etc. just like software (testware?).

Great quote from p.47:

A thorough test exercises more than just the application itself: it ultimately tests the entire environment, including all of the supporting hardware and surrounding software.

We are having issues with that right now in our larval SaaS implementation – our traditional software R&D folks historically could just do their own narrow-scope tests and say “we’re done.”  Now that we’re running this as a big system in the cloud, we need to run integration tests including the real target environment, and we’re trying to convince them that’s something they need to spend effort on.

They go on to mention major test automation approaches, including capture/replay, data-driven, and table-driven – their pros and cons and even implementation tips.

  1. Capture/replay is basically recording your manual test so you can do it again a lot.  Which makes it helpful for mature and stable apps and also for load testing.
  2. Data driven just means capture/replay with varied inputs.  It’s a little more complicated but much better than capture/replay, where you either don’t adequately test various input scenarios or you have a billion recorded scripts.
  3. Table driven is where the actions themselves are defined as data as well – one step short of hardcoding a test app.  But (assuming you’re using a tool that can work on multiple apps) it’s portable across apps…

I guess writing pure code to test is an unspoken #4, but unspoken because she (the author) doesn’t think that’s a really good option.
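The data-driven and table-driven approaches can be sketched in a few lines of Python.  Here both the actions and the expected results live in a table, so adding a case means adding a row, not writing a new script (the `calc` function is a hypothetical stand-in for whatever application you’re testing):

```python
def calc(op, a, b):
    """Toy application under test -- stands in for the real app."""
    return {"add": a + b, "sub": a - b, "mul": a * b}[op]

# Table-driven: each row is (action, inputs..., expected result).
CASES = [
    ("add", 2, 3, 5),
    ("sub", 10, 4, 6),
    ("mul", 3, 3, 9),
]

def run_table(cases):
    """Run every row against the app; return the rows that failed."""
    failures = []
    for op, a, b, expected in cases:
        got = calc(op, a, b)
        if got != expected:
            failures.append((op, a, b, expected, got))
    return failures
```

The same `run_table` framework survives as the app changes; only the table grows, which is the maintainability win the paper is driving at.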

There’s a bunch of other stuff in there too, it’s a great introduction to the hows and whys and gotchas of test automation.  Give it a read!

Filed under DevOps

Velocity and DevOpsDays!

A double threat is coming your way.  Velocity 2010, the Web performance and operations conference, is June 22-24 in Santa Clara, CA.  As one of the very few conventions targeted at our discipline, we’ve been attending since the first one in 2008.  And this time, there’s dessert – the day after it ends, a new DevOps unconference, DevOpsDays 2010, will be held nearby in Mountain View!

OpsCamp Austin kicked ass, and I’m sure this will be even better.  So come double up on Ops knowledge and meet other right-thinking individuals.

If you want to read all my musings from the previous Velocity conferences, you can do that too!

Filed under DevOps

CloudCamp Austin Is Soon!

Mark your calendars; Thursday of next week (June 10) is CloudCamp here in Austin!  It’s in North Austin at Pervasive’s offices (Riata Trace) from 5:30-10:00 PM.  Get details and sign up here.

Filed under Cloud

Busting the Myths of Agile Development: What People are Really Doing

I just watched a good Webcast from an IBM agile expert about the state of Agile in the industry, and it had some interesting bits that touch upon agile operations.

Webcast – (registration required, sadly)

It’s by Scott Ambler, IBM’s practice leader for agile development.  They surveyed programmers using Dr. Dobbs’ “IT State of the Union” survey, which has wide reach across the world and types of programmers; the data in this presentation comes from that and other surveys.  All their surveys and results and even detailed response data (they’ve done a lot over time) are online for your perusal.

In the webcast, he talks some about “core” agile development extending to “disciplined” agile development, which extends to address the full system lifecycle, is both risk and value driven, and has governance and standards.  “Core” begs the questions of “where do requirements and architecture come from?” and “how do I get into production?”   Some agile folks who consider themselves purists say these aren’t needed and you should just start coding; he calls this view “phenomenally naive.”

He does mention some times where things like enterprise architecture were slow and introduced huge delays into the process because it took months to do reviews and signoffs.  He calls this “dysfunctional,” but really, isn’t that just the way things are unless there’s a pattern to change it?  I think project management and enterprise architecture are suffering from the same problem we operations folks are, which is that we’re just now figuring out “what does it mean to incorporate agile concepts into what we do?”

The meat of the preso covers agile team metrics and busting myths about “it’s only for small colocated teams…”  Here are my summary notes.

  • Agile teams have a higher success rate than traditional teams.  Agile and iterative roughly tie, and come in with much higher (2-4x) results on quality, functionality, cost efficiency, and timeliness than traditional or ad hoc processes.
  • Agile is not just for coding, but for project selection/initiation, transition, ops, and maintenance.  More and more folks are doing that, but some get stuck doing agile only in the coding and not the surrounding parts of the lifecycle.
  • Agile is being used by co-located teams only 45% of the time; all the rest are distributed in some manner.  But that does affect success some – “far located” teams have 20% lower success than fully co-located ones.  Don’t distribute if you don’t have to.
  • Most orgs are using agile with small teams; the vast majority are of size 10 or less.  But they see success with even the very large teams.
  • We’re seeing people succeed with agile under compliance frameworks like Sarbox, and governance frameworks like ISO – even ITIL is mentioned.
  • Agile isn’t just for simple projects – the real mix in the wild is actually weighted toward medium to very complex projects.
  • Though agile is great for greenfield projects, a very large percentage of teams are using it in legacy environments.  COTS development is the rarest.
  • 32% of successful agile teams are working with enterprise architecture and operations teams.  Should be more, but that’s a significant inroad.  He says those teams are also most successful when behaving agile (or at least lean).
  • Biggest problems with agile adoption are a waterfall culture (especially one where the overall governance everyone has to plug into is tuned to waterfall) and stakeholder involvement.  Testers say “We need a detailed spec before we can start testing…”  DBAs say “Developers can’t code until we have a complete data model…”  Management resistance is actually the lowest obstacle (14% of respondents)!

A lot of nice stats.  The two biggest takeaways are “agile isn’t just for certain kinds of projects; it’s being used for more than that and is successful in many different areas” and “agile is for the entire lifecycle, not just coding.”  As an advocate of agile systems administration, I think that’s a good sign that the larger agile community is wandering our way, as we build up our conception of DevOps and wander their way ourselves!

Filed under DevOps