Monthly Archives: November 2011

Why Your Testing Is Lying To You

This is a follow-on to Why Your Monitoring Is Lying To You. How is it that you can have an application go through a whole test phase, with two-day-long load tests, and still hit surprising errors when you go to production? Well, here’s how. The same application I describe in the case study part of the monitoring article slipped through testing as well and almost went live with some issues. How, oh how could this happen?

I Didn’t See Any Errors!

Our developers quite reasonably said, “But we’ve been developing and using this app in dev and test for months and haven’t seen this problem!” But consider the effects at work in But, You See, Your Other Customers Are Dumber Than We Are. There are effects at a variety of levels that prevent you from seeing intermittent problems, and confirmation bias ends up taking care of the rest.

The only fix here is rigor. If you hit your application in test and it errors, you can’t just ignore it. “I hit reload, it worked. Maybe they were redeploying. On with life!” Maybe it’s your layer, maybe it’s another layer – it doesn’t matter, you have to log it as a bug and follow up, not just close the bug as “not reproducible” because you didn’t see it yourself in 5 minutes of trying. Devs sometimes get frustrated with us when we won’t let up on occurrences of transient errors, but if you don’t know why they happened and haven’t done anything to fix it, then it’s just a matter of time before it happens again, right?

We have a strict policy that every error is a bug, and if the error wasn’t detected, it is multiple bugs – a bug with the monitoring, a bug with the testing, etc. If there was an error but “you don’t know why,” then you aren’t logging enough or don’t have the appropriate tools in place, and THAT’S a bug.

Our Load Test/Automated Tests Didn’t See Any Errors!

I’ll be honest, we don’t have much in the way of automated testing in place. So there’s that. But we do have long load tests we run. “If there are intermittent failures, they would have turned up over a two-day load test, right?” Well, not so fast. How confident are you that this error is visible to and detected by your load test? I have seen MANY load test results in my lifetime where someone was happily measuring the response time of what turned out to be 500 errors. “Man, my app is a lot faster this time! The numbers look great! Wait… It’s not even deployed. I hit it manually and I get a Tomcat page.”

Often we build deliberate “lies” into our software. We throw “pretty” error pages that don’t look like errors. We are trying not to leak information to customers, so we bowdlerize failures on the front end. We retry maniacally in the face of failed connections, but don’t log it. We have to use constrained sets of return codes because the client consuming our services (like, say, Silverlight) is lobotomized and doesn’t savvy HTTP 401 or other such fancy-schmancy codes.

Be careful that load tests and automated tests are correctly interpreting responses.  Look at your responses in Fiddler – we had what looked to the eye to be a 401 page that was actually passing back a 200 HTTP return code.
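
To make that concrete, here’s a minimal sketch of the kind of check I mean – in Python with the requests library, with a made-up URL and made-up expected content – that trusts the status code AND the payload instead of just “a page came back”:

    # Minimal automated check: verify the status code and the payload, not just
    # that something responded. The URL and expected content are made up.
    import requests

    def check_quote_page(url="https://myapp.example.com/quote"):
        resp = requests.get(url, timeout=10, allow_redirects=False)
        # A "pretty" error page often comes back as HTTP 200 - check both.
        assert resp.status_code == 200, "expected 200, got %d" % resp.status_code
        assert "Instant Quote" in resp.text, "got a 200 but the real content is missing"
        assert "error" not in resp.text.lower(), "got a 200 wrapping an error page"
        return resp.elapsed.total_seconds()  # only time responses you've verified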

The best fix for this is test-driven development. Run the tests first so you see them fail, then write the code so you see them pass! Tests are code, and if you only write them against your already-working code, you’re never really sure they’ll fail when something’s bad!

Fault Testing

Also, you need to perform positive and negative fault testing. Test failures end to end, including monitoring and logging and scaling and other management stuff. At the far end of this you have the cool if a little crazy Chaos Monkey.  Most of us aren’t ready or willing to jack up our production systems regularly, but you should at least do it in test and verify both that things work when they should and that they fail and you get proper notification and information if they do.

Try this. Have someone Chaos Monkey you by breaking something at random – turning off a database, making a file system read-only, blocking a back-end Web service call. If you have redundancy built in to counter this, great – try it with one and watch the failover, but then have them break “all of it” to provoke a real failure. Do you see the failure and get alerted? More importantly, do you have enough information to tell what they broke? “One of the four databases we connect to” is NOT an adequate answer. Have someone break it, send you all the available logs and info, and if you can’t immediately pinpoint the problem, fix that.
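
If you want to script part of that drill, here’s a rough sketch of a negative fault test in Python with pytest – the app module, the quotes-db dependency, and the function names are all hypothetical stand-ins – where the test asserts both that the failure is handled and that the log says which back end broke:

    # Rough negative fault test: simulate one dependency dying and verify the
    # failure is both handled and attributable. "app", app.db.query, and
    # app.get_quote are hypothetical stand-ins for your own code.
    import logging
    import app  # hypothetical module under test

    def test_db_outage_is_detected_and_attributed(monkeypatch, caplog):
        def dead_db(*args, **kwargs):
            raise ConnectionError("simulated outage of quotes-db")
        monkeypatch.setattr(app.db, "query", dead_db)

        with caplog.at_level(logging.ERROR):
            result = app.get_quote(part_number="12345")

        # The app should fail gracefully, not pretend everything is fine...
        assert result.status == "error"
        # ...and the log should say WHICH back end broke, not just "an error occurred."
        assert any("quotes-db" in record.getMessage() for record in caplog.records)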

How Complex Systems Fail, Invisibly

In the end, a lot of this boils down to How Complex Systems Fail. You can have failures at multiple levels – and not really failures, just assumptions – that stack on top of each other to both generate failures and prevent you from easily detecting those failures.

Also consider that you should be able to see those “short of failure” errors. If you’re failing over, or retrying, or whatnot – well, it’s great that you’re not down, but don’t you think you should know if you’re having to fail over 100x more this week? Log it and turn it into a metric. On our corporate Web site there are hundreds of thousands of Web pages, so a certain level of 404s is expected. We don’t alert anyone on a 404. But we do metricize it and trend it and take notice if the number spikes up (or down – where’d all that bad content go?).
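
As a concrete (if simplified) example of “metricize it and trend it,” here’s roughly what an hourly 404 trend check could look like in Python – the access log format and the 3x spike threshold are just illustrative assumptions:

    # Back-of-the-envelope 404 trending: count this hour's 404s and compare to a
    # rolling baseline. The log format regex and the 3x threshold are examples.
    import re
    from collections import deque

    STATUS_RE = re.compile(r'" (\d{3}) ')  # status code field in a combined-format access log

    def count_404s(access_log_path):
        count = 0
        with open(access_log_path) as f:
            for line in f:
                match = STATUS_RE.search(line)
                if match and match.group(1) == "404":
                    count += 1
        return count

    def looks_like_a_spike(current, history, factor=3.0):
        """Flag if this hour's count is way above (or below) the recent average."""
        if not history:
            return False
        average = sum(history) / float(len(history))
        return current > average * factor or current < average / factor

    hourly_counts = deque(maxlen=24)  # keep the last day's worth of hourly counts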

Wholesale failures are easy to detect and mitigate. It’s the mini-failures – or things that someone would argue are not failures – on a given level that line up with the same kinds of things on all the other layers, and those lined-up holes start letting problems slip through.

http://www.codinghorror.com/blog/2011/04/working-with-the-chaos-monkey.html

Filed under DevOps

Turbo-Charging Your Product Development

Bruce Rayner from Enterprise Efficiency did a nice writeup on how we leveraged the IT expertise within our company to jump-start our SaaS product development here at NI. Everyone on our team except our manager worked at some point within NI IT, but we are now doing product development in R&D. I know that to smaller companies this doesn’t seem like such a big deal, but in larger companies it can be hard to maintain a culture where you really can decide what you want to do, establish its priority, and then put the right kind of resources on it despite organizational boundaries. Organizational agility FTW!

Filed under Cloud

Why Your Monitoring Is Lying To You

In my Design for Failure article, I mentioned how many of the common techniques we use to allegedly detect failure really don’t.  This time, we’ll discuss your monitoring and why it is lying to you.

Well, you have some monitoring, don’t you? Couldn’t it tell you if an application is down? Obviously not if you are just doing old SNMP/box-level monitoring, but you’re all DevOps and you know you have to monitor the applications, because that’s what counts. But even then, there are common antipatterns to be aware of.

Synthetic Monitoring

Dirty secret time, most application monitoring is “synthetic,” which means it hits a specific URL or set of URLs once in a while, often 5-10 minutes apart. Also, since there are a lot of transient failures out there on the Internet, most ops groups have their monitors set where they have to see 2-5 consecutive failures – because ops teams don’t like being woken up at 3 AM because an application hiccuped once (or the Internet hiccuped on the way to the application). If the problem happens on only 1 of every 20 hits, and you have to see three errors in a row to alert, then I’ll leave it to your primary school math skills to determine how likely it is you’ll catch the problem.
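
To put rough numbers on it (back-of-the-envelope, assuming each synthetic check independently hits the bug at the same 1-in-20 rate real users see):

    # Back-of-the-envelope: a 1-in-20 intermittent failure vs. a synthetic monitor
    # that checks every 10 minutes and alerts on 3 consecutive failures.
    p_fail = 1 / 20.0                    # chance any single check hits the bug
    p_alert = p_fail ** 3                # 3 bad checks in a row: 0.000125
    checks_per_day = 24 * 60 / 10.0      # one check every 10 minutes = 144 a day
    alerts_per_day = checks_per_day * p_alert
    print(p_alert, alerts_per_day, 1 / alerts_per_day)
    # ~0.000125, ~0.018 alerts a day - roughly one page every two months, while
    # real users are hitting the error on 1 of every 20 requests the whole time.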

You can improve on this a little bit, but in the end synthetic monitoring is mainly useful for coarse uptime checking and performance trending.

Metric Monitoring

OK, so synthetic monitoring is only good for rough up/down stuff, but what about my metric monitoring? Maybe I have a spiffier tool that is continuously pulling metrics from Web servers or apps that should give me more of a continuous look.  Hits per second over the last five minutes; current database space, etc.

Well, I have noticed that metric monitors, with startling regularity, don’t really tell you if something is up or down, especially historically. If you pull current database space and the database is down, you’d think there would be a big nasty gap in your chart, but many tools don’t do that – either they report the last value seen, or if it’s a timing report they happily report the timing of the errors. Unless you go to the trouble to say “if the thing is down, set a value of 0 or +infinity or something,” you can have a failure, then go back and look at your historical graphs and see no sign there was anything wrong.
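
One way to keep the graphs honest – a sketch only, with push_metric() and get_db_free_space() as hypothetical stand-ins for whatever your collection tooling actually is – is to make the poll failure itself a data point:

    # Sketch: when the metric poll itself fails, push an obviously-wrong sentinel
    # value instead of silently skipping the data point. push_metric() and
    # get_db_free_space() are hypothetical stand-ins for your own tooling.
    import logging

    log = logging.getLogger("myapp.monitoring")
    SENTINEL = -1  # an impossible value - make the graph scream instead of look normal

    def poll_database_space():
        try:
            free_mb = get_db_free_space()          # however you actually collect it
            push_metric("db.free_space_mb", free_mb)
        except Exception as e:
            # The failure IS the data point - don't let the chart paper over it.
            push_metric("db.free_space_mb", SENTINEL)
            push_metric("db.poll_failures", 1)
            log.error("database space poll failed: %s", e)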

Log Monitoring

Well surely your app developers are logging if there’s a failure, right? Unfortunately logging is a bit of an art, and the simple statement “You should log the overall success or failure of each hit to your app, and you should log failures on any external dependency” can be… reinterpreted in many ways. Developers sometimes don’t log all the right things, or even decide to suppress certain logs.

You should always log everything. Log it at a lower log level, like INFO, if it’s routine, but then at least it can be reviewed if needed and can be turned into a metric for trending via tools like Splunk. My rules are simple (there’s a rough sketch of what they look like in code after the list):

  • Log the start and end of each hit – are you telling the client success or failure? Don’t rely on the Web server log.
  • Log every single hit to an external dependency at INFO
  • Log every transient failure at WARN
  • Log every error at ERROR
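
Here’s roughly what those rules look like in practice, using Python’s standard logging module – the request handler, the inventory service dependency, and the exception type are hypothetical examples, not our actual app:

    # Rough sketch of the logging rules above with Python's stdlib logging.
    # handle_quote_request(), call_inventory_service(), build_quote(), and
    # TransientServiceError are hypothetical examples.
    import logging

    log = logging.getLogger("myapp")

    def handle_quote_request(request):
        log.info("quote request start: part=%s", request.part_number)
        try:
            # Rule: every hit to an external dependency gets logged at INFO.
            inventory = call_inventory_service(request.part_number)
            log.info("inventory service call ok: part=%s", request.part_number)
            result = build_quote(request, inventory)
            # Rule: log the end of the hit and whether the client got success or failure.
            log.info("quote request end: part=%s status=success", request.part_number)
            return result
        except TransientServiceError as e:
            # Rule: transient failures are WARN - visible and trendable, not silent.
            log.warning("inventory service transient failure, will retry: %s", e)
            raise
        except Exception as e:
            # Rule: real errors are ERROR, and the end-of-hit status says failure.
            log.error("quote request end: part=%s status=failure error=%s",
                      request.part_number, e)
            raise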

Real User Monitoring

Ah, this is more like it.  The alleged Holy Grail of monitoring is real user monitoring, where you passively look at the transactions coming in and out and log them.  Well, on the one hand, you don’t have to rely on the developers to log, you can log despite them.  But you don’t get as much insight as you’d think. If the output from the app isn’t detectable as an error, then the monitoring doesn’t help.  A surprising amount of the time, failures are not thrown as a 500 or other expected error code. And checking for content within a payload is often fragile.

Also, RUM tools tend to be network sniffer based, which don’t work well in the cloud or in many network topologies.  And you get so much data, that it can be hard to find the real problems without spending a lot of time on it.

No, Really – One Real World Example

We had a problem just this week that managed to successfully slip through all our layers of monitoring – luckily, our keen eyes caught it in preproduction. We had been planning a big app release and had been setting up monitoring for it. It seemed like everything was going fine. But then the back-end databases (SQL Azure in this case) had a pretty long string of failures for about 10 minutes, which brought the issue to our attention. As I looked into it, I realized that it was very likely we would have seen smaller spates of SQL Azure connection issues – and thus application outages – before. Why hadn’t we? I investigated.

We don’t have any good cloud-compliant real user monitoring in place yet. And the app was throwing a 200 HTTP code on an error (the error page displayed said 401, but the actual HTTP code was 200), so many of our synthetic monitors were fooled. Plus, the problem was occasional enough that hitting the app once every 10 minutes from Cloudkick usually didn’t detect it. We fixed that bad status code and looked at our database monitors. “I know we have monitors directly on the databases – why aren’t those firing?”

Our database metric monitors through Cloudkick, I was surprised to see, had lovely normal-looking graphs after the outage. I provoked another outage in test to see for myself, and sure enough, though the monitors “went red,” for some reason they were still providing what seemed to Cloudkick like legitimate data points, and once the monitors “went green,” nothing about any of the metric graphs indicated anything unusual! In other words, the historical graphs had legitimate-looking data and did not reveal the outage. That’s a big problem. So we worked on those monitors.

I still wanted to know if this had been happening before. “We use Splunk to aggregate our logs, I’ll go look there!” Well, there were no error lines in the log that would indicate a back-end database problem. Upon inquiring, I heard that since SQL Azure connection issues are a known and semi-frequent problem, logging of them was suppressed, because we have retry logic in place. I recommended that we log all failures, with ones that are going to be retried simply logged at a lower severity level like WARN, and ERROR reserved for failures that survive the whole spread of retries. I declared this a showstopper bug that had to be fixed before release – not everyone was happy with that, but sometimes DevOps requires tough love.
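
The pattern I was pushing for looks roughly like this (a sketch only – connect_with_retries(), connect_to_db(), and the retry counts are placeholders, not our actual code):

    # Sketch of "retry, but don't hide it": WARN on each transient failure, ERROR
    # only when the retries are exhausted. connect_to_db() is a placeholder.
    import logging
    import time

    log = logging.getLogger("myapp.db")

    def connect_with_retries(max_attempts=3, delay_seconds=1.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return connect_to_db()
            except ConnectionError as e:
                if attempt < max_attempts:
                    # Transient and about to be retried - but still visible and trendable.
                    log.warning("db connect failed (attempt %d of %d), retrying: %s",
                                attempt, max_attempts, e)
                    time.sleep(delay_seconds)
                else:
                    # Out of retries - this is a real failure, log it like one.
                    log.error("db connect failed after %d attempts: %s", max_attempts, e)
                    raise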

I was disturbed that we could have periods of outage that were going unnoticed despite our investment in synthetic monitoring, pulling metrics, and searching logs. When I looked back at all our metrics over periods of known outage and they all looked good, I admit I became somewhat irate. We fixed it and I’m happy with our detection now, but I hope this is instructive in showing you how bad assumptions and not fully understanding the pros and cons of each instrumentation approach can end up leaving “stacked holes” that end up profoundly compromising your view of your service!

Filed under DevOps

DevOps Tip: Design for Failure

We have had some interesting  internal discussions lately about application reliability.  It’s probably not a surprise to many of you that the cloud is unreliable, on a small scale that is.  Sure, on the large scale you use the cloud to make highly resilient environments. But a certain percentage of calls to the cloud fail – whether it’s Amazon’s or Azure’s management APIs, or hitting Amazon or Azure storage, or going through an Amazon ELB, or hitting SQL Azure. Heck, on Azure they plainly state that they will pull your instances out from under you, restart them, and move them to other hardware without notice. If you’re running 2 or more, they won’t do them all at the same time – so again, you get large scale resilience but at the cost of some small scale unreliability.

The problem is that people sometimes start from the assumption that their application is always working fine, unless you can prove otherwise. This is fundamentally the wrong assumption. You have to assume your application has problems unless you can prove it doesn’t. This changes your approach to testing, logging, and monitoring profoundly.

Take the all too common example of an app with intermittent failures. Let’s say it’s as bad as 1 in 20 times.  1 in 20 times a customer hits your application, it fails somehow. It is very likely you don’t know this. Because by default, you don’t know it. How would you? Well, obviously, by monitoring, logging, and testing. I’ll follow this up with a series of posts describing how and why those often fail to detect problems. The short form is that “ha ha, no they don’t.”

Here’s a bad story I’ll tell on myself.  Here at NI, we rolled out a PDF instant quote generation widget.  We have over 250 apps on ni.com, so we don’t put synthetic monitors on all of them (remind me to tell you about the time early at NI that I discovered synthetic monitoring was producing 30% of our site load). Apparently the logging wasn’t all that good either, it didn’t trigger any of our log monitoring heuristics. Anyway, come to find out later on that the app was failing in production about 75% of the time. This is an application on a “monitored” site, where a developer and a tester signed off on the app. Whoops.  If you do a cursory test and assume it’ll work – well you know what they say about assumptions – they make an ass out of “you” and “mption.” 🙂

Anyway, to me one of the good parts about the cloud is that they come out and say “we’re going to fail 2-5% of the time, code for it.” Before the cloud there were failures all the time too, but people managed to delude themselves into thinking there weren’t – that an application (even a complex Internet-based application) should just work, hit after hit, day after day, on into the future. So by having failure handling built in – like a lesser version of the Chaos Monkey – you’re not just making your app cloud friendly, you’re making it better.

Real engineers who make cars and whatnot know better. That’s why there was a big ol’ maintenance hatch on the side of the Hubble Space Telescope; if any of you have watched the Hubble 3D IMAX film you get to see them performing maintenance on it.  If a billion dollar telescope in fricking space has problems and needs to be maintainable, so does your little Web app.

But I see so many apps that don’t really take failure into account.  Oh, maybe they retry some connections if they fail. But what if you get to the end of your retries? What if the response you get back is an unexpected HTTP code or unexpected payload? You’d think in the age of try/catch and easily integrated logging frameworks you wouldn’t see this any more, but I see it all the time. It’s a combination of not realizing that failure is ubiquitous, and not thinking about the impact (especially the customer facing impact) of that failure.

This is one of the (many) great DevOps learning experiences – Ops helping teach Devs all the things that can go wrong that don’t really go wrong much in a “frictionless” lab environment.  “So, what do you do if your hard drive is suddenly not there?” (Common with Amazon EBS failures.)  “What do you do if you took data off that queue and then your instance restarts before you put it into the database?” (Hopefully a transaction.) “What do you do if you can’t make that network connection, are you retrying every 5 ms and then filling up the system’s TCP connections?” (True story.) “Hey, I’m sure your app is pure as the driven snow right now, but is it always going to work the same when the PaaS vendor changes the OS version under you?”
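
For that last “retrying every 5 ms” story, the usual fix is a bounded retry with exponential backoff – and treating an unexpected status code as a failure instead of a shrug. A generic sketch in Python with requests, not any particular app’s code:

    # Generic sketch: bounded retries with exponential backoff and a little jitter,
    # and an unexpected HTTP status treated as a failure rather than ignored.
    import logging
    import random
    import time

    import requests

    log = logging.getLogger("myapp.client")

    def call_backend(url, max_attempts=4):
        for attempt in range(max_attempts):
            try:
                resp = requests.get(url, timeout=5)
                if resp.status_code == 200:
                    return resp
                log.warning("unexpected status %d from %s", resp.status_code, url)
            except requests.RequestException as e:
                log.warning("request to %s failed: %s", url, e)
            if attempt < max_attempts - 1:
                # Back off 0.5s, 1s, 2s... plus jitter - not a 5 ms hot loop that
                # eats every TCP connection on the box.
                time.sleep(0.5 * (2 ** attempt) + random.uniform(0, 0.2))
        log.error("giving up on %s after %d attempts", url, max_attempts)
        raise RuntimeError("backend call to %s failed" % url)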

In all circumstances, you should:

  • Plan for failure (understand failure modes, retry, design for it)
  • Detect failure (monitor, log, etc.)
  • Plan for and detect failure of your schemes to plan for and detect failure!

We do some security threat modeling here. I wonder if there’s not a lightweight methodology like that which could be readily adapted for reliability modeling of apps.  Seems like something someone would have done… But a simple one, not like lame complicated risk matrices. I’ll have to research that.

Filed under DevOps