Tag Archives: monitoring

Monitoring Survey

James Turnbull (@kartar) has this year’s monitoring survey up, so I’m reposting his call for participants…

TL;DR – Please take the 2015 Monitoring Survey at
https://www.surveymonkey.com/s/monitoringsurvey2015.

Last year I ran a monitoring survey, whose data I also reviewed as a
series of posts on my blog
(http://kartar.net/2014/11/monitoring-survey—background/). I was
interested in running the survey because I think we’re seeing the
beginnings of a significant change in the maturity of the monitoring
landscape and I’d like to track that change.

I’ve decided to make the survey a yearly event and am coinciding the
launch of this year’s survey with Monitorama in Portland.

The survey takes about 5 minutes to fill out and the results will again
be presented on this blog, in some conference talks and made available
as Creative Commons licensed data. The survey is totally anonymous and
the data won’t be used for any commercial purposes.

You can find the survey here –
https://www.surveymonkey.com/s/monitoringsurvey2015.

In related news, if you can’t be at Monitorama try to watch along at http://monitorama.com/#watch!


Filed under Monitoring

An article I wrote for InfoWorld’s New Tech Forum on all the various monitoring techniques: Know your options for infrastructure monitoring


July 3, 2014 · 2:43 pm

Meet The Agile Admins At Velocity/DevOpsDays Silicon Valley!

Three of the four agile admins (James, Karthik, and myself) will be out at Velocity and DevOpsDays this week. Say hi if you see us!

James will be doing a workshop with Gareth Rushgrove on Tuesday 9-10:30 AM, “Battle-tested Code without the Battle – Security Testing and Continuous Integration.” Get hands on with gauntlt and other tools! [Conference site] [Lanyrd]

Ernest is doing a 5 minute sponsor keynote on Thursday, “A 5 Minute Checklist for Application Monitoring.” OK, so it’s during the USA vs Germany game – come see me anyway! I hate keynote sales pitches, so I’m not doing one; I’ll be talking about a Lean approach to monitoring and stuff to cover in your MVP. There’s a free white paper too, since what can you really say in 5 minutes? And so you know what to expect, the hashtag you’ll want to use is #getprobed! [Conference site] [Lanyrd]


Filed under Conferences

Monitoring and the State of DevOps

If you haven’t read the new 2014 State of DevOps Report from Puppet Labs and other luminaries, check it out now!

I also pulled out some of their findings on monitoring to inspire a post for the CopperEgg blog, Monitoring and the State of DevOps, which I thought I’d mention here too.


Filed under DevOps, Monitoring

Filtering Your Datadog Event Stream

At both NI and Bazaarvoice I was a Datadog user; I wrote a piece for them on filtering the event stream that has just been published on the Datadog blog.  Check it out!


Filed under DevOps

Velocity 2013 Day 2 Liveblog: Performance Troubleshooting Methodology

Stop the Guessing: Performance Methodologies for Production Systems

Slides are on Slideshare!

Brendan Gregg, Joyent

Note to the reader – this session ruled.

He’s the DTrace guy, but he’s talking about performance for the rest of us. Coming soon: his book Systems Performance: Enterprise and the Cloud.

Performance analysis – where do I start and what do I do? Like troubleshooting, it’s easy to fumble around without a playbook. “Tools” are not the answer any more than they’re the answer to “how do I fix my car?”

Guessing Methodologies and Not-Guessing Methodologies (the former are bad)

Anti-Methods

Traffic light anti-method

Monitors green? You’re fine. But of course thresholds are a coarse-grained tool, and performance is complex. Is X bad? “Well, sometimes, except when X, but then when Y, but…” False positives and false negatives abound.

You can improve it with more subjective metrics (like weather icons); the objective ones are errors, alerts, SLAs – facts.

See the dtrace.org status dashboard blog post.

So traffic light is intuitive and fast to set up but it’s misleading and causes thrash.

Average anti-method

Measure the average/mean, assume a normal-like unimodal distribution and then focus your investigation on explaining the average.

This misses multiple peaks, outliers.

Fix this by adding histograms, density plots, frequency trails, scatter plots, and heat maps.
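[Ed: A quick hypothetical illustration of why the mean misleads – a bimodal latency distribution whose average looks healthy. The numbers and metric are made up:]

```python
# Hypothetical illustration: a bimodal latency distribution whose mean looks healthy.
import numpy as np

rng = np.random.default_rng(42)
fast = rng.normal(20, 5, size=9000)    # most requests ~20 ms
slow = rng.normal(400, 50, size=1000)  # a slow 10% tail ~400 ms
latency_ms = np.concatenate([fast, slow])

print(f"mean = {latency_ms.mean():.1f} ms")              # ~58 ms, hides the slow mode
print(f"p95  = {np.percentile(latency_ms, 95):.1f} ms")  # the tail shows up here
print(f"p99  = {np.percentile(latency_ms, 99):.1f} ms")

# A coarse text histogram makes the two modes obvious.
counts, edges = np.histogram(latency_ms, bins=20)
for count, left in zip(counts, edges):
    print(f"{left:7.0f} ms | {'#' * (int(count) // 250)}")
```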

Concentration game anti-method

Pick a metric, find another that looks like it, investigate.

Simple and can discover correlations, but it’s time consuming and mostly you get more symptoms and not the cause.

Workload characterization method

Who is causing the load, why, what, and how? The target is the workload, not the performance.

Lets you eliminate unnecessary work. It only solves load issues, though, and most things you examine won’t be a problem.

[Ed: When we did our Black Friday performance visualizer I told them “If I can’t see incoming traffic on the same screen as the latency then it’s bullshit.”]

USE method

For every resource, check utilization, saturation, errors.

util: time resource busy

sat: degree of queued extra work

Finds your bottlenecks quickly

Metrics that are hard to get become feature requests.

You can apply this methodology without knowledge of the system (he did the Apollo 11 command module as an example).

See the USE Method blog post for detailed commands.
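[Ed: A minimal sketch of what such a checklist could look like as data – these are common Linux commands I’d reach for, not Brendan’s actual checklist, so swap in whatever your platform exposes:]

```python
# A sketch of a USE checklist as data: for each resource, a command (or gap) for
# utilization, saturation, and errors. Commands here are common Linux tools.
USE_CHECKLIST = {
    "CPU": {
        "utilization": "mpstat -P ALL 1   # per-CPU busy time",
        "saturation":  "vmstat 1          # run-queue length ('r' column)",
        "errors":      "(hard to get -> feature request)",
    },
    "Memory": {
        "utilization": "free -m",
        "saturation":  "vmstat 1          # swap in/out ('si'/'so' columns)",
        "errors":      "dmesg | grep -i oom",
    },
    "Disk": {
        "utilization": "iostat -xz 1      # %util per device",
        "saturation":  "iostat -xz 1      # queue length (avgqu-sz)",
        "errors":      "smartctl -a /dev/sda",
    },
    "Network": {
        "utilization": "sar -n DEV 1      # throughput vs. interface speed",
        "saturation":  "netstat -s        # retransmits, overruns, drops",
        "errors":      "ip -s link        # RX/TX error counters",
    },
}


def print_checklist(checklist):
    """Print the checklist so gaps (missing metrics) are easy to spot."""
    for resource, checks in checklist.items():
        print(resource)
        for dimension, command in checks.items():
            print(f"  {dimension:<12} {command}")


if __name__ == "__main__":
    print_checklist(USE_CHECKLIST)
```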

For cloud computing you also need the “virtual” resource limits – instance network caps. App stuff like mutex locks and thread pools.  Decompose the app environment into queueing systems.

[Ed: Everything is pools and queues…]

So go home and, for your system and app environment, create a USE checklist and fill out the metrics you have. Then you know what you have, know what you don’t have, and have a checklist for troubleshooting.

So this is bad ass and efficient, but limited to resource bottlenecks.

Thread State Analysis Method

Six states – executing, runnable, anon paging, sleeping, lock, idle

Getting this isn’t super easy, but you can use dtrace, schedstats, delay accounting, I/O accounting, and /proc.

Based on where the time is spent, this leads to direct actionables.

Compare to e.g. database query time – it’s not self-contained. “Time spent in X” – is it really? Or is it contention?

So this identifies, quantifies, and directs but it’s hard to measure all the states atm.
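[Ed: As a rough Linux-only approximation – not the tooling he’s describing – you can sample /proc thread state codes. R is roughly executing/runnable, S sleeping, D uninterruptible waits; it can’t separate lock wait from anonymous paging the way DTrace or delay accounting can, but it’s a start:]

```python
# Rough sketch (an assumption, not the talk's tooling): sample /proc/<pid>/task/*/stat
# and bucket threads by kernel state code as a crude proxy for thread state analysis.
import collections
import glob
import time


def sample_thread_states(pid, samples=10, interval=0.5):
    counts = collections.Counter()
    for _ in range(samples):
        for stat_path in glob.glob(f"/proc/{pid}/task/*/stat"):
            try:
                with open(stat_path) as f:
                    # state code is the field right after "(comm)"
                    fields = f.read().rsplit(")", 1)[1].split()
                counts[fields[0]] += 1
            except OSError:
                pass  # thread exited between glob and read
        time.sleep(interval)
    return counts


if __name__ == "__main__":
    import sys
    print(sample_thread_states(int(sys.argv[1])))
```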

There’s many more if perf is your day job!

Stop the guessing and go with methodologies that pose questions and seek metrics to answer them. P.S. Use dtrace!

dtrace.org/blogs/brendan


Filed under Conferences, DevOps

Velocity 2013 Day 1 Liveblog – Bringing the Noise

Next up it’s the Etsy Crew!  A great bunch of guys.  Rembetsy is cutely nervous and proud about his guys presenting. Slides are available here!

And the topic is Bring the Noise: Making Effective Use of a Quarter Million Metrics by @abestanway and @jonlives. Anomaly detection is hard…

At Etsy we want to deploy lots – we have 250 committers, everyone has to deploy code, coder or not. Big “deploy to production” button. 30 deploys/day.

How can we control that kind of pace? Instead of fearing error, we put in the means to detect and recover quickly.

They use ganglia, graphite, and nagios – and they wrote statsd, supergrep, skyline, and oculus as well.

First line of defense – supergrep, a Node daemon that tails log files looking for errors.

But not everything throws errors. 😦

So they use statsd to collect zillions of metrics and put them onto dashboards. But dashboards are manually curated “what’s important” – and if you have .25M metrics you just can’t do that.  So the dashboard approach has fallen over here.  And if no one’s watching the graph, why do you have it?
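[Ed: For context, the statsd wire format is just “name:value|type” over UDP. A minimal sketch with made-up metric names, assuming statsd’s default port:]

```python
# Emit metrics in the statsd wire format ("name:value|type") over UDP.
import socket
import time

STATSD_ADDR = ("127.0.0.1", 8125)  # statsd's default UDP port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)


def incr(name, value=1):
    sock.sendto(f"{name}:{value}|c".encode(), STATSD_ADDR)   # counter


def timing(name, ms):
    sock.sendto(f"{name}:{ms}|ms".encode(), STATSD_ADDR)     # timer


start = time.monotonic()
# ... do some work worth measuring ...
incr("web.checkout.requests")
timing("web.checkout.duration", (time.monotonic() - start) * 1000)
```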

So that’s why Satan invented Nagios, to alert when you can’t look at a graph, but again it breaks down at scale.

Basically you have unknown anomalies and unknown correlations.

They have “kale,” their monitoring stack to try to solve this – skyline solves anomaly detection and oculus solves metrics correlation.

Skyline

A realtime anomaly detection system (where realtime means ~90s). They have a 10s flush on statsd and 1-minute resolution on ganglia, so that’s still fast.

They had to do this in memory and not on disk, using Redis. But how to stream all the metrics in? They looked around and realized that Graphite’s carbon-relay could fork the stream into Skyline by pretending it’s another backup Graphite destination.

They import from ganglia too via graphite reading its RRDs. Skyline also has other listeners.

To store time-series data in Redis, minimizing I/O and memory… redis.append() is constant time.

Tried to store in JSON but that was slow (half the CPU time was decoding JSON).

Found MessagePack, a binary serialization protocol. Much faster.
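[Ed: A sketch of that append-only pattern – not Skyline’s actual code – using redis-py and msgpack; the key naming is invented:]

```python
# Each datapoint is msgpack-packed and appended to a per-metric Redis string,
# so writes are a constant-time APPEND and reads unpack the whole blob.
import time

import msgpack
import redis

r = redis.Redis()


def record(metric, value, timestamp=None):
    packed = msgpack.packb([timestamp or time.time(), value])
    r.append(f"metrics:{metric}", packed)   # O(1) append, no read-modify-write


def read_series(metric):
    blob = r.get(f"metrics:{metric}") or b""
    unpacker = msgpack.Unpacker()
    unpacker.feed(blob)
    return [tuple(point) for point in unpacker]   # [(timestamp, value), ...]


record("host1.cpu.user", 42.0)
print(read_series("host1.cpu.user"))
```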

So they keep appending, but had to have a process go through and clean up old data past the defined duration. Hence “roomba.py.” Python because of all the good stats libraries. They just keep 24 hours of operational data.

But what is an anomaly, and how do you detect it?

Skyline uses the consensus model. [Ed. This is a common way of distinguishing sensor faults from process faults in real-world engineering.]

Using statistical process control – a metric is anomalous if its latest datapoint is more than three standard deviations above its moving average.
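[Ed: A sketch of that three-sigma rule in plain numpy – this is just one detector, not the whole consensus ensemble, and the window size is an arbitrary choice:]

```python
# Flag the latest point if it sits more than `sigmas` standard deviations above
# the moving average of the trailing window.
import numpy as np


def is_anomalous(series, window=60, sigmas=3.0):
    series = np.asarray(series, dtype=float)
    if len(series) <= window:
        return False                      # not enough history to judge
    history = series[-window - 1:-1]      # trailing window, excluding the latest point
    mean, std = history.mean(), history.std()
    if std == 0:
        return series[-1] != mean         # flat series: any change counts
    return (series[-1] - mean) / std > sigmas


baseline = list(np.random.default_rng(0).normal(100, 5, 300))
print(is_anomalous(baseline))            # almost certainly False
print(is_anomalous(baseline + [200]))    # a large spike: True
```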

Use “Grubbs’ test” and “ordinary least squares”… OK, most of the crowd is lost now. Histogram binning.

Problems – seasonality, spike influence (a big spike biases the average, masking smaller spikes), normality (stddev is for normal distributions, and most data isn’t normal), and parameters. They are working to improve their algorithms.

OK, how about correlations?

Oculus does this.  Can we just compare the graphs? Image comparison is expensive and slow. Numerical comparison is not a hard problem.

“Euclidean Distance” is the most basic comparison of two time series. Dynamic Time Warping helps with phase shifts from time. But that’s expensive – O(n^2).

So how can we discard “obviously” dissimilar data? Use a shape description alphabet – “basically flat,” “sharp increment,” etc. Apply it to the graphs, cluster using elasticsearch, then run the dynamic time warping algorithm on that smaller sample set to polish it. But that’s still slow. Luckily there’s a fast DTW variant that’s O(n).
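[Ed: A hypothetical sketch of the two-stage idea – a coarse shape-alphabet fingerprint for cheap prefiltering, then Euclidean distance on the survivors. The token names and thresholds are invented, and the Elasticsearch clustering and fast-DTW polishing pass from the talk are omitted:]

```python
# Stage 1: encode point-to-point deltas (in units of the series' std dev) as a
# coarse shape fingerprint. Stage 2: score fingerprint matches with Euclidean distance.
import numpy as np


def fingerprint(series, flat=0.25, sharp=1.0):
    series = np.asarray(series, dtype=float)
    deltas = np.diff(series) / (series.std() or 1.0)
    tokens = []
    for d in deltas:
        if abs(d) < flat:
            tokens.append("flat")
        elif d > 0:
            tokens.append("sharp_up" if d > sharp else "up")
        else:
            tokens.append("sharp_down" if d < -sharp else "down")
    return " ".join(tokens)


def euclidean(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))


query = [1, 1, 1, 5, 9, 9, 8]
candidate = [2, 2, 2, 6, 10, 10, 9]   # same shape, shifted up
unrelated = [9, 8, 7, 6, 5, 4, 3]

print(fingerprint(query) == fingerprint(candidate))   # True: passes the prefilter
print(euclidean(query, candidate), euclidean(query, unrelated))
```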

So they do an Elasticsearch phrase query with a high slop against the shape fingerprints.

They populate Elasticsearch from Redis using Resque workers, but that makes it slow to update and search. This is solved with a rotating pool of Elasticsearch servers (new index/last index), which lets you purge the index and reindex. They cron-rotate every 2 minutes; an import takes 25s, but queries take a while and you don’t want to rotate the index out from under them.

A Sinatra frontend queries ES and renders results off the live ES index.
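[Ed: A guess at what the slop-tolerant phrase query might look like in Elasticsearch’s query DSL – index and field names are invented, and a modern Python client is assumed:]

```python
# Search shape fingerprints with a slop-tolerant phrase query.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query_fingerprint = "flat flat sharp_up sharp_up flat down"

results = es.search(
    index="metric-fingerprints",           # hypothetical index name
    query={
        "match_phrase": {
            "fingerprint": {
                "query": query_fingerprint,
                "slop": 10,                 # tolerate small token gaps/reorderings
            }
        }
    },
    size=50,
)

for hit in results["hits"]["hits"]:
    print(hit["_source"]["metric_name"], hit["_score"])
```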

Save collections of interesting correlations and then index those, so that later searches match against current data but also old fingerprints.

DevOps is the key to us being able to do this. Abe the dev and Jon the ops guy managed to work all this out in a pretty timely manner.

Demo: Draw your query! He sketched a waveform and it found matching metrics – nice.


Filed under Conferences, DevOps