Up and running with Vagrant

Last night I gave a 5 minute lightning presentation on Vagrant for the Austin Cloud User Group’s December meeting, which was aptly titled “The 12 Clouds of Christmas.”  These ’12 clouds’ fleshed out into 12 lightning talks on different clouds and implementations thereof.  The format was great, and I thought it let everyone get exposure to new tech.

Below are the slides from the demo.  Slides 9 and 10 are where I showed the actual setup (Vagrantfile, rvm), the VirtualBox console, and ran vagrant commands.  Squint real hard and tilt your head to the right and maybe you can envision the actual demo portion of the talk…  Or if your imagination fails you, you can watch some random Vagrant demos on YouTube.
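
For anyone who missed the live part, here’s roughly the command flow from the demo – a minimal sketch, assuming Vagrant installed as a Ruby gem (hence the rvm setup in the slides) and a base box URL of your choosing:

gem install vagrant                # Vagrant ships as a Ruby gem
vagrant box add base <box url>     # download a base box image
vagrant init                       # drop a starter Vagrantfile in the current directory
vagrant up                         # boot the VM in VirtualBox per the Vagrantfile
vagrant ssh                        # shell into the running VM
vagrant destroy                    # tear it all down when you’re done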


Amazon EC2 EBS Instances and Ephemeral Storage

Here are a couple of useful tidbits I’ve gleaned.

When you start an “instance-store” Amazon EC2 instance, you get a certain amount of ephemeral storage allocated and mounted automatically.  The amount of space varies by instance size, and so do the storage location and format; both are spelled out in Amazon’s instance type documentation.

The upshot is that if you start an “instance-store” small Linux EC2 instance, it automagically has a free 150 GB /mnt disk and a 1 GB swap partition up and runnin’ for ya.  (mount points vary by image, but that’s where they are in the Amazon Fedora starter.)

[root@domU-12-31-39-00-B2-01 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10321208   1636668   8160252  17% /
/dev/sda2            153899044    192072 145889348   1% /mnt
none                    873828         0    873828   0% /dev/shm
[root@domU-12-31-39-00-B2-01 ~]# free
total       used       free     shared    buffers     cached
Mem:       1747660      84560    1663100          0       4552      37356
-/+ buffers/cache:      42652    1705008
Swap:       917496          0     917496

But, you say, I am not old or insane!  I use EBS-backed images, just as God intended.  Well, that’s a good point.  But when you pull up an EBS image, these ephemeral disk areas are not available to you.  The good news is, that’s just by default.

The ephemeral storage is still available and can be used (for free!) by an EBS-backed image.  You just have to set the block devices up either explicitly when you run the instance or bake them into the image.

Runtime:

You refer to the ephemeral chunks as “ephemeral0”, “ephemeral1”, etc. – they don’t tell you explicitly which is which, but basically you just count up based on your instance type (review the doc).  A small instance has an ephemeral0 (ext3, 150 GB) and an ephemeral1 (swap, 1 GB).  To add them to an EBS instance and mount them in the “normal” places, you do:

ec2-run-instances <ami id> -k <your key> --block-device-mapping '/dev/sda2=ephemeral0' \
--block-device-mapping '/dev/sda3=ephemeral1'

On the instance you have to mount them – add these to /etc/fstab and mount -a or do whatever else it is you like to do:

/dev/sda3                 swap                    swap    defaults 0 0
/dev/sda2                 /mnt                    ext3    defaults 0 0

And if you want to turn the swap on immediately, “swapon /dev/sda3”.
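
As a sanity check, you can see what actually got mapped from inside the instance via the EC2 metadata service.  A quick sketch (output details vary a bit by AMI):

curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0

The first call lists the mapping names (ami, ephemeral0, root, swap and so on); hitting a specific one returns the device it’s attached as.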

Image:

You can also bake them into an image.  Add an fstab like the one above, and when you create the image, do it like this, using the exact same --block-device-mapping flag:

ec2-register -n <ami name> -d "AMI Description" --block-device-mapping '/dev/sda2=ephemeral0' \
--block-device-mapping '/dev/sda3=ephemeral1' --snapshot <snapshot id> --architecture i386 \
--kernel <aki id> --ramdisk <ari id>

Ta da. Free storage that doesn’t persist.  Very useful as /tmp space.  Opinion is split among the Linuxerati about whether you want swap space nowadays or not; some people say some mix of  “if you’re using more than 1.8 GB of RAM you’re doing it wrong” and “swapping is horrid, just let bad procs die due to lack of memory and fix them.”  YMMV.

Ephemeral EBS?

As another helpful tip, let’s say you’re adding an EBS to an image that you don’t want to be persistent when the instance dies.  By default, all EBSes are persistent and stick around muddying up your account till you clean them up.   If you don’t want certain EBS-backed drives to persist, what you do is of the form:

ec2-modify-instance-attribute --block-device-mapping "/dev/sdb=vol-f64c8e9f:true" i-e2a0b08a

Where ‘true’ means “yes, please, delete me when I’m done.”  This command throws a stack trace to the tune of

Unexpected error: java.lang.ClassCastException: com.amazon.aes.webservices.client.InstanceBlockDeviceMappingDescription
cannot be cast to com.amazon.aes.webservices.client.InstanceBlockDeviceMappingResponseDescription

But it works, that’s just a lame API tools bug.
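
You can also set this up front at launch time instead of after the fact.  If I’m remembering the API tools syntax right, the same --block-device-mapping flag takes a snapshot/size/delete-on-termination form – a rough sketch, with the placeholders being whatever snapshot and size you actually want:

ec2-run-instances <ami id> -k <your key> --block-device-mapping '/dev/sdb=<snapshot id>:<size in GB>:true'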


Defining Agile Operations and DevOps

I recently read a great blog post by Scott Wilson that was talking about the definitions of Agile Operations, DevOps, and related terms.  (Read the comments too, there’s some good discussion.)  From what I’ve heard so far, there are a bunch of semi-related terms people are using around this whole “new thing of ours.”

The first is DevOps, which has two totally different, frequently used definitions.

1.  Developers and Ops working closely together – the “hugs and collaboration” definition

2.  Operations folks taking up development best practices and writing code for system automation

The second is Agile Operations, which also has different meanings.

1.  Same as DevOps, whichever definition of that I’m using

2.  Using agile principles to run operations – process techniques, like iterative development or even kanban/TPS kinds of process stuff.  Often with a goal of “faster!”

3.  Using automation – version control, automatic provisioning/control/monitoring.  Sometimes called “Infrastructure Automation” or similar.

This leads to some confusion, as most of these specific elements can be implemented in isolation.  For example, I think the discussion at OpsCamp about “Is DevOps an antipattern” was predicated on an assumption that DevOps meant only DevOps definition #2, “ops guys trying to be developers,” and made the discussion somewhat odd to people with other assumed definitions.

I have a proposed set of definitions.  To explain it, let’s look at Agile Development and see how it’s defined.

Agile development, according to Wikipedia and the agile manifesto, consists of a couple of different “levels” of things.  To sum up the Wikipedia breakdown,

  • Agile Principles – like “business/users and developers working together.”  These are the core values that inform agile, like collaboration, people over process, software over documentation, and responding to change over planning.
  • Agile Methods – specific process types.  Iterations, Lean, XP, Scrum.  “As opposed to waterfall.”
  • Agile Practices – techniques often found in conjunction with agile development, not linked to a given method flavor, like test driven development, continuous integration, etc.

I believe the different parts of Agile Operations that people are talking about map directly to these three levels.

  • Agile Operations Principles includes things like dev/ops collaboration (DevOps definition 1 above); things like James Turnbull’s 4-part model seem to be spot on examples of trying to define this arena.
  • Agile Operations Methods includes process you use to conduct operations – iterations, kanban, stuff you’d read in Visible Ops; Agile Operations definition #2 above.
  • Agile Operations Practices includes specific techniques like automated build/provisioning, monitoring, anything you’d have a “toolchain” for.  This contains DevOps definition #2 and Agile Operations definition #3 above.

I think it’s helpful to break them up along the same lines as agile development, though in the end some of those levels should merge once developers understand ops is part of system development too…  There shouldn’t be a separate “user/dev collaboration” and “dev/ops collaboration”; in a properly mature model it becomes a “user/dev/ops collaboration,” for example.

I think the dev2ops guys’ “People over Process over Tools” diagram mirrors this almost exactly – the people being one of the important agile principles, process being a large part of the methods, and tools being used to empower the practices.

What I like about that diagram, and why I want to bring this all back to the Agile Manifesto discussion, is that having various sub-definitions increases the risk that people will implement the processes or tools without the principles in mind, which is definitely an antipattern.  The Agile guys would tell you that iterations without collaboration aren’t likely to work out real well.

And it happens in agile development too – there are some teams here at my company that have adopted the methods and/or tools of agile but not its principles, and the results are suboptimal.

Therefore I propose that “Agile Operations” is an umbrella term for all these things, and we keep in mind the principles/methods/practices differentiation.

If we want to call the principles “devops” for short and some of the practices “infrastructure automation” for short, I think that would be fine…  Dev/ops collaboration is ONE of the important principles, but probably not the entirety; and infrastructure automation is one of the important practices, but there are probably others.

A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article “Golden Image or Foil Ball?”, I was thinking pretty hard about the use of images in our new automated infrastructure.  He’s pretty against them.  After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed – Starting a prebuilt image is faster than reinstalling everything on an empty one.  In the world of dynamic scaling, there’s a meaningful difference between a “couple minute spinup” and a “fifteen minute spinup.”
  2. Reliability – The more work you are doing at runtime, the more there is to go wrong.  I bet I’m not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of ’em.  And the more stuff you’re pulling to build your image, the more failure points you have.
  3. Flexibility – Dynamically building from stem cell kinda makes sense if you’re using 100% free open source and have everything automated.  What if, however, you have something that you need to install that just hasn’t been scripted – or is very hard to script?  Like an install of some half-baked Windows software that doesn’t have a command line installer and you don’t have a tool that can do it?  In that case, you really need to do the manual install in non-realtime as part of an image build.  And of course many suppliers are providing software as images themselves nowadays.
  4. Traceability – What happens if you need to replicate a past environment?  Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons.  “I keep a bunch of old software repo versions so I can mostly build a machine like it” – somewhat less so.

In the end, it’s a question of using intermediate deliverables.  Do you recompile all the code and every third party package every time you build a server?  No, you often use binaries – it’s faster and more reliable.  Binaries are the app guys’ equivalent of “images.”

To address Luke’s three concerns from his article specifically:

  1. Image sprawl – if you use images, you eventually have a large library of images you have to manage.  This is very true – but you have to manage a lot of artifacts all up and down the chain anyway.  Given the “manual install” and “vendor supplied image” scenarios noted above, if you can’t manage images as part of your CM system then it’s just not a complete CM system.
  2. Updating your images – Here, I think Luke makes some not entirely valid assumptions.  He notes that once you’re done building your images, you’re still going to have to make changes in the operational environment (“bootstrapping”).  True.  But he thinks you’re not going to use the same tool to do it.  I’m not sure why not – our approach is to use automated tooling to build the images – you don’t *want* to do it manually for sure – and Puppet/Chef/etc. works just fine to do that.  So if you have to update something at the OS level, you do that and let your CM system blow everything on top – and then burn the image (see the sketch right after this list).  Image creation and automated CM aren’t mutually exclusive – the only reason people don’t use automation to build their images is the same reason they don’t always use automation on their live servers, which is “it takes work.”  But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it’s a good conservation of work to use the same package for both. (Besides bootstrapping, there’s other stuff like moving content that shouldn’t go on images.)
  3. Image state vs running state – This one puzzles me.  With images, you do need to do restarts to pull in image-based changes.  But with virtually all software and app changes you have to as well – maybe not a “reboot,” but a “service restart,” which is virtually as disruptive.  Whether you “reboot  your database server” or “stop and start your database server, which still takes a couple minutes”, you are planning for downtime or have redundancy in place.  And in general you need to orchestrate the changes (rolling restarts, etc.) in a manner that “oh, pull that change whenever you want to Mr. Application Server” doesn’t really work for.
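
To make the “CM builds the image” workflow concrete, here’s a rough sketch – not our exact process, and puppetd/ec2-create-image are just stand-ins for whatever CM agent and image-creation tooling your shop actually runs:

# boot a clean “stem cell” instance, then let the CM tool lay everything down
puppetd --test                                # or chef-client, etc.
# clean out anything that shouldn’t get baked in (logs, host keys, temp content)
# then snapshot the result into a reusable image (EBS-backed instances)
ec2-create-image <instance id> -n "appserver-baseline" -d "built by CM run"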

In closing, I think images are useful.  You shouldn’t treat them as a replacement for automated CM – they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they’re more like Russian nesting dolls, and have advantages over starting from scratch with every box.

Enterprise Systems vs. Agility

I was recently reading a good Cameron Purdy post where he talks about his eight theses regarding why startups or students can pull stuff off that large enterprise IT shops can’t.

My summary/trenchant restatement of his points:

  1. Changing existing systems is harder than making a custom-built new one (version 2 is harder)
  2. IT veterans overcomplicate new systems
  3. The complexity of a system increases exponentially the work needed to change it (versions 3 and 4 are way way harder)
  4. Students/startups do fail a lot, you just don’t see those
  5. Risk management steps add friction
  6. Organizational overhead (paperwork/meetings) adds friction
  7. Only overconservative goons work in enterprise IT anyway
  8. The larger the org, the more conflict

Though I suspect #1 and #3 are the same, #2 and #5 are the same, and #6 and #8 are the same, really.

I’ve been thinking about this lately with my change from our enterprise IT Web site to a new greenfield cloud-hosted SaaS product in our R&D organization.  It’s definitely a huge breath of fresh air to be able to move fast.  My observations:

Complexity

The problem of systems complexity (theses #1 and #3) is a very real one.  I used to describe our Web site as having reached “system gridlock.”  There were hundreds of apps running dozens to a server with poorly documented dependencies on all kinds of stuff.  You would go in and find something that looked “wrong” – an Apache config, script, load balancer rule, whatever – but if you touched it some house of cards somewhere would come tumbling down.  Since every app developer was allowed to design their own app in its own tightly coupled way, we had to implement draconian change control and release processes in an attempt to stem the tide of people lining up to crash the Web site.

We have a new system design philosophy for our new gig which I refer to as “sharing is the devil.”  All components are separated and loosely coupled.  Using cloud computing for hardware and open source for software makes it easy and affordable to have a box that does “only one thing.”  In traditional compute environments there’s pressure to “use up all that CPU before you add more”, which results in a penny wise, pound foolish strategy of consolidation.  More and more apps and functions get crunched closer together and when you go back to pull them out you discover that all kinds of new connections and dependencies have formed unbidden.

Complication

Overcomplicating systems (#2 and #5) can be somewhat overcome by using agile principles.  We’ve been delving heavily into doing not just our apps but also our infrastructure according to an agile methodology.  It surfaces your requirements – frankly, systems people often get away with implementing whatever they want, without having a spec let alone one open to review.  Also, it makes you prioritize.  “Whatever you can get done in this two week iteration, that’s what you’ll have done, and it should be working.”  It forces focus on what is required to get things to work and delays more complex niceties till later as there’s time.

Conservatism

Both small and large organizations can suffer from #6 and #8.  That’s mostly a mindset issue.  I like to tell the story about how we were working on a high level joint IT/business vision for our Web site.  We identified a number of “pillars” of the strategy we were developing – performance, availability, TCO, etc.  I had identified agility as one, but one of the application directors just wasn’t buying into it.  “Agility, that’s weird, how do we measure that, we should just forget about it.”  I finally had to take all the things we had to the business head of the Web and say “of these, which would you say is the single most important one?”  “Agility, of course,” he said, as I knew he would.  I made it a point to train my staff that “getting it done” was the most important thing, more important than risk mitigation or crossing all the t’s and dotting all the i’s.  That can be difficult if the larger organization doesn’t reward risk and achievement over conservatism, but you can work on it.

OpsCamp Debrief

I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focusing on the cloud, and it was a great time!  Here’s my after action report.

The event invite said it was in the Spider House, a cool local coffee bar/normal bar.  I hadn’t been there before, but other people who had been there said “That’s insane!  They’ll never fit that many people!  There’s outside seating but it’s freezing out!”  That gave me some degree of trepidation, but I still hauled myself downtown by 8 AM on a Saturday (sigh!).  Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free!  There were a lot of people there; we weren’t overfilling the place, but it was definitely at capacity, with nearly 100 people in attendance.

I had just heard of OpsCamp through word of mouth, and figured it was just going to be a gathering of local Austin Web ops types.  Which would be entertaining enough, certainly.  But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows; CEOs and other high-ranking guys from various Web ops related tool companies.  Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies from Reductive Labs (creator of Puppet), Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud.  Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).

You can read all the tweets about the event if you swing that way.

OpsCamp kinda grew out of an earlier thing, BarCampESM, also in Austin two years ago.  I never heard about that, wish I had.

How It Went

I had never been to an “unconference” before.  Basically there’s no set agenda, it’s self-emergent.  It worked pretty well.  I’ll describe the process a bit for other noobs.

First, there was a round of lightning talks.  Brett from Rackspace noted that “size matters,” Bill from Zenoss said “monitoring is important,” and Luke from Reductive claimed that “in 2-4 years ‘cloud’ won’t be a big deal, it’ll just be how people are doing things – unless you’re a jackass.”

Then it was time for sessions.  People got up, wrote a proposed session name on a piece of paper, and pitched it in front of the group; a hand-count of “how many people find this interesting” was taken.

Candidates included:

  • service level to resolution
  • physical access to your cloud assets
  • autodiscovery of systems
  • decompose monitoring into tool chain
  • tool chain for automatic provisioning
  • monitoring from the cloud
  • monitoring in the cloud – widely dispersed components
  • agent based monitoring evolution
  • devops is the debil – change to the role of sysadmins
  • And more

We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions.  They were:

  • monitoring in the cloud
  • config mgmt in the cloud

This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.

Sadly, the whole-group discussions, especially the monitoring one, were unfruitful.  For a long-ass time people threw out brilliant quips about “Why would you bother monitoring a server anyway” and other such high-theory wonkery.  I got zero value out of these, which was sad because the topics were crucially interesting – just too unfocused; you had people coming at the problem 100 different ways in sound bites.  The only note I bothered to write down was that “monitoring porn” (too many metrics) makes it hard to do correlation.  We had that problem here, and invested in a (horrors) non-open-source tool, Opnet Panorama, that has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.

Sessions

There were three sessions.  I didn’t take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛

Continue reading

Come To OpsCamp!

Next weekend, Jan 30 2010, there’s a Web Ops get-together here in Austin called OpsCamp!  It’ll be a Web ops “unconference” with a cloud focus.  Right up our alley!  We hope to see you there.

Dang, People Still Love Them Some IE6

We get a decent bit of Web traffic here on our site.  I was looking at the browser and platform breakdowns and was surprised to see IE6 still in the lead!  I’m not sure if these stats are representative of “the Internet in general,” but I am willing to bet they are representative of enterprise-type users, and we get enough traffic that most statistical noise should be filtered out.  I thought I’d share this; most of the browser market share research out there is more concerned with the IE vs. Firefox (vs. whoever) competition and less with useful information like versions.  Heck, we had to do custom work to get the Firefox version numbers; our Web analytics vendor doesn’t even provide that.  In the age of Flash and Silverlight and other fancy schmancy browser tricks, disregarding what versions and capabilities your users run is probably a bad idea.

  1. IE6 – 23.46%
  2. IE7 – 21.37%
  3. Firefox 3.5 – 17.28%
  4. IE8 – 14.62%
  5. Firefox 3 – 12.52%
  6. Chrome – 4.38%
  7. Opera 9 – 2.20%
  8. Safari – 1.95%
  9. Firefox 2 – 1.27%
  10. Mozilla – 0.48%

It’s pretty interesting to see how many people are still using that old of a browser, probably the one their system came loaded with originally.  On the Firefox users, you see the opposite trend – most are using the newest and it tails off from there, probably what people “expect” to see.  The IE users start with the oldest and tail towards the newest!  You’d think that more people’s IT departments would have mandated newer versions at least.  I wish we could see what percentage of our users are hitting “from work” vs. “from home” to see if this data is showing a wide disparity between business and consumer browser tech mix.

Bonus stats – Top OSes!

  1. Windows XP – 76.5%
  2. Windows Vista – 14.3%
  3. Mac – 2.7%
  4. Windows NT – 1.8%
  5. Linux – 1.8%
  6. Win2k – 1.5%
  7. Windows Server 2003 – 1.2%

Short form – “everyone uses XP.”  Helps explain the IE6 popularity because that’s what XP shipped with.

Edit – maybe everyone but me knew this, but there’s a pretty cool “Market Share” site that lets people see in depth stats from a large body of data…  Their browser and OS numbers validate ours pretty closely.

Oracle + BEA Update

A year ago I wrote about Oracle’s plan on how to combine BEA Weblogic and OAS.   A long time went by before any more information appeared – we met with our Oracle reps last week to figure out what the deal is.  The answer wasn’t much more clear than it was way back last year.  They do certainly want some kind of money to “upgrade” but it seems poorly thought through.

OAS came in various versions – Java, Standard, Standard One, Enterprise, and then the SOA Suite versions.  The new BEA stack, now “Fusion Middleware 11g,” comes in different versions as well.

  • WLS Standard
  • WLS Enterprise – adds clustering, costs double
  • WLS Suite – adds Coherence, Enterprise Manager, and JRockit realtime, costs quadruple

But they can’t tell us what OAS product maps to what FMW version.

There is also an oddly stripped-down “Basic” edition which is noted as being a free upgrade from OAS SE, but it strips out a lot of JMS and WS stuff; there’s an entire slide of stuff that gets stripped out and it’s hard to say if this would be feasible for us.

As for SOA Suite, “We totally just don’t know.”

Come on Oracle, you’ve had a year to get this put together.  It’s pretty simple, there’s not all that many older and newer products.  I suspect they’re being vague so they can feel out how much $$ they can get out of people for the upgrade.  Hate to break it to you guys – the answer is $0.  We didn’t pay for OAS upgrades before this, we just paid you the generous 22% a year maintenance that got you your 51% profit margin this year. If you’re retiring OAS for BEA in all but name, we expect to get the equivalent functionality for our continued 22%.

Oracle has two (well, three) clear to-dos.

1.  Figure out what BEA product bundles give functionality equivalent to old OAS bundles

2.  Give those to support-paying customers

3.  Profit.  You’re making plenty without trying to upcharge customers.  Don’t try it.

Velocity 2009 – Death of a Web Server

The first workshop on Monday morning was called Death of a Web Server: A Crisis in Caching.  The presentation itself is downloadable from that link, so follow along!  I took a lot of notes though because much of this was coding and testing, not pure presentation.  (As with all these session writeups, the presenter or other attendees are welcome to chime in and correct me!)  I will italicize my thoughts to differentiate them from the presenter’s.

It was given by Richard Campbell from Strangeloop Networks, which makes a hardware device that sits in front of and accelerates .NET sites.

Richard started by outing himself as a Microsoft guy.   He asks, “Who’s developing on the Microsoft stack?”  Only one hand goes up out of the hundreds of people in the room.  “Well, this whole demo is in MS, so strap in.”  Grumbling begins to either side of me.  I think that in the end, the talk has takeaway points useful to anyone, not just .NET folks, but it is a little off-putting to many.

“Scaling is about operations and development working hand in hand.”   We’ll hear this same refrain later from other folks, especially Facebook and Flickr.  If only developers weren’t all dirty hippies… 🙂

He has a hardware setup with a batch of cute lil’ AOpen boxes.  He has a four server farm in a rolly suitcase.  He starts up a load test machine, a web server, and a database; all IIS7, Visual Studio 2008.

We start with a MS reference app, a car classifieds site.  When you jack up the data set to about 10k rows, the developer says “it works fine on my machine.”  However, once you deploy it, not so much.

He makes a load test using MS Visual Studio 2008.  Really?  Yep – you can record and play back.  That’s a nice “for free” feature.  And it’s pretty nice, not super basic; it can simulate browsers and connection speeds.  He likes to run two kinds of load tests, and neither should be short:

  • Step load for 3-4 hrs to test to failure
  • Soak test for 24 hrs to hunt for memory leaks

What does IIS have for built-in instrumentation?  Perfmon.  We also get the full perfmon experience, where every time he restarts the test he has to remove and re-add some metrics to get them to collect.  What metrics are the most important?  (There’s a command-line sketch for pulling these counters right after the list.)

  • Requests/sec (ASP.NET applications) – your main metric of how much you’re serving
  • Requests queued (ASP.NET)  – goes up when out of threads or garbage collecting
  • %processor time – to keep an eye on
  • #bytes in all heaps (.NET CLR memory) – also to keep an eye on
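
If you’d rather not babysit the perfmon GUI, Windows’ built-in typeperf will log those same counters from the command line – a quick sketch, with the counter paths written from memory, so double-check the names on your box (-si is the sample interval in seconds, -o the output CSV):

typeperf "\ASP.NET Applications(__Total__)\Requests/Sec" "\ASP.NET\Requests Queued" ^
  "\Processor(_Total)\% Processor Time" "\.NET CLR Memory(_Global_)\# Bytes in all Heaps" ^
  -si 5 -o counters.csv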

So we see pages served going down to 12/sec at 200 users in the step load, but the web server’s fine – the bottleneck is the db.  But “fix the db” is often not feasible.  We run ANTS to find the slow queries, and narrow it to one stored proc.  But we assume we can’t do anything about it.  So let’s look at caching.

You can cache in your code – he shows us, using _cachelockObject/HttpContext.Current.Cache.Get, a built-in .NET cache class.

Say you have a 5s initial load but then caching makes subsequent hits fast.  But multiple first hits contend with each other, so you have to add cache locking.  There are subtle ways to do that right vs. wrong.  A common best-practice pattern he shows is check, lock, check.

We run the load test again.  “If you do not see benefit of a change you make, TAKE THE CODE BACK OUT,” he notes.  Also, the harder part is the next steps, deciding how long to cache for, when to clear it.  And that’s hard and error-prone; content change based, time based…

Now we are able to get the app up to 700 users, 300 req/sec, and the web server CPU is almost pegged but not quite (prolly out of load test capacity).  Half second page response time.  Nice!  But it turns out that users don’t use this the way the load test does and they still say it’s slow.  What’s wrong?  We built code to the test.  Users are doing various things, not the one single (and easily cacheable) operation our test does.

You can take logs and run them through webtrace to generate sessions/scenarios.  But there’s not quite enough info in the logs to reproduce the hits.  You have to craft the requests more after that.

Now we make a load test with variety of different data (data driven load test w/parameter variation), running the same kinds of searches customers are.  Whoops, suddenly the web server cpu is low and we see steady queued requests.  200 req/sec.  Give it some time – caches build up for 45 mins, heap memory grows till it gets garbage collected.

As a side note, he says “We love Dell 1950s, and one of those should do 50-100 req per sec.”

How much memory “should” an app server consume for .NET?  Well, out of the gate, 4 GB RAM really = 3.3, then Windows and IIS want some…  In the end you’re left with less than 1 GB of usable heap on a 32-bit box.  Once you get to a certain level (about 800 MB), garbage collection panics.  You can set stuff to disposable in a crisis but that still generates problems when your cache suddenly flushes.

  • 64 bit OS w/4 GB yields 1.3 GB usable heap
  • 64 bit OS w/8 GB, app in 32-bit mode yields 4 GB usable heap (best case)

So now what?  Instrumentation; we need more visibility. He adds a Dictionary object to log how many times a given cache object gets used.  Just increment a counter on the key.  You can then log it, make a Web page to dump the dict on demand, etc.  These all affect performance however.

They had a problem with an app w/intermittent deadlocks, and turned on profiling – then there were no deadlocks because of the observer effect.  “Don’t turn it off!”  They altered the order of some things to change timing.

We run the instrumented version, and check stats to ensure that there’s no major change from the instrumentation itself.  Looking at the cache page – the app is caching a lot of content that’s not getting reused ever.  There are enough unique searches that they’re messing with the cache.  Looking into the logs and content items to determine why this is, there’s an advanced search that sets different price ranges etc.  You can do logic to try to exclude “uncachable” items from the cache.  This removes memory waste but doesn’t make the app any faster.

We try a new cache approach.  .NET caching has various options – duration and priority.  Short duration caching can be a good approach.  You get the majority of the benefit – even 30s of caching for something getting hit several times a second is nice.  So we switch from 90 minute to 30 second cache expiry to get better (more controlled) memory consumption.  This is with a “flat” time window – now, how about a sliding window that resets each time the content is hit?  Well, you get longer caching but then you get the “content changed” invalidation issue.

He asks a Microsoft code-stunned room about what stacks they do use instead of .NET, if there’s similar stuff there…  Speaking for ourselves, I know our programmers have custom implemented a cache like this in Java, and we also are looking at “front side” proxy caching.

Anyway, we still have our performance problem in the sample app.  Adding another Web server won’t help, as the bottleneck is still the db.  Often our fixes create new other problems (like caching vs memory).  And here we end – a little anticlimactically.

Class questions/comments:
What about multiserver caching?  So far this is read-only, and not synced across servers.  The default .NET cache is not all that smart.  MS is working on a new library called, ironically, “velocity” that looks a lot like memcached and will do cross-server caching.

What about read/write caching?  You can do asynchronous cache swapping for some things but it’s memory intensive.  Read-write caches are rarer – Oracle/Tangosol Coherence and Terracotta are the big boys there.

Root speed – at some point you also have to address the core query; if it takes 10 seconds, even caching can’t save you.  Prepopulating the cache can help, but you have to remember invalidations, cache-clearing events, etc.

Four step APM process:

  1. Diagnosis is most challenging part of performance optimization
  2. Use facts – instrument your application to know exactly what’s up
  3. Theorize probable cause then prove it
  4. Consider a variety of solutions

Peco has a bigger, more detailed twelve-step APM process he should post about here sometime.

Another side note, sticky sessions suck…  Try not to use them ever.

What tools do people use?

  • Hand written log replayers
  • Spirent avalanche
  • wcat (MS tool, free)

I note that we use LoadRunner and a custom log replayer.  Sounds like everyone has to make custom log replayers, which is stupid; we’ve been telling every one of our suppliers in all related fields to build one.  One guy records with a proxy then replays with ec2 instances and a tool called “siege” (by Joe Dog).  There’s more discussion on this point – everyone agrees we need someone to make this damn product.
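
For reference, siege can do a basic version of that log replay itself – a minimal sketch, assuming you’ve dumped one URL per line from your access logs into urls.txt; -c is concurrent users, -d a random delay of up to 1 second, -i picks URLs from the file at random, and -t runs it for 30 minutes:

siege -c 50 -d 1 -i -t 30M -f urls.txt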

“What about Ajax?”  Well, MS has a “fake” ajax that really does it all server side.  It makes for horrid performance.  Don’t use that.  Real ajax keeps the user entertained but the server does more work overall.

An ending quip repeating an earlier point – you should not be proud of 5 req/sec – 50-100 should be possible with a dynamic application.

And that’s the workshop.  A little microsofty but had some decent takeaways I thought.
