AWS re:Invent Keynote Day 2 Takeaways

TL;DR – performance improvements and two huge announcements: the Docker-based EC2 Container Service and the CEP-like AWS Lambda.

I was in a meeting for the first 45 minutes but I hear I didn’t miss much. Happy customer use cases.

The first big theme of this morning’s keynote is “Containers” – often just shorthand for “Docker.” I went to a previous event here in town where even large enterprises and government shops – the State of Texas, Microsoft, Dell, Red Hat – were all freaking out about Docker. Docker is similar to VMware or cloud in that it’s a new technology that requires its own new monitoring and management. (Heck, Eric, the CopperEgg founder, is now running a startup around Docker container management, StackEngine.)

  1. Keynote from pristine.io about how they implemented their service on AWS. Docker, the new low-overhead containerization technology, was a heavily cited part of the story (they used Flux7 as their expert consultants – they’re based here in Austin!).
  2. Keynote from Werner Vogels on the new “Amazon EC2 Container Service,” announced to cheers and applause. It lets you launch and terminate containers across sets of EC2 instances. Their PM did a demo where they took a big farm of r3 instances, deployed a Redis cluster and RabbitMQ across them, put front-end components on a farm of c3s, and then ran audio processing across all of it. If you’re new to this, it’s basically VMs within VMs but without noticeable overhead. (A rough sketch of what driving the service looks like from code is below.)
[Image: EC2 Container Service]

  3. Next they had Docker CEO Ben Golub. He mentioned that Docker is only 18 months old and that its huge success and ecosystem this early on is “surreal.”
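For flavor, here’s roughly what driving the new service looks like from the Python SDK – a minimal sketch with made-up names and sizes, not the keynote demo, and since the service is still in preview the details may well shift:

```python
import boto3

ecs = boto3.client("ecs")

# Describe a container to run; the family name, image, and sizes are made up
ecs.register_task_definition(
    family="redis-example",
    containerDefinitions=[{
        "name": "redis",
        "image": "redis:2.8",
        "cpu": 256,
        "memory": 512,
        "portMappings": [{"containerPort": 6379, "hostPort": 6379}],
    }],
)

# Ask the scheduler to place three copies of it somewhere on the cluster
ecs.run_task(cluster="default", taskDefinition="redis-example", count=3)
```

That’s the whole pitch: you describe the containers, and the service worries about which instances they land on.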

Next… Leapfrogging PaaS?

  1. Werner is back to announce AWS Lambda, available now in preview – an event-driven computing service for dynamic applications. No instances to run or manage; events come in and “cloud functions” run on them. Holy shit, this replaces a large number of servers running semi-trivial apps. 20 cents per million requests, plus some more complex pricing for seconds of execution – free for the first 3.2M seconds/1M requests.

    [Image: Amazon Lambda]

  2. Netflix chief product guy came on to show how they’re using lambda as a higher level abstraction and have eliminated a bunch of servers – no system monitoring/management, no inefficient polling, no gaps/opacity. They’re using it to encode video, run backups, run security and compliance checks against instances, and for operational monitoring and dashboards. Replacing procedural control systems with event-driven services.
  3. AWS core innovations… New c4 instance, Haswell based (crazy fast processor, 36 vCPUs). Diane Bryant, SVP/GM Data Center Group from Intel, came on to go into the CPU specifically. Larger and faster EBS volumes, up to 20,000 IOPS. Enhanced and consistent networking speeds.

And this has been your cloud update! Also see Ben Kepes in Forbes for a similar summary.

The container engine is cool – it’ll certainly remove a lot of instance gerrymandering and instance reservation pain if nothing else. But Lambda is the potential disruptor here. It’s taking the idea of “bring your own algorithm” from MapReduce and saying “hmmm, you can probably replace your trivial web app with just this” – it’s halfway between a PaaS and a SaaS, with none of the Beanstalk complexity, just “here, take this function and run it on stuff as it comes in.” If a library of common lambdas becomes available, a huge amount of computing done for trivial purposes becomes obsolete. Who hasn’t seen a Web service to “upload a file here, then zip it or something, then store it…”? OK, no servers needed any more. Very interesting.
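For the curious, the programming model really is just “here’s a function, call it when an event shows up.” The preview is Node.js-based, but purely to show the shape of it, here’s that “upload a file, zip it, store it” case sketched in Python (the bucket name and trigger wiring are made-up assumptions, not anything Amazon demoed):

```python
import gzip
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Imagine this fires on an S3 "object created" notification - no servers involved
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket="example-archive-bucket",   # made-up destination
                      Key=key + ".gz",
                      Body=gzip.compress(body))
```

You upload the function, wire it to an event source, and billing is per request and per second of execution – there’s no box to patch or scale.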


Filed under Cloud, Conferences

AWS re:Invent Keynote Day 1 Takeaways

Sadly I couldn’t attend this year, but heck that’s what the Internet is for.  Here’s the interesting bits from the AWS re:Invent Day 1 keynote (livestreamed here). Loads of interesting stuff.

  1. AWS is growing revenue >40% YOY, far outstripping other large IT companies – EC2 use grew 99% YOY and S3 usage 137%, they have 1M active customers now. (Microsoft cloud services report 128% YOY growth as well.)
  2. New product announcement for Aurora – a new commercial-grade database engine – fully MySQL compatible but with 5x the performance, available through Amazon RDS at 1/10 the cost of the commercial DB engines (starts at 29 cents an hour, ~$210/mo). Can do 6M inserts/minute and 30M selects/minute. Highly durable (11 9’s), crash recovery in seconds with no data loss. Nice!
  3. SDLC stuff!
    1. CodeDeploy (was internal tool called Apollo), a new code-deployment system that lets you do rolling updates, rollbacks, and tracks deployment health. This works for all languages and is free. They use it internally for 95 deploys/hour on their own stuff.
    2. In early 2015 will come some more software lifecycle management services – first is CodePipeline for continuous integration and deployment (also used internally)
    3. Second is CodeCommit, a managed code repository that can colocate with where you’re going to deploy and has no size limits on repos or files. These “integrate with” GitHub, Jenkins, Chef, etc., though it’s not clear how they don’t cannibalize them.
  4. Security stuff! Big push to be able to say “we easily surpass the security you can do on premise.”
    1. FISMA, ITAR, FIPS, FedRAMP, HIPAA, ISO 9001
    2. The current encryption approach is either “let Amazon manage keys” or use their CloudHSM hosted-key offering, both of which are still a pain. As a result they’re launching AWS Key Management Service, an HA service that manages keys, provides one-click encryption, and does transparent key rotation. (A tiny sketch of what the API shape should look like follows this list.)
    3. AWS Config is a new-gen agile CMDB with full visibility into all your AWS resources. You can query it and see relationships and show scope of a config change. Streams all config changes out to you.
    4. A new-gen service catalog called AWS Service Catalog available early 2015. Create and share product portfolios, let internal people launch them, tracking and compliance.
  5. Enterprise Cloud Adoption Patterns
    1. Often the first wave of enterprise cloud adoption is moving dev and test environments to AWS, for flexibility and spin-up/spin-down cost savings, plus brand new apps custom-written for the cloud
    2. Second wave is web sites and digital transformation (media, corp sites, ecomm) and analytics, since mass processing and sharing is cheap in the cloud – data warehouses (like Pfizer’s). And mobile app back ends – phone, tablet, GPS, more.
    3. Third wave is business critical applications.  Macmillan and Hoya run their SAP in AWS. Conde Nast runs HR and Legal there.
    4. New wave – you’re starting to see entire datacenter migration and consolidation as DCs come up for lease (Hess, Conde Nast, NewsCorp). SunCorp, Time Inc., GPT, and Nippon Express are moving “all in” to AWS – many ISVs as well. The CIA moved to AWS and now Intuit is doing so as well.
    5. Intuit moved their “TurboTax AnswerXchange” app there to deal with tax-time peaks last year, and the scales fell from their eyes when they did – 6x cost cut, setup in 1/5 of the time, faster development. They started doing more and realized the global datacenters, ease of integration with acquisitions, and dev recruiting benefits. They have 33 services on AWS now, and have moved mint.com there. They have decided to move everything else there now. Funny how once companies start looking at how much they accomplish instead of just the monthly cost, the “cloud is more expensive at scale” argument gets dropped like a flaming bag of poo.
  6. Hybrid cloud
    1. Various pieces – Directory Service (AD in the cloud), identity federation, Storage Gateway, System Center and vCenter integration – already exist to power mixed shops
    2. Johnson & Johnson went on for a while about their use of AWS. They are planning a 25,000-seat deployment of WorkSpaces (the virtual desktop offering, like Citrix).
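On the KMS item above: I haven’t gotten hands on it yet, but based on the announcement the API shape should be roughly this from the Python SDK – a hedged sketch with a made-up key alias, not a tested recipe:

```python
import boto3

kms = boto3.client("kms")

# "alias/example-app-key" is a made-up key alias for illustration
encrypted = kms.encrypt(KeyId="alias/example-app-key",
                        Plaintext=b"db_password=hunter2")
blob = encrypted["CiphertextBlob"]      # store this wherever you like

# decrypt doesn't need the key ID - the ciphertext blob identifies the key
decrypted = kms.decrypt(CiphertextBlob=blob)
assert decrypted["Plaintext"] == b"db_password=hunter2"
```

The appeal is that key storage and rotation become somebody else’s HA problem.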

Whew, that’s the quick notes version. Aurora is obviously of interest – a lot of the fretting I’ve seen over whether to use MySQL or RDS will get settled by this – it used to be “well, run the same thing yourself or have them do it…” and now it’s “have them run something insanely better.” But the SDLC tools are also interesting – they made noise about how these “work with!” Ansible, Jenkins, Git, etc., but that seems mildly disingenuous; without looking into it any further yet, they sound more like direct competition for them. But Config and Service Catalog could be great extensions – yay for simple composable services, not huge painful “BSM/ITMOM suites.”
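Since Aurora is wire-compatible with MySQL, “have them run something insanely better” should mostly be a connection-string change. A minimal sketch, assuming a hypothetical Aurora endpoint and the pymysql client (credentials and schema are placeholders):

```python
import pymysql

# endpoint, credentials, and schema are made-up placeholders
conn = pymysql.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="not-a-real-password",
    database="orders",
)
with conn.cursor() as cur:
    cur.execute("SELECT id, status FROM orders WHERE status = %s", ("shipped",))
    for row in cur.fetchall():
        print(row)
conn.close()
```

If it really is that drop-in, the only migration argument left is price and performance, which is exactly Amazon’s pitch.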

Feel free to share your thoughts on the announcements in the comments section!


Filed under Cloud, Conferences

Scrum for Operations: Just Add DevOps

OK, so it’s been a while since the last installment in this series. I had talked about how we’d brought Scrum to our operations team, which worked fine, and then added in the developers as well, which didn’t.  Our first attempt at dividing the whole product up into four integrated DevOps service teams collapsed quickly. Here’s how we got past that to do it again and succeed, with a fully integrated DevOps setup managing both proactive/project/feature work and reactive/support/tactical work in an effective way.

The first challenge was simply that I couldn’t manage four new Scrum teams totally on my own. We had gotten the ops team working well on Scrum, but the development team hadn’t been using it. We didn’t have any Scrum masters other than me, and were low on people designated as tech leads as well. So step one, we just mushed everyone into a 30+-person Scrum team, which sucked but was the least of the evils, and I immediately worked on mentoring people up as both Scrum masters and tech leads. I basically led by example and asked for volunteers who were interested in Scrum mastering and/or being a tech lead for each of the planned sub-teams.

This was interesting – people self-selected in ways I would not have predicted. Some of the willing were employees and others were contractors – fine, as far as I’m concerned. “From each according to his ability, to each according to his need.” I then had them start leading standups and taking ownership of sprints as much as possible, coaching them in the background. In that way, it only took a little while to re-form into those four teams. This gave some organic burn-in time as well, so that the ops folks got used to not being “the ops team,” and people stopped leaning on folks who were supposed to be on other sprint teams. As I empowered the new leaders this became self-correcting (“Stop working with that other guy on their stuff, we have things that need doing for our service!”).

The second and largest problem was managing the work.

Part 1: “Planned” Work

Product Management

We had practically zero product manager support at the beginning, so it was up to us to try to balance planned work from many sectors – feature requests from PM, tooling requests from Customer Service and Technical Support, security stuff from Security, random requests from Legal and Compliance, brainwaves from other products’ engineering managers. It quickly began to take a full person’s worth of time to handle, because if it wasn’t managed proactively it turned into reactively attending meetings full of people complaining. I went to the PM team and said “come on man, give us more PM support,” and once we got one, I worked with her on managing the whole package.

One of the chronic product manager problems in a SaaS environment is the “not my domain” syndrome.  PMs want to talk about features, but when it comes to balancing stakeholders like Security and internal operational projects, they wash their hands of it. At best you get a “Well… You guys get 50% of your time to figure out what to do with that stuff and then 50% of your bandwidth will be my features, OK?”

For the record, that’s not OK.  As a product manager, you need to be able to balance all the work against all the work. Maybe you don’t have an ops background, that’s fine – you probably didn’t have a <business domain here> background when you came to work either.  Learn.  A lot of the success of a SaaS product is in the balancing of features against stability/scalability work against compliance work… If you want to take the “I’m the CEO of the product” role, then you need to step up and own all of it, otherwise you’re just that product’s Director of Wishful Thinking.

Anyway, luckily our PM, even though she didn’t have experience doing that, was willing to learn and take it on. So we could reason about spending a month putting in a new cluster versus adding a new display feature – they all went into the same roadmap.

Work Management in JIRA

We managed that in the standard, simple way of a quarterly spreadsheet roadmap correlating to epics in an Agile rapid board in JIRA; we’d do stakeholder meetings and stack rank them and then move them into the team backlogs and flesh them out when they were ready for execution. (It was important to keep the clutter out of the backlogs till then – even if it’s supposed to be future stuff, the more items sitting in a team’s backlog, the worse their focus is.)

We kept each service as one JIRA project but combined them into rapid boards as necessary – a given team might own a couple (like the “Workbench and CMS Team” had those two and a smaller tooling one). This way when we transferred a piece of tech around we could just move the JIRA project as well, and incorporate it into the target team’s rapid board.

Portfolio Management

Some people say that the ideal world is each team owning one microservice. I don’t agree with this – we had a number of teams under other parts of the org that were only like 2 people because they owned some little service. This was difficult to sustain and transition; when things like Black Friday came up that required 24×7 support from each team for a week, it was brutal on them, and even worse, once development eased up on a service it just got orphaned.

If you don’t keep a service portfolio and tightly manage who is tasked with supporting each one, you are opening yourself up to a world of hurt. And that’s where we started. Departmentwide, we’d have teams work on something and then wander off, and if there was a problem they’d call on the dev, who was now off somewhere working on some other deadline. This worked terribly. I got the brunt of this earlier when I had joined the company as a release manager and asked “So, those core services, who owns those?” “Well… No one. I mean everyone!”

So for my teams, we put together a tracking list, used both as a service registry and for cross-training. It was a simple three-level outline, with major services, service components, and low-level elements. We had the whole team work together on filling out the list – since we were managing a 7-year-old legacy system we ended up with a list of 275 or so leaf items. Every one had to have an owning team (team, not individual), and unless you retired a service, there was no dropping it on the floor. You owned it until you retired or transitioned it, and I didn’t care if “the dev that worked on that moved to some other team.” Everything was included, from end-user-facing services like “the user portal” to internal services like our CI servers.

Team Management

This transitions into how we managed the teams. Teams were standard “two-pizza size” – a team of 5-7 people is optimal and we would combine services up till there was enough work for a team of that size.  This avoided the poor coverage of mini-teams for micro-services.

Knowledge Management

Then we also used the service registry as the “merit badge system.” We had a simple qualification procedure – to get qualified on a leaf element, you had to roll out code on it, and you could sign yourself off as qualified. To get your “merit badge” in a service component, you needed an existing subject matter expert (SME) to sign off that you knew what you were doing, and you needed to understand how the service was written, deployed, tested, and monitored. To become a SME in a service, the existing SMEs would hold a board of review. SMEs were then understood to be qualified to make architectural changes on their own initiative; those with merit badges were understood to be qualified to make code changes with nothing more than the usual code review and CI testing.

This was very important for us because we were starting in a place where people had been allowed to specialize too much. Only one person was the SME on any given service, and if the one person who understood the Workbench was out, or quit, or whatever, suddenly no one knew a whole lot about it. That’s a terrible velocity burden even without attrition, and it’s a clear and present danger to your service’s viability when there is. We started tracking the “merit badges” and had engineers set goals around how many they’d earn (or how many they’d review, for the more experienced). We used a lot of contract programmers, and I told the contractor manager that I wanted to use this to rate the expertise of people on our account and that I wanted to see the numbers rise on each person over time.

Part 2: “Unplanned” Work

Our team was only doing planned work 40% of the time, however.  Since we were integrated DevOps teams working on a service with thousands of paying end customers, and that service was custom-integrated with customer Web sites and import/export of customer data, there was a continuous load of work coming in from Customer Support and from our own operations (alerts, etc.). All this “flow” work was interrupt-driven, and urgent to one degree or another.

The usual techniques for handling this have a lot of problems. “Let’s make a separate sustaining team!” Well, that turns into a set of second-class developers, so you get worse devs on that team. And those devs are more utilized one week, less the next; when things are quiet they get seconded to other efforts by people who haven’t read the Principles of Product Development Flow and think having everyone highly utilized is valuable, and then the load ramps up and something gives… Plans emerge to make it a training ground for new devs, till you realize that even new devs don’t want to put up with that and just quit… I’ve seen this happen in several places. I am a firm believer in dogfooding – if you are writing a service, you need to handle its problems. If there are a lot of problems, fix it!!! If you are writing the next version of the service – well, doing that in isolation from the previous service dooms you to making the same errors, while adding some new ones as well. No sustaining teams, only evergreen teams.

So we had the rule that each integrated team of devs and ops handled their own dirty laundry. And at 60% of the workload, that was a lot of laundry.  How did we do it? Everyone was worried about how we could deliver decent feature velocity while overwhelmed by flow work.  Therefore…

Triage

In addition to the teams’ daily standups, we had a daily triage meeting attended by the engineering manager, team leads, PM, representatives from Support and Sales, and whoever else had a crisis item they felt needed to be handled on an expedited basis (not waiting on the next sprint, in other words). Each new intake would be reviewed. In the beginning we had to kick a lot of requests back for insufficient detail, but that corrected itself fast. We’d all agree on priority – “that’s affecting one customer but they’re one of our top customers, we’ll work it as P2” or the like.

For customer-reported issues, we had SLAs we committed to in contracts (1 day for P1, etc.). Ops issues could be super urgent – “P1, one of our services is down!” – or doable later on. So we created a separate Kanban board in JIRA for all these kinds of issues. Anything that really could wait or was large/long-term would get migrated into the Scrum backlogs, but anything where “we need someone to do this stat” went in here. It served the same purpose as an “expedite lane”; it was just a jumbo four-lane highway of an expedite lane because so much of the work was interrupt-driven.

But does this mean “give up on Scrum?” No. Without the sprint cadence it was hard to hit a release cadence, and also it was easy for engineers to just get lost in the soup and stop delivering at a good rate, frankly. So we still ran sprints, and here were the rules of engagement we provided the engineers.

Need More Work?

  1. Pull things for your team/service off the triage queue
  2. If there’s nothing in the triage queue, pull the next item from the sprint backlog
  3. If there’s nothing in the sprint backlog or the triage queue, pull something off the top of the product backlog. Or relax; either one.

Then for standups, we had a master Agile board that contained everything – all projects, the triage board, everything. So when you looked at a given engineer’s swimlane, you could see the sprint work they had, the flow work they had, and anything they were working on from someone else’s project (“Hey, why are you doing that?”). Again, via JIRA agile board composition that’s easy to do. Sometimes teams would try to do standups looking only at swimlanes containing “their” projects, and it always ended up with things falling into the gap, so each time that happened I reiterated the value of seeing everything a person is actually assigned, not just what you think/hope they are assigned because they’re supposedly attached exclusively to your one team.

At first, everyone fretted about the conflict between the flow work and the sprint work. “What if there’s a lot of support work today?!?” But as we went on sprint by sprint, it became evident that over the course of a two-week sprint, the amount of support and operations work evened out. Sprint velocity was regular despite the team working flow work as well. Having full-sized sprint teams and two-week iterations meant that even if one day there was a big production issue and whoever grabbed it was slammed, over the rest of the time and people it evened out. This shouldn’t be too surprising – it’s called “flow” for a reason; there are small ebbs and surges, but in general the amount over time at scale is the same. Was it as perfectly “efficient” as having some people working flow and others working sprints? No, there is definitely a percentage of overhead incurred. But maximum utilization of each person’s time, as I mentioned before, is a golden calf. Lean principles show us that we want the best overall outcome – and this worked.

Flow work was addressed quickly, and our customer ticket SLA attainment percentage whipped up over several quarters from a starting level of less than 50% to sit at 100%, meaning every single support ticket that came in was addressed within its advertised SLA. Once that number hit 100%, the support ticket time to live started to fall as well.

At the same time, sprint velocity for each of the four sprint teams went up over time – that is, just the story points they were delivering out of the feature backlog improved, in addition to the improvements in flow work. We’d modify the process based on engineer feedback and try alterations for a sprint to see how they panned out, but in general, by keeping the overall frame in place long enough that people got used to it, they became more and more productive. Flow work and planned work both improved at the same time, not at each other’s expense.

The Dynamic Duo

This scheme had two issues. One was that engineers were sometimes confused about what they should be working on – flow tickets with SLAs or their sprint tasks. Our Scrum masters were engineers on the teams too; they weren’t full-time PMs who could afford to manually manage everyone’s work. The second was that operational issues that came in and required sub-day response time couldn’t wait for triage, and ended up with either frantic searches for anyone who could help (which often became “everyone sitting in the team area”) or missed items.

I have always been inspired by Tom Limoncelli’s Time Management for System Administrators. He advocates an “interrupt shield,” where someone is designated as the person to handle walkups and crises. At NI I had instituted this as a process; at BV the previous ops team had had a “The Dude” role (complete with Jeff Bridges bobblehead) that served the same purpose. Thus the Dynamic Duo was born.

The teams each had one dev and one DevOps on call at a given time; we managed the schedule in PagerDuty.  Whoever was on call that week became “on the Dynamic Duo” during the days as well.  When we went into a sprint, they would not pull sprint tasks and would be dedicated to operational and urgent support issues. It was the Dynamic Duo because we needed someone with ops and someone with dev expertise on this – one without the other meant problems were not effectively solved. I even made a cool wiki page with Batman and Robin stuff all over it and we got Bat-phones painted red. I evangelized this inside the company.  “Don’t walk up and grab a dev with your question. Don’t chat your favorite engineer to get them to work on something. Come turn on the Bat-signal, or call the Bat-phone, and the Dynamic Duo will come help you.”

This was good because it blunted the sharp tip of very urgent requests.  The remaining flow work (2 people couldn’t handle the 60% of the load that was interrupt driven) was easier for the sprinting devs to pull in as they finished sprint tasks without worrying about timeliness – the real crises were handled by the Dynamic Duo. The Duo also ran the triage meeting and even if they weren’t working on all the triage work, they bird dogged it as a kind of scrum/kanban/project manager over the flow work. In the rare case there wasn’t anything for them to jump on, they could slack – time to read, learn, etc. both as compensation for the oncall and adrenaline rushes that week but also because it’s hard to fit time for that into sprints… And as we know from the Principles of Product Development Flow, running teams at too high of a utilization is actually counterproductive.

Conclusion

That’s the short form of how we did it – I wanted to do a lot more but I realized that since it’s been a year since I intended to write this, I’d better shake and bake and get it out and then I’m happy to follow up with stuff any of you are curious about!

This got us to a pretty happy place.  As time went on we tweaked the sprint process, the triage process, and the oncall/Duo process but for a set of teams of our size with our kind of workload it was close to an optimal solution. With largely the same team on the same product, the results of these process changes were:

  • Flow work improved as measured by customer ticket SLA attainment and other metrics
  • Sprint work velocity improved as measured by JIRA reports
  • Engineering satisfaction improved as measured by internal NPS surveys

The improvement in all these factors was not slight – it was 50% or more in every case.

Feel free to ask me about the parts of this you find interesting and I’ll try to expand on them. It wasn’t as simple as “add Agile” or “add DevOps”; it definitely took some custom wrangling to balance our specific SaaS service’s needs in the best manner.


Filed under Agile, DevOps

The Cloud Procurement Pecking Order

I was planning to go to this meeting here in town about “Preparing for the post-IaaS phase of cloud adoption” and it brought home to me how backwards people are generally thinking about cloud. So now you get Ernest’s Cloud Rant of the Day.

What people are doing is moving in order of comfort, basically.  “I’ll start with private cloud… Then maybe public IaaS… Eventually we’ll look at that other whizbang stuff.” But here’s what your decision path should be instead.

Cloud Procurement Flowchart

  1. Is it available as a SaaS solution?  If so, use that. You shouldn’t need to host servers or write code for many of your needs – everything from email to ERP is commoditized nowadays.
  2. Can I do it in a public PaaS?  Then use a public PaaS (Heroku/Beanstalk/Google App Engine/Azure), unless you have some real (not FUD) requirements to do it in house.
  3. Can I do it in a private PaaS? Then use Cloudfoundry or similar. Or do you really (for non-FUD reasons) need access to the hardware?
  4. Can I do it in public IaaS? Then use Amazon. Or do you really (for non-FUD reasons) need it “on premise” (probably not really on premise, but in some datacenter you’re leasing – which is different from being outsourced to the cloud how, exactly)?
  5. Can I do it in a private cloud? This is your final recourse before doing it the old-fashioned way – unless you have extremely unique hardware requirements, you probably can. Also, you can do hybrid cloud – basically private cloud plus public cloud (IaaS only, really). This gets you some of the IaaS benefits while complicating your architecture.
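The same pecking order as a trivial decision function – just a restatement of the list above with made-up constraint names, nothing more:

```python
def cloud_recommendation(needs):
    """Walk the pecking order top-down. 'needs' is the set of genuine
    (non-FUD) constraints you actually have."""
    if "custom_code" not in needs:
        return "SaaS"                    # someone already sells this; buy it
    if "run_in_house" not in needs:
        return "public PaaS"             # Heroku / Beanstalk / GAE / Azure
    if "hardware_access" not in needs:
        return "private PaaS"            # Cloud Foundry or similar
    if "on_premise" not in needs:
        return "public IaaS"             # Amazon
    return "private (or hybrid) cloud"   # last resort before racking it yourself

print(cloud_recommendation({"custom_code"}))   # -> public PaaS
print(cloud_recommendation({"custom_code", "run_in_house", "hardware_access"}))   # -> public IaaS
```

The point of writing it this way is that each step down the list should require you to name the specific constraint forcing you there.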

What About The Cost?

This seems to be inverted from how people are inching into the cloud. But the lower on this list you are, the less additional value you are getting from the solution (assuming the same price point). You should instead be reluctantly dragged into these lower levels – which require more effort and often (though not always) more expense. “But what about the cost,” you say, “the cloud gets more expensive than me running a couple servers?”

You need to keep in mind the real costs of your infrastructure when you do this – I see a lot of people spending a lot of work on private cloud that really shouldn’t be. If you simply compare “buying servers” with “cost per month in Amazon” it can seem like you need to go hybrid after a couple hundred thousand dollars appear on your bill. But:

1. Make sure you are taking into account your fully loaded cost (data center, power, cooling, etc.) of all the assets (servers, storage, network…) you are using to do this privately. Use the real numbers, not the “funny money” numbers – at a previous company we allocated network and other shared costs across the entire company, while “our IT budget” had to pay for servers. Don’t be a goon; you should consider what it’s costing your entire company. Storage especially is way cheaper in the cloud versus enterprise SANs.

2. Make sure you are taking into account the cost of the manpower to run it. That’s not just their salary (fully loaded with benefits/bonuses); it’s also the proportion of each layer of management going up that has to deal with their concerns (even if the director only has to spend 30% of his time messing with the data center team, and the VP 10%, and the CTO 5%, and the CEO 1% – that’s a lot of freaking money). It’s also the opportunity cost of having people (smart technical people) doing your plumbing instead of doing things to move your company forward. I would argue that instead of putting in the employee’s salary, you’d do better to put in your revenue per employee! Why? Because for that same money you could be having someone improving the product, making sales, etc. and making you additional revenue. If all you look at is “cost reduction” you are probably divorced enough from the business goals of your organization that you are not making good decisions. This isn’t to say you don’t need any of that manpower, but ideally, with more plumbing outsourced, you can turn their technical skills to something of more productive use.

3. Make sure you are taking into account the additional lag time and the cost of that time-to-market delay from DIYing. Some people couch this as only mattering for innovation – “well, if you’re a small, quick-moving, innovative firm or startup, then this velocity matters to you; if you’re a larger enterprise, with yearly budget cycles, not so much.” That’s not true. Assuming you are implementing all this stuff with some end goal in mind, you are burning value along with time the longer it takes you to deliver it – we like to call that cost of delay. Heck, just the plain cost of money over that period is significant – I’ve seen companies go through quite a set of gyrations to be able to bill 30 days earlier to get that benefit; if you can deliver projects a month earlier by leveraging reusable work (which is all SaaS/PaaS/IaaS are), then you accelerate your cashflow. If you have to wait 12 months for the IT group to get a private cloud working, you are effectively losing the benefit of your deliverable * 12 months. (A toy calculation pulling these terms together follows this list.)

4. Account for complexity.  The problem with “hybrid cloud,” in most implementations, is that it’s not seamless from on prem to public, and therefore your app architecture has to be doubly complicated.  In a previous position where I ran a large SaaS service, we were spread across AWS (virtual everything) and Rackspace (vserver, F5 LBs, etc.) and it was a total nightmare – we were trying to migrate all the way out to the cloud just so we could delete half of the cruft in all our code that touched the infrastructure – complexity that caused production issues (frequently) and slowed our rate of delivering new functionality. The KISS principle is wrathful when ignored.
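Here’s that toy calculation – every number in it is a made-up placeholder, not data from anywhere I’ve worked; the only point is which terms belong in the comparison:

```python
# Toy "run it ourselves" vs. cloud comparison. All figures are placeholders.
servers_and_storage = 250_000      # hardware purchase amortized per year
datacenter_overhead = 0.30         # power, cooling, space, shared network (point 1)
ops_headcount = 3
revenue_per_employee = 400_000     # opportunity cost of plumbing work, not salary (point 2)
months_delayed = 3                 # extra time to stand up the private option
value_per_month = 100_000          # what the project delivers once it's live (point 3)

diy_cost = (servers_and_storage * (1 + datacenter_overhead)
            + ops_headcount * revenue_per_employee
            + months_delayed * value_per_month)

cloud_bill = 40_000 * 12           # the only number people usually look at

print(f"DIY fully loaded: ${diy_cost:,.0f}/yr vs. cloud bill: ${cloud_bill:,.0f}/yr")
```

Argue with the individual numbers all you want – the mistake is leaving whole terms out of the equation, not getting a coefficient wrong.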

I’m not saying hybrid cloud, private cloud, etc. are never the answer – but I would say that on average they are usually not the right answer, and if you are using them as your default approach then it’s better than even money you’re being inefficient. Furthermore, using SaaS and PaaS requires less expertise than IaaS, which requires less than private cloud – people justify “starting with private” because you are “leveraging skill sets” or whatever – and then 6 months later you have a whole team still trying to bake off OpenStack vs. Eucalyptus when you could have had your app already running in a public PaaS. I’m not sure why I need to say out loud “delivering the most value with the least effort, time, and expenditure is good” – but apparently I do. Just because you *can* do something does not mean you *should* do it. You need to carefully shepherd your time to delivery and your costs, and not just let things float in a morass of IT because “these things take time…”


Filed under Cloud

AWS Dying! Rackspace Pulls Out Of Cloud! News At 11!

Boy, it’s been quite a week for the cloud-schadenfreude crowd. If you listen to the various news outlets, apparently Rackspace has given up on cloud and Amazon is in free fall. Here are some representative hack-job pieces:

More accurate are these:

Let’s look at what’s actually going on.

First, Rackspace.  I was on the Spiceworks forum yesterday and the news is definitely being interpreted as “Rackspace is getting out of cloud, don’t consider them any more.” Now, it is their own fault for bungling the messaging here, but if you actually go look at what they are doing, at its heart they are making this change:

Rackspace Cloud will be sold only with a support contract now.

Yes, that’s it. That’s the change. Now it’s “managed cloud!” Which is fine – a heck of a lot of software I buy has mandatory maintenance contracts nowadays – but this doesn’t mean “Rackspace is leaving the cloud business!” They just want to add their “Fanatical Support ™” to the value proposition and not compete purely on a bare-metal (bare-API?) IaaS “how much does a 2-CPU 4 GB server cost” basis.

Rackspace has to get back out in front of this messaging hard – it’s definitely made its way to the practitioner trenches as “they’re pulling out.” I mean, I have to say Rackspace’s strategy is pretty opaque to most folks, but this message misstep has graduated from “muddled and unclear” to “actively harmful.”

Now, Amazon.  The real story is:

Amazon Web Services only grew 38.39% last quarter.

For a large company that’s a pretty good growth rate – right? Is yours higher? The press likes to turn IaaS into a three-provider horse race. But so far – it’s not. Check out this recent (March 2014) Synergy Research graph.

[Image: Synergy Research graph of Q4 2013 cloud infrastructure services market share]

The fact of the matter is that Amazon is beating the holy hell out of everyone else in IaaS. It’s more neck and neck in PaaS, but sadly the entire PaaS market is still low (due to Joe Average IT Shop basically interpreting PaaS pitches as someone standing up and screaming “I’m a sorcerer!!!”).

IBM, HP, etc. don’t have credible offerings yet. I know they’re investing, I know they have roped some random companies that love them into doing it, but they are just not there yet. IBM is not a commodity company; they’re a “you have a billion-dollar contract with us, we’re going to build out whatever we feel like with that” kind of vendor.

Google, same thing. It’s cool, it’s well priced, it’s dev friendly – but at the big price cut announcement, we had a big get-together at Capital Factory here in town. I looked around at the crowd of 40 clouderati types and said “OK, so who is comfortable running production apps on Google cloud yet?” Result: zero. Google’s throwing money at it but as with most of Google’s new offerings, it’s hard to trust it’s not just going to dry up tomorrow and get cancelled because they are running after private spaceships or whatever now, and nothing makes them money like their ad business so “it’s revenue generating” won’t save it. And Google is so bad at enterprise support…

Microsoft Azure was really good. Better than it had a right to be!  I was very impressed with Azure in years 1 and 2. Execution was good (we used it for a SaaS service at National Instruments) and the vision was definitely “where the puck is going to be.” But post-Ozzie, it hasn’t exactly been shaking the sheets. At CloudAustin there was more Azure interest two years ago than there is now. They were going strong on dev friendliness and all, but trying to get into IaaS has been a distraction and they just aren’t keeping pace with Amazon’s rate of new features. Docker support, SSDs, new instances, vCenter integration, Dropbox competitor, desktop-as-a-service Citrix competitor…

Let me address the four big “why AWS is crashing and burning (despite being in an obvious position of market dominance)” points from the “Scorpion” article.

  1. AWS is not the low price provider.
    Eh. Not sure why this is relevant, and also not sure it’s true for what you are getting… It’s like saying “there are books cheaper than that book you just bought.” Well, sure there are, but do they have the information I want in them? See below for why not always having the lowest cent-per-minute price versus Google and Microsoft doesn’t really concern me.
  2. AWS is not the best product at anything – most of their features are mediocre knock offs of other products.
    This misses the point – their features are SIMPLER knockoffs of other products. That’s why it’s an accelerator. Dropbox and Salesforce and all the successful cloud entities have said “you know, some enterprise user left to their own devices is going to generate a list of 1000 requirements they don’t really need. Forget that. Let’s make the actual core functionality they need and leave off the rest so it’ll actually get used.” This is why they dominate the IaaS business. Many of their products are named to match. “SIMPLE email service.” “SIMPLE queue service.” “SIMPLE notification service.” This drives a new wave of architectural thought – instead of complicated services packed with stuff, what if instead I integrate simple, well-designed microservices? After doing a lot of cloud architecture work, those attributes are positives, not negatives.
  3. AWS is unbelievably lousy at support.
    I’m not sure I’d want to be in a race with Amazon, Microsoft, and Google to see who supports customers worse. I’m not sure I’ve ever been part of an enterprise happy with its Google support, and all the experiences I’ve had with Microsoft support have been some Brazil-esque “you can’t actually ask them questions, only some VP is a designated contact on the corporate contract…” Amazon is positioning themselves more like a hardware vendor: you don’t bother getting much support from them besides parts replacement; if you need more, you get it from a managed hosting provider or whatever MSP sits on top of them.
  4. Once you are at $200k / month of spend, it’s cheaper and much more effective to build your own infrastructure
    This is frequently untrue, and based on people not understanding the full costs of getting stuck in the infrastructure business. What’s your cost of delay? Average enterprise “wait for servers” time is about 6 weeks; assuming you’re not just using them for nothing, your ROI is delayed by that amount. And what about all the operation of those complex systems? You can’t just stick in the salary of the developers and sysadmins you’d need – stick in your revenue per employee instead, because that headcount could be doing something useful for your company instead of plumbing. Not to mention the cascading percentage of each layer of management’s time spent worrying about the plumbing and the plumbers instead of conducting the core business of the company. Cost of delay from lost agility and opportunity cost are never taken into account but definitely should be.

I know a lot of the old guard want the cloud to dry up and go away; it bothers their lovely datacenters. And some of the very new guard resent it because Amazon continues to be so successful – they keep up a rate of innovation that new players can’t disrupt. But this whole week of “the cloud is falling” news is complete BS, and won’t amount to much.


Filed under Cloud

Friday is System Administrator Appreciation Day

Don’t forget, this Friday 7/25 is the annual celebration of System Administrator Appreciation Day.  Start dropping hints to your coworkers about your treat of choice now!  “DevOps means you have to care!(tm)”


Filed under DevOps

Velocity 2014 After Action Report

[Image: An Average Velocity Session]

Well, it was my first Velocity as a vendor (I’ve been to every one, 2008 to present; you can read the previous reports here on the blog)! So that was different, and I split time between working the CopperEgg booth and going to sessions. As a result I’m not going to do the extensive session-by-session notes I’ve done in the past. Two other Agile Admins, James and Karthik, were there; I’m hoping they do some writeups of the sessions they attended too!

Being a vendor was interesting; though standing at the booth made my dogs bark after the day was over, it was great to be able to talk to so many people. There were a lot of monitoring providers at the show (CopperEgg (us), Compuware, New Relic, Datadog, many more). Pingdom was right across from us, with a slate of guys shipped in from Sweden, but they were generally grumpy – jet lag or their recent acquisition, perhaps. A new log management SaaS provider was there, logentries.com, and that was interesting – Sumo is the only real player left in the space since Loggly and SplunkStorm borked it up, and they’ve been getting a little… “enterprise-y”? By that I mean having sales reps call you 5x/day and wanting near-Splunk prices. So yay to the newcomers; competition is always good. Other than that, it was mostly the same slate of Velocity vendors as usual.

What’s New

Well, let’s get it out of the way – there wasn’t all that much new this year. Karthik complained to me that “last year, Velocity was my favorite conference ever, and this year I didn’t get much out of it.” Not every year hosts a bunch of new techniques, sadly, but I thought there were some gems in there. Here are the four major new trends taking up speech-space:

Docker docker docker containers containers containers. Learn it now, because in a year everything will be in containers – no, seriously. Largest splash in computing since Amazon AWS. The hype is a little overexcited at times, but there’s a lot of new development going on here. On the one hand, not everyone needs new-box spinup in 5s instead of 5m, and the efficiency gains are a tradeoff against security. But to be blunt, people stopped well short of exercising the elasticity and ephemerality of cloud/virtualization solutions, instead going for the more comfortable “let’s deploy a three-tier app manually like we did back in the day, but in the cloud” – so containers will be a disruption that pushes forward the concept of dynamic service orchestration and the like, which is good.

There is starting to be buzz around Internet of Things.  Mark Burgess (CFEngine, author of “In Search Of Certainty”) did a presentation on IoT and a more distributed model of monitoring and computation. Worth looking at, and it’s becoming more a part of mainstream computing (“engineering” tech and “IT” tech split off from each other 15 years ago for whatever reason and are just now joining forces again). Since we Agile Admins all had worked at National Instruments and had tried to get them onto the IoT bandwagon like 5 years ago, we grumped among each other about this.

There’s also strong interest in software defined networking (OpenDaylight, Cumulus). John Willis (@botchagalupe) waxed poetic on the topic and it fit into the general push towards making everything programmable.

There was strong and sustained interest (presentations, etc.) in STEM education and specifically in women in tech/getting more women into tech.

Keynotes

[Image: My Room At The Avatar]

Video of these should be publicly available so you can watch them.

Jeff Dean of Google did a very interesting talk on making large-scale services low latency that I recommend everyone view (video is at the link). Shared environments increase utilization but also congestion, which is exacerbated by large fan-out systems – if only 1% of calls to a given service take more than a second, but you have to touch 100 services to finish your call, then 63% of calls take more than a second. Traditional latency reduction uses techniques like differentiated service classes, breaking up large requests, and managing background activity (rate limiting, waiting till low load). Tolerating faults is a lot like tolerating variability – extra resources make your system reliable; do the same for variability, but on a much shorter timeframe. There are two ways to do that…

  1. Cross-Request Adaptation – examine recent behavior and make changes (load balance, scale) on a longer timescale; this tends to make the “next call” faster. Fine-grained dynamic partitioning relies on equal sizes and constant load, but if you break a machine’s data up into 10-100 partitions you can shed load more effectively. Selective replication – in their query system, for example, they make more copies of important docs. Use latency-induced probation via your load balancer: offload from a slow box to other boxes, keep a shadow stream going to the original, and return it to service when it’s better.
  2. Within-Request Adaptation – make the call faster within the single call! Basically this is a series of refinements on “send the request to two places.” First he modeled sending the request again to another server if it didn’t return in an expected amount of time. You can get cuter, like always sending to two destinations and having the one that starts working on it give a sideways “I’ve got it” to the other. His mathematical analysis says that you can cut tail latency dramatically for a very small increase in load, and not only that, but the response of a loaded cluster and an idle cluster become very similar (less dramatic spiking under load).
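A quick sanity check on that fanout math, plus a toy sketch of the basic hedged-request idea – this is my own illustration of the concept, not Google’s implementation; call_replica is a stand-in for a real backend call:

```python
import concurrent.futures as cf
import random
import time

# Fanout math from the talk: 1% slow calls per service, 100 services touched
print(1 - 0.99 ** 100)   # ~0.63 - most requests hit at least one slow call

def call_replica(name):
    """Stand-in for a backend RPC; occasionally has a slow outlier."""
    time.sleep(random.choice([0.02, 0.02, 0.02, 1.0]))
    return f"answer from {name}"

def hedged_call(primary, backup, hedge_after=0.05):
    """Send to the primary; if no answer within hedge_after seconds,
    also send to the backup and take whichever response arrives first."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(call_replica, primary)]
        done, _ = cf.wait(futures, timeout=hedge_after)
        if not done:                       # primary is being slow - hedge
            futures.append(pool.submit(call_replica, backup))
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        # (the executor still waits on the straggler at shutdown; real systems
        # cancel it or send the sideways "I've got it" described above)
        return next(iter(done)).result()

print(hedged_call("replica-a", "replica-b"))
```

The counterintuitive part is how little extra load the hedging adds when the hedge delay is set out near the latency tail, since only a few percent of requests ever trigger the backup call.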

And I did one!  Just a 5 minute spot since Copperegg was a platinum sponsor; I talked about applying a Lean approach to implementing monitoring. It was called A 5 Minute Checklist For Application Monitoring and slides/video are at the link.  I also wrote a white paper to expand on it that’s available for download here.

Sessions

[Image: California Sushi]

I went to a number of sessions that I enjoyed; here’s a quick breakdown of the ones I thought were winners.  I’ll try to find slides and link them where they exist. O’Reilly charges for the videos though.

Vladimir Vuksan’s workshop on Ganglia. People like the mass gathering of metrics. They did rake him over the coals a bit on the 15s time resolution and the relatively primitive RRDtool graphs. He had some interesting bits, like a “check that a value is the same everywhere” alert for consistency. He also summed up “why we monitor” well – MTTD, MTTR, trending, learning.

Theo Schlossnagle’s presentation on Understanding Slowness. He recommended a system map as step 1 – a high-level box-and-line diagram, but with low-level detail: all versions, locations, and service connections. He also talked about moving to histograms, but less sophisticated users find those hard to understand, so displaying quantiles can be a happy medium. He sees three different tool spaces: observational, synthetic, and manipulation.

There was a good presentation by Dan Slimmon (video of the same talk from Monitorama) on the math around false alarms, using the “sensitivity” and “specificity” terms from medicine. Here’s a quick reference on those and how you calculate a positive predictive value. Undetected outages are embarrassing, so the response is to narrow the monitoring thresholds, but this just generates more false alerts, aka “pagerrhea.” This segued into a discussion of using better means to detect deviation – hysteresis, moving thresholds like Holt-Winters, cross-correlation of metrics, Fourier transforms. You should alert on whether work is getting done – not on CPU or swap, but on HTTP response time and requests per second. He wants “something like Nagios but that separates detection from diagnosis.”
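Since the sensitivity/specificity math is the whole punchline, here’s the positive predictive value calculation as a tiny function – the example numbers are made up purely to show how rare events turn a “pretty accurate” check into mostly noise:

```python
def ppv(sensitivity, specificity, prevalence):
    """P(real problem | alert fired), straight from the definitions."""
    true_alerts = sensitivity * prevalence
    false_alerts = (1 - specificity) * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# A check that catches 99% of real outages and false-alarms only 1% of the
# time, on a condition that's genuinely bad in 0.1% of samples:
print(ppv(0.99, 0.99, 0.001))   # ~0.09 - roughly 9 out of 10 pages are false alarms
```

That’s the “pagerrhea” mechanism in one line of arithmetic: tightening thresholds buys a little sensitivity, costs a lot of specificity, and at low prevalence the specificity term dominates.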

I also really appreciated the LinkedIn talk on technical debt. They admitted that several years ago, they were trying to keep up in the social world and just ground to a halt because of accumulated technical debt. They had to stop and take a bunch of time to fix it before they could move forward. Important takeaways included:

  • Technical debt comes small decision by small decision
  • Don’t wait for version n+1, fix it now
  • “One in a million” problems happen a lot at web scale
  • Cancerous workarounds are no good
  • Broken window syndrome – if things are broken, people will tend to leave things broken
  • Zombie tech will eat you
  • Use our cool rest.li REST framework!
  • Employee engagement drains KPIs
  • Strategies – recognize debt choices and decisions
  • Use new eyes – consultants, interns – to identify the “bad parts”
  • Measure the right things
  • Technical debt you can see is only the tip of the iceberg
  • Make active decisions – otherwise, in Soviet Russia, Decision Makes You! (well, I added that last part)

The last really good one was about confirmation bias and monitoring. When dealing with metrics there are a lot of cognitive illusions – the anchoring effect (whatever it was recently before it deviated must have been right), the validity effect (a couple of people told me that, so it must be true), illusory correlation (looks like those happened around the same time), attitude polarization (round up the usual suspects). The way to combat this is with analysis. Rethink your data flow, validate your stats. Use anomaly detection like the open-sourced Skyline and Oculus to really detect correlations and deviations.

Though there weren’t as many breakthroughs this year, I appreciated the incremental uptick in wisdom about how to use what we have!

Social

Much of the benefit of conferences isn’t the sessions; it’s the great people you meet and share experiences with. Once you’ve been going a couple of years, you get to see old friends – though sadly none of our compatriots from Agile Admin alumni companies (National Instruments, Bazaarvoice, PowerReviews) were there, we did get to see most of the “usual suspects” we see at these shows – we had the usual “hang out at the Hyatt bar” fiesta with Andrew Clay Shafer, John Willis, Ben Rockwood, Cameron Haight and Jonah Kowall from Gartner, Gene Kim, and many more. Notable in his absence was Patrick Debois, who remained in Belgium; we all missed him.

If you went to Velocity this year, chime in below (especially if we met you there!).


Filed under Conferences, DevOps