Check out this article by @victortrac on High Scalability on how we scaled our infrastructure at Bazaarvoice to serve a billion product reviews a day!
This last week at the Agile Austin DevOps SIG, our topic was simple – “A DevOps Thanksgiving.” We all shared what we’re thankful for from the DevOps world this year – things that have made our lives better.
Group members expressed their thanks for such diverse things as DevOps Weekly, rspec-puppet, The Phoenix Project, Vagrant, Docker, test-kitchen with serverspec and bats, provisioned IOPS in AWS, DevOps Cafe, The Ship Show, increasing cross-platform support in DevOps tools and thinking, DevOps tracks springing up at conferences like Agile 2013 and AppSec, DevOpsDays… It was a nice and refreshing discussion. Thanks to all the people who put in so much hard work to make these possible!
In retrospect we have a lot to be thankful for. Even though the techno-hipsters don’t even want to say the word “DevOps” any more, it’s a very real change bringing better things to our tools, products, and even lives. I’ve seen a lot of change in the teams I’ve worked with that have implemented it – fewer “all hands overnight releases,” less psychotic on-call, less inter-group hatefulness. DevOps has brought us all a lot of good things, and it’s just starting to take hold in the industry.
How about you? What DevOps thing were you thankful for this year? Add into the comments here, blog it up yourself, tweet it (I suggest #devopsthanksgiving as the hashtag)… Spread the thanks!
One of the interesting sessions at ReInvent was a fireside chat with Werner Vogels, where CEOs and CTOs of different companies and startups that use AWS talked about their applications and platforms and what they liked and wanted from AWS. It was a three-part series with different folks; I was able to attend the first one, but I’m guessing videos of the others are available online. It was an interesting session, giving the audience a window into the way C-level people think about problems and solutions…
First up: Eliot Horowitz, CTO of MongoDB…
Lots of people use Mongo to store things like user profiles for their applications. Mongo performance has gotten a lot better because of SSDs.
MongoDB recently raised $150 million and wants to build out a lot of tools to administer Mongo better.
Apparently being a MongoDB DBA is a really high-paying job these days!
User roles may be available in Mongo next year to add more security.
Werner and Eliot want to work together to bring out a hosted version of Mongo, like RDS.
Next up: Twilio’s Jeff Lawson.
Jeff is ex-Amazon.
Software people want building blocks, not some crazy monolithic thing, to solve a problem. Telecom had this issue, and that is why I started Twilio.
Everyone is agile! We don’t have answers up front, but we figure out these answers as we go.
Started with voice, then moved to SMS, followed by a global presence. Most of our customers didn’t want boundaries; they just wanted an API to communicate with their customers.
Werner: It’s hard to run an API business. Tell us more…
Lawson: It is really hard. APIs are kinda like webapps when it comes to scaling. REST helps a lot from this perspective. Multi-tenancy issues get amplified when you have an API business.
Twilio apparently deploys 20 times a day. AWS really helps with deployment because you can bring up brand-new environments that look exactly like prod and then tear them down when they aren’t needed.
When it comes to APIs, we write the documentation first and show it to our customers before actually implementing the API. Then iterate, iterate, iterate on the development.
Jeff’s ask: make it easier to get a VPC up and running.
Next up: Valentino of AdRoll (real-time bidding).
There’s a data collection pipe which ingests about 20 TB of data every day.
Latency is king: typically latency is between 50ms and 100ms. This is still a lot for us. I wish we had more transparency when it comes to latency, inside AWS and otherwise…
Why DynamoDB? We didn’t find anything simpler at the time, and it was nice to be able to scale something without having to worry about it. We had zero ops people at the time to work on scaling.
Read/write rates: 80k reads per second (not consistent), 40k writes per second.
Why Erlang? You’re a Python god.
I started working in Python with the Twisted framework, but I realized that Python didn’t fit our use case well; the Twisted system worked just as well, but it would have been complicated to manage and needed a few hacks.
Today it would be hard to pick between Erlang and Go…
I didn’t cover the day 1 keynote, but fortunately it can be found here. The day 2 keynote was a lot more technical and interesting though. Here are my notes from it:
First, Werner talked about how AWS plans its projects. Before any project is started, while teams are still in the brainstorming phase, a few key things are always done before any code is written:
- Meeting minutes
- Figure out the UX
“2 Pizza Teams”: small autonomous teams that have roadmap ownership with decoupled launch schedules.
Get the functionality into the hands of customers as soon as possible. It may be feature-limited, but it’s in their hands so they can give feedback as soon as possible. Iterate, iterate, iterate based on that feedback. This is different from the old guard, where everything is engineering-driven and unnecessarily complex.
Netflix is on stage, and we’re talking about the Netflix Cloud Prize and the enhancements to the different tools… they look pretty cool, and I’ll need to check them out. There are 14 Chaos Monkey “tests” to run now instead of just one.
Werner is back and is breaking down the different facets that AWS focuses on:
- Performance: measure everything; put performance data in log files that can be mined.
Ilya Sukhar, CEO of Parse, is on stage now (a platform for mobile apps).
- Parse Data: store data; it’s 5 lines of code instead of a bunch of code.
Parse started with 1 AWS instance.
They’ve grown from 0 to 180,000 apps.
There are 180,000 collections in MongoDB; he showed the differences between pre- and post-PIOPS performance.
IAM and IAM roles to set boundaries on who can access what.
How do you do this from a DB perspective?
Apparently you can have fine-grained access controls on DynamoDB instead of writing your own code.
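As a sketch of what that looks like, a fine-grained DynamoDB policy can pin each user to their own items via the `dynamodb:LeadingKeys` condition key. The table name, account ID, and identity variable below are placeholder assumptions for illustration:

```python
import json

# Sketch of a fine-grained access policy for DynamoDB. The table name
# ("UserData"), account ID, and identity variable are assumptions; the
# "dynamodb:LeadingKeys" condition restricts each caller to items whose
# partition key matches their own federated identity.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        }
    }]
}

print(json.dumps(policy, indent=2))
```

The appeal is that the per-user boundary lives in the policy, not in application code you have to write and audit yourself.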
Each data block is encrypted in redshift
Talking about how customers are using spot instances to save money.
The WeTransfer use case: they take care of transferring large files.
Airbnb on stage with Mike Curtis, VP of Engineering:
- 350k hosts around the world
- 4 million guests (Jan 2013)
- 9 million guests today
They use a host of AWS services:
- 1k EC2 instances
- millions of RDS rows
- 50 TB of photos in S3
All of this is managed by just a 5-person ops team.
That helps devote resources to the real problem.
Dropcam came on stage after that to talk about how they use the AWS platform. Nothing too crazy, but interestingly, more inbound video is sent to Dropcam than to YouTube!
The keynote ended with an Amazon Kinesis demo (and a deadmau5 announcement for the replay party). From the outside, Kinesis looks like a streaming API with different ways to process data on the backend. A prototype that streamed data from Twitter and performed analytics on it was shown to demonstrate the service. The major announcements:
- RDS for PostgreSQL
- New instance types: I2, for much better I/O performance
- DynamoDB: global secondary indexes!!
- Federation with SAML 2.0 for IAM
- Amazon RDS: cross-region read replicas!
- G2 instances for media- and video-intensive applications
- New C3 instances with the fastest processors: 2.8 GHz Intel E5 v2
- Amazon Kinesis: real-time processing, fully managed. It looks like this will help you solve scalability issues when you’re trying to build realtime streaming applications. It integrates with storage and processing services.
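The Kinesis model (records carry a partition key, the key is hashed to pick a shard, and consumers aggregate per shard) can be sketched with a toy stand-in. This is not the Kinesis API; the shard count and record shape are assumptions:

```python
import hashlib
from collections import Counter

# Toy stand-in for the Kinesis model: hash each record's partition key to
# pick a shard, then let a per-shard consumer aggregate the stream.
NUM_SHARDS = 4

def shard_for(partition_key: str) -> int:
    # Stable hash so the same key always lands on the same shard.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def process(records):
    """Fan records out to per-shard counters, like a consumer fleet would."""
    shards = [Counter() for _ in range(NUM_SHARDS)]
    for user, event in records:
        shards[shard_for(user)][event] += 1
    return shards

shards = process([("alice", "click"), ("bob", "view"), ("alice", "click")])
```

Because all of a key's records hash to the same shard, per-key ordering is preserved inside a shard, which is what makes per-user analytics tractable.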
In case you want to watch it, the day 2 keynote is here: http://www.youtube.com/watch?v=Waq8Y6s1Cjs
And also, the day 1 keynote: http://www.youtube.com/watch?v=8ISQbdZ7WWc
This was the first talk I went to at ReInvent, by @simon_elisha, and the room was packed. It was targeted at developers taking an app from inception to 10 million users. Following are the notes I took…
- “We will need a bigger box” is the first issue you hit when you start seeing traffic to an application. A single box is an anti-pattern because there’s no failover, etc. Move your DB off the web server; you could use RDS or something similar.
- SQL or NoSQL?
Not a binary decision; maybe use both? A blended approach can reduce technical debt. Maybe just start with SQL because it’s familiar and there are clear patterns for scalability. NoSQL is great for super-low-latency apps, metadata-driven data sets, fast lookups, and rapid data ingestion.
So for 100 users…
You can get by using Route 53, an ELB, and multiple web instances.
For 10,000 users…
- Use CloudFront to cache any static assets.
- Get your session state out of the web servers. Session state could be stored in DynamoDB because it’s just non-relational data.
- It also might be time for ElastiCache, which is just hosted Redis or Memcached.
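Pulling session state out of the web tier boils down to a small get/put interface keyed by session ID. The dict below is an in-memory stand-in for DynamoDB or ElastiCache, and the TTL value is an assumption:

```python
import time

# Sketch of session state moved out of the web tier. The dict is an
# in-memory stand-in; in production the same get/put interface would be
# backed by DynamoDB or ElastiCache. The TTL length is an assumption.
class SessionStore:
    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._items = {}  # session_id -> (expires_at, data)

    def put(self, session_id, data):
        self._items[session_id] = (time.time() + self.ttl, data)

    def get(self, session_id):
        entry = self._items.get(session_id)
        if entry is None:
            return None
        expires_at, data = entry
        if time.time() > expires_at:   # expired: evict lazily on read
            del self._items[session_id]
            return None
        return data

store = SessionStore(ttl_seconds=300)
store.put("abc123", {"user": "jane"})
```

Once every web server talks to the same store, any instance can serve any request, which is what lets the auto-scaled fleet grow and shrink freely.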
Run a minimum and maximum number of servers across multiple availability zones. AWS makes this really simple.
If you end up at the 500k-user mark, you probably really want:
- metrics and alarms
- automated builds and deploys
- centralized logging
Must-haves for log metrics to collect:
- host level metrics
- aggregate level metrics
- log analysis
- external site performance
Use a product for this, because there are plenty available, and you can focus on what you’re really trying to accomplish.
Create tools to automate things so you save your time. Some of the ones you can use: Elastic Beanstalk and AWS OpsWorks are more for developers, while CloudFormation and raw EC2 are more for ops. The key is to be able to repeat those deploys quickly. You’ll probably need Puppet or Chef to manage the actual EC2 instances.
At the million-user mark you probably need to redesign your app. Think about using a service-oriented architecture: loose coupling for the win instead of tight coupling. You can probably put a queue between two pieces.
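The queue-between-two-pieces idea can be sketched with Python's standard `queue` module standing in for SQS; the job shape and the thumbnail step are assumptions:

```python
import queue
import threading

# Loose-coupling sketch: a thread-safe queue stands in for SQS between two
# services. The producer (web tier) enqueues work and returns immediately;
# the worker drains the queue at its own pace. The job shape is an assumption.
jobs = queue.Queue()
results = []

def web_tier():
    for photo_id in ("p1", "p2", "p3"):
        jobs.put({"photo_id": photo_id})   # fire and forget

def worker():
    while True:
        job = jobs.get()
        if job is None:                    # sentinel: shut down
            break
        results.append(f"thumbnail:{job['photo_id']}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
web_tier()
jobs.put(None)
t.join()
```

Neither side knows about the other's deployment or uptime; the queue absorbs bursts, which is exactly what you want between a web tier and a slow background job.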
Key tip: don’t reinvent the wheel.
Example of what to do when you have a user uploading a picture to a site: use the Simple Workflow Service.
- Workers and deciders: they provide orchestration for your code.
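The workers-and-deciders split can be sketched as a toy loop (not the real SWF API): the decider looks at what has completed so far and schedules the next activity, while workers just execute. The step names for the photo-upload workflow are assumptions:

```python
# Toy version of the SWF decider pattern for the photo-upload example.
# The step names are assumptions; the point is that orchestration logic
# (what runs next) is separated from the workers that do the actual work.
STEPS = ["validate", "resize", "thumbnail", "publish"]

def decide(history):
    """Return the next activity to schedule, or None when the workflow is done."""
    for step in STEPS:
        if step not in history:
            return step
    return None

def run_workflow():
    history = []
    while (step := decide(history)) is not None:
        history.append(step)   # a worker would perform the real activity here
    return history
```

Because the decider only ever inspects history, it is stateless between calls, which is what makes the pattern resilient to worker crashes and restarts.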
When your data tier starts to break down (5-10 million users):
- Split it by function or purpose (federation). Gotcha: you will have issues with join queries.
- Sharding works well for one table with billions of rows. Gotcha: it’s operationally confusing to manage.
- Shift to NoSQL: sorta similar to federation. Gotcha: a crazy architecture change. Use DynamoDB.
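Hash-based sharding of one huge table can be sketched as a simple router; the shard DSNs below are placeholders:

```python
import zlib

# Sketch of horizontal sharding for one huge table: route each row by a
# stable hash of its key to one of N database shards. The DSNs are
# placeholders; the gotcha is that cross-shard joins now happen in app code.
SHARDS = [
    "postgres://db-shard-0/users",
    "postgres://db-shard-1/users",
    "postgres://db-shard-2/users",
]

def shard_for_user(user_id: str) -> str:
    # crc32 is stable across processes, unlike Python's salted built-in hash()
    return SHARDS[zlib.crc32(user_id.encode()) % len(SHARDS)]
```

Every lookup for a given user always hits the same shard, but anything that spans users (reports, joins) now has to fan out across all shards, which is the operational confusion the talk warns about.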
Last week I attended AWS ReInvent in Las Vegas. It was the largest conference I’ve been to, with 9,000 people and a crazy number of sessions. When I was trying to decide which sessions to go to, I realized I had multiple conflicts in every slot (a good problem to have). It was also one of the most fun conferences I’ve been to, and I’ll be back next year (a bit more prepared next time around).
I’ll post about the sessions I went to, but the following are my favorite highlights from the conference:
- Day 2 keynote with Werner Vogels: After a more marketing and “C” centric keynote on day 1, the day 2 keynote was tuned more to the large developer crowd in the audience, and I left inspired. Check out all my notes here.
- Expo Hall: Holy cow! I got tired after walking around just half of this hall. According to the booklet, there were over 170 sponsors, and it took a while to walk through and check out what everyone was doing. The expo hall was also packed the first couple of days, so I went on the 3rd day when things were a lot quieter (pro tip: if you want the best swag, go the first day), but it also gave me a chance to talk to folks at a leisurely pace! My two favorite highlights of the expo hall were Datadog (most enthusiastic, even on day 3) and Cloudability (who knew I was a customer of theirs even though I didn’t realize another team at Mentor used the product; I thought that was pretty awesome!).
- Crazy number of sessions: I’m glad these are all on YouTube now. I hear the slides are also going to be online in a bit. This will give me a way to catch up on the sessions that I missed out on.
- AWS Hands on labs: This was pretty cool! You could skip a session or two and do a hands on lab on an AWS technology. I spent some time doing a hands on learning AWS beanstalk, and it was totally worthwhile.
- Day 1 (Tuesday): I only got in on Tuesday, but next time I’ll need to register in time to be a part of the hackathon or GameDay. I talked to a bunch of folks who attended these, and they had a great time at both. The GameDay, targeted at DevOps folks, was pretty cool: you formed a team with a bunch of other attendees and had to build an application infrastructure that was resilient to any kind of breakage. Then you’d swap credentials with another team, and they would try to break your infrastructure; you can imagine how entertaining this would get!
- Meeting up with folks, and catching up with people I hadn’t seen in a while.
- VEGAS! It was good to not lose at the roulette tables this time around.
A lot of developer friends commented that the talks were light on the technical side, which I thought was true; the way I got more out of them was by talking to the product managers and customers at the end of each talk to dig into some of the more technical concepts. This is true of most conferences, but was especially true of this one.
Stay tuned for a bunch of post conference session updates!
Jason Chan (@chanjbs) is an Engineering Director of the Cloud Security team at Netflix.
Tell me about your current gig!
I work on the Cloud Security team at Netflix; we’re responsible for the security of the Netflix streaming service. We also work with some other teams on platform and mobile security.
What are the biggest threats/challenges you face there?
Protecting the personal data of our members, of course. We also have content we want to protect – on the client side via DRM, but mainly the pipeline through which we receive content from our studio partners. Also, due to the size of the infrastructure, its integrity – we don’t want to be a botnet or have things injected into our content that could harm our clients.
How does your team’s approach differ from other security teams out there?
We embody the corporate culture more, perhaps, than other security teams do. Our culture is a big differentiator between us and other companies, so it’s very important that the people we hire match the culture. Some folks are more comfortable with strong processes and policies and black-and-white decisions, but here we can’t just say no; we have to help the business get things done safely.
You build a security team and you have certain expertise on it. It’s up to the company how you use that expertise. They don’t necessarily know where all the risk is, so we have to provide objective guidance and then mutually come to the right decision of what to do in a given situation.
Tell us about how you foster your focus on creating tools over process mandates?
We start with recruiting, to find people who understand that policy and process isn’t the solution. Adrian [Cockcroft] says process is usually organizational scar tissue. Doing it with tools and automation makes it more objective and less threatening to people. Turning things into metrics makes it less of an argument. There’s a weird dynamic in the culture, a form of peer pressure, where everyone’s trying to do the right thing and no one wants to be the one to negatively impact that. As a result people are willing to say “yes we will” – like, you can opt out of Chaos Monkey, but people don’t, because they don’t want to be “that guy.”
We’re starting to look at availability in a much more refined way. It’s not just “how long were you down.” We’re establishing metrics over real impact – how many streams did we miss? How many start clicks went unfulfilled? We can then assign rough values to each operation (it’s not perfect, but it’s based on shared understanding) and then we can establish real impact and make tradeoffs. (It’s more story-point-ish than hard ROI.) But you can tell what you need to do now versus what can wait.
Your work – how much is reactive versus roadmapped tool development?
It’s probably 50/50 on our team. We have some big work going on now that’s complex and has been roadmapped for a while. We need to have bandwidth as things pop up though, so we can’t commit everyone 100%. We have a roadmap we’ve committed to that we need to build, and we keep some resource free so that we can use our agile board to manage it. I try to build the culture of “let’s solve a problem once,” and share knowledge, so when it recurs we can handle it faster/better. I feel like we can be pretty responsive with the agile model, our two week sprints and quarterly planning give us flexibility. We get more cross-training too, when we do the mid-sprint statuses and sprint meetings. We use our JIRA board to manage our work and it’s been very successful for us.
What’s it like working at Netflix?
It’s great, I love it. It’s different because you’re given freedom to do the right thing, use your expertise, and be responsible for your decisions. Each individual engineer gets to have a lot of impact on a pretty large company. You get to work on challenging problems and work with good colleagues.
How do you conduct collaboration within your team and with other teams?
Inside the team, we instituted weekly or biweekly “deep dive” lunch-and-learn presentations where you present what you’re working on to other team members. Cross-team collaboration is a challenge; we have so many tools internally that no one knows what they all are!
You are blazing trails with your approach – where do you think the rest of the security field is going?
I don’t know if our approach will catch on, but I’ve spent a lot of my last year recruiting, and I see that the professionalization of the industry in general is improving. It’s being taught in school, there’s greater awareness of it. It’s going to be seen as less black magic, “I must be a hacker in my basement first” kind of job.
Development skills are mandatory for security here, and I see a move away from pure operators toward people with CS degrees and developers, and an acceleration in innovation. We’ve filed three patents on the things we’ve built. Security isn’t a solved problem and there’s a lot left to be done!
We’re working right now on a distributed scanning system that’s very AWS friendly, code named Monterey. We hope to be open sourcing it next year. How do you inventory and assess an environment that’s always changing? It’s a very asynchronous problem. We thought about it for a while and we’re very happy with the result – it’s really not much code, once you think the problem through properly your solution can be elegant.