Category Archives: Cloud

Cloud computing and all its permutations.

Our First Cloud Product Released!

Hey all, I just wanted to take a moment to share with you that our first cloud-based product just went live!  LabVIEW Web UI Builder is National Instruments’ first SaaS application.  It’s actually free to use – go to ni.com, hit “Try It Now”, and all you have to do is make an account.  It’s a freemium model, so you can use it, save your code, run it, etc. all you want; we charge for the “Build & Deploy” functionality that lets you compile, download, and deploy the bundled app to an embedded device or whatnot.

Essentially it’s a Silverlight app (it can be installed out of browser on your box or just launched off the site) that lets you graphically program test & measurement, control, and simulation types of programs.  You can save your programs to the cloud or locally to your own machine.  The programs can interact via Web services with anything, but in our case it’s especially interesting when they interact with data acquisition devices.  There are some sample programs on that page that show what can be done, though those are definitely tuned to engineers…  We have apps internally that let you play Frogger and Duck Hunt, or do the usual Web mashup kinds of things calling the Google Maps APIs.  So feel free to try out some graphical programming!

Cool technology we used to do this:

And it’s 100% DevOps powered.  Our implementation team consists of developers and sysadmins, and we built the whole thing using an agile development methodology.  All our systems are created by model-driven automation from assets and definitions in source control.  We’ll post more about the specifics now that we’ve gotten version 1 done!  (Of course, the next product is just about ready too…)

Filed under Cloud, DevOps

LASCON 2010: Why The Cloud Is More Secure Than Your Existing Systems

Saving the best of LASCON 2010 for last, my final session was the one I gave!  It was on cloud security, and is called “Why The Cloud Is More Secure Than Your Existing Systems.”  A daring title, I know.

You can read the slides (sadly, the animations don’t come through so some bits may not make sense…).  In general my premise is that people who worry about cloud security need to compare it to what they can actually do themselves.  Mocking a cloud provider’s data center for not being ISO 27001 compliant or having a two hour outage only makes sense if YOUR data center IS compliant and if your IT systems’ uptime is actually higher than that.  Too much of the discussion is about the FUD and not the reality.  Security guys have this picture in their mind of a super whizbang secure system and judge the cloud against that, even though the real security in the actual organization they work at is much less.  I illustrate this with ways in which our cloud systems are beating our IT systems in terms of availability, DR, etc.

The cloud can give small to medium businesses – you know, the guys that form 99% of the business landscape – security features that heretofore were reserved for people with huge money and lots of staff.  Used to be, if you couldn’t pay $100k for Fortify, for instance, you just couldn’t do source code security scanning.  “Proper security” therefore has an entry fee of about $1M, which of course means it’s only for billion dollar companies.  But now, given the cloud providers’ features and new security-as-a-service offerings, more vigorous security is within reach of more people.  And that’s great – building on the messages in previous sessions from Matt’s keynote and Homeland Security’s talk, we need pervasive security for ALL, not just for the biggest.

There’s more great stuff in there, so go check it out.

Filed under Cloud, Conferences, Security

Cloud Security: a chicken in every pot and a DMZ for every service

There are a couple of military concepts that have bled into technology and in particular into IT security, a DMZ being one of them. A Demilitarized Zone (DMZ) is a concept where there is established control over what comes in and what goes out between two parties; in military terms, you establish a “line of control” by using a DMZ. In a human-based DMZ, controllers of the DMZ make ingress (incoming) and egress (outgoing) decisions based on an approved list – no one is allowed to pass unless they are on the list and have proper identification and approval.

In the technology world, the same thing is done based on traffic between computers. Decisions to allow or disallow the traffic can be made based on where the traffic came from (origination), where it is going (destination), or even dimensions of the traffic like size, length, or time of day. The basic idea is that all traffic is analyzed and is either allowed or disallowed based on defined rules, and just like in a military DMZ, there is a line of control where only approved traffic is allowed to ingress or egress. In many instances a DMZ will protect you from malicious activity like hackers and viruses, but it also protects you from configuration and developer errors and can help guarantee that your production systems are not talking to test or development tiers.

Let’s look at a basic web tiered architecture. A corporation that hosts its own website will more than likely have the following four components: incoming internet traffic, a web server, a database server, and an internal network. To create a DMZ or multiple DMZ instances to handle its web traffic, it would want to make sure that someone from the internet can only talk to the web server, that only the web server can talk to the database server, and that the internal network is inaccessible by the web server, the database server, and the internet traffic.

Using firewalls, you would need to set up at least the three firewalls below to adequately control the DMZ instances:

1. A firewall between the external internet and the web server
2. A firewall in front of the internal network
3. A firewall between your web servers and database server

Of course these firewalls need to be configured so that they allow (or disallow) only certain types of traffic – only traffic that meets certain rules based on its origination, destination, and dimensions will be allowed.
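
To make rule #3 concrete, here’s a minimal host-based iptables sketch as it might look on the database server itself (the 10.0.1.0/24 web-tier subnet is made up for illustration):

iptables -P INPUT DROP #Default-deny everything inbound
iptables -A INPUT -i lo -j ACCEPT #Allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT #Allow replies to connections this box opened
iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 3306 -j ACCEPT #Only the web tier may reach MySQL on 3306

The web server gets the mirror image: accept port 80 from anywhere, and nothing else in.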

Sounds great, right? The problem is that firewalls have become quite complicated and now sometimes aren’t even advertised as firewalls, but instead are billed as a network-device-that-does-anything-you-want-that-can-also-be-a-firewall-too. This is due in part to the hardware getting faster, IT budgets shrinking, and scope creep. The “firewall” now handles VPN, traffic acceleration, IDS duties, deep packet inspection, and making sure your employees aren’t watching YouTube videos when they should be working. All of those are great, but they make firewalls expensive, fragile, and difficult to configure.  And to have the firewall watching all ingress and egress points across your network, you usually have to buy several devices to scatter throughout your topology.

Another recurring problem is that most firewall analysis and implementation is done with an intent to secure the perimeter. Which makes sense, but it often stops there and doesn’t protect the interior parts of your network. Most IT security firms that do consulting and penetration tests don’t generally come through the “front door” – by that I mean, they don’t generally try to get in through the front-facing web servers, but instead they go through other channels such as dial-in, wireless, partner, third-party services, social engineering, FTP servers, or that demo system that was set up five years ago and still hasn’t been taken down – you know the one I am talking about. Once inside, if there are no well-defined DMZs, then it is pretty much game over because at that point there are no additional security controls. A DMZ will not fix all your problems, but it will provide an extra layer of protection against malicious activity. And like I mentioned earlier, it can also help prevent configuration errors crossing from dev to test to prod.

In short, a DMZ is a really good idea and should be implemented for every system that you have. The optimal DMZ would be a firewall in front of each service that applies rules to determine what traffic is allowed in and out. This, however, is expensive to set up and very rarely gets implemented. That was the old days, though; this is now, and the good news is that the cloud has an answer.

I am most familiar with Amazon Web Services, so here is an example of how to do this with security groups from an AWS perspective. The following code creates a web server group and a database server group and allows the web server to talk to the database on port 3306 only.

ec2-add-group web-server-sec-group -d "this is the group for web servers" #This creates the web server group with a description
ec2-add-group db-server-sec-group -d "this is the group for db server" #This creates the db server group with a description
ec2-authorize web-server-sec-group -P tcp -p 80 -s 0.0.0.0/0 #This allows external internet users to talk to the web servers on port 80 only
ec2-authorize db-server-sec-group --source-group web-server-sec-group --source-group-user AWS_ACCOUNT_NUMBER -P tcp -p 3306 #This allows only traffic from the web server group on port 3306 (mysql) to ingress

Under the above example the database server is in a DMZ, and only traffic from the web servers is allowed to ingress into it. Additionally, the web server is in a DMZ in that it is protected from the internet on all ports except for port 80. If you were to implement this for every role in your system, you would in effect be implementing a DMZ between each layer, which provides excellent security protection.
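
If you want to sanity-check what you ended up with, the same ec2-api-tools will dump the current rules back out (a quick sketch assuming the standard describe command in that toolset):

ec2-describe-group web-server-sec-group db-server-sec-group #Prints each group's ingress rules so you can eyeball the DMZ boundaries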

The cloud seems to get a bad rap in terms of security. But I would counter that in some ways the cloud is more secure since it lets you actually implement a DMZ for each service. Sure, they won’t do deep packet analysis or replace an intrusion detection system, but they will allow you to specifically define what ingresses and egresses are allowed on each instance.  We may not ever get a chicken in every pot, but with the cloud you can now put a DMZ on every service.

Filed under Cloud, Security

Austin Cloud Computing Users Group Meeting Sep 21

The next meeting of Austin’s cloud computing trailblazers is next Tuesday, Sep. 21.  Event details and signup are here.  Some gentlemen from Opscode will be talking about cloud security, and then we’ll have our usual unconference-style discussions.  If you haven’t already, join the group mailing list!  It’s free, you get fed, and you get to talk with other people actually working with cloud technologies.

Filed under Cloud

Austin Cloud Computing Users Group Meeting Tomorrow

The second meeting of the Austin Cloud Computing Users Group is tomorrow, Tuesday August 24, from 6 to 8 PM, hosted by Pervasive Software in north Austin.

Michael Cote of Redmonk will be talking about recent cloud trends.  Opsource is sponsoring food and drinks.  We’ll have some lightning talks or unconference sessions too, depending.

It’s a great group of folks, so if you are into cloud computing come by and share.  If you plan to attend, please RSVP on Eventbrite here.

Filed under Cloud

Austin Cloud Computing Users Group!

The first meeting of the Austin Cloud Computing Users Group just happened this Tuesday, and it was a good time!  This new effort is kindly hosted by Pervasive Software [map].  We had folks from all over attend – Pervasive (of course), NI (4 of us went), Dell, ServiceMesh, BazaarVoice, Redmonk, and Zenoss just to name a few.  There are a lot of heavy hitters here in Austin because our town is so lovely!

We basically just introduced ourselves (there were like 50 people there so that took a while) and talked about organization and what we wanted to do.

The next meeting is planned already; it will be at Pervasive from 6:00 to 8:00 PM on Tuesday, August 24.  Michael Coté of Redmonk will be speaking on cloud computing trends.  Meeting format will be a presentation followed by lightning talks and self-forming unconference sessions.  Companies will be buying food and drink for the group in return for a 5 minute “pimp yourself” slot.  Mmm, free dinner.

There is a Google group/mailing list you can join – austin-cug@googlegroups.com.  There’s already some good discussion underway, so join in, and come to the next meeting!

Filed under Cloud, DevOps

Give Me An API Or Give Me Death

Catchy phrase courtesy #meatcloud…   But it’s very true.  I am continuously surprised by the chasm between the “old generation” of software that jealously demands its priests stay inside the temple, and the “new generation” that lets you do things via API easily.  As we’ve been building up a new highly dynamic cloud-based system, we’ve been forced to strongly evaluate our toolset and toss out products with strong “functionality” that can’t be managed well in an automated infrastructure.

Let me say this.  If your product requires either a) manual GUI operations or b) a config file alteration and restart, it is not suitable for the new millennium.  That’s just a fact.

We needed an LDAP server to hold our auth information.  It’s been a while since I’ve done that, so of course OpenLDAP immediately came to mind.  So we tried it.  But what happens when you want to dynamically add a new replication slave?  Oh, you edit a bunch of config files and restart.  Well, sure, I’d like my auth system to be offline all the time, but…  So we tried OpenDS.  The most polished thing in the world?  No.  Does it have all the huge amount of weird OpenLDAP functionality I probably won’t use anyway?  No.  But it does have an administration interface that you can issue directives to and have them take hold in real time.  “Hey dude, start replicating with that new box over there, OK?”  “Sir, yes sir.”  “Outstanding.”  And since it’s Java, I can deploy it easily to targets in an automated fashion.  And even though the docs aren’t all up to date and sometimes you have to go through their interactive command line interface to do something – once you do it, the interface can be told to spit out the command-line version of that so you can automate it.  Sold!
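
For flavor, here’s roughly the kind of non-interactive command the tool spits back at you (a sketch from memory – treat the exact dsreplication flags as assumptions and check your version’s docs):

dsreplication enable --host1 ldap1.example.com --port1 4444 --replicationPort1 8989 --host2 ldap2.example.com --port2 4444 --replicationPort2 8989 --baseDN "dc=example,dc=com" --adminUID admin --adminPassword secret -X -n #Wire the new box into the replication topology – no edit-and-restart required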

The monitoring world is like this too.  Oh, we need an open source monitoring system?  Like everyone else, Nagios comes first to mind.  But then you try to manage a dynamic environment with it.  Again, their “solution” is to edit config files and restart parts of the system.  I don’t know about you, but my monitoring systems tend to be running a LOT of tests at any given time and hiccups in that make Baby Jesus (and frequently whoever is on call) cry.  So we start looking at other options.  “Well, you just come here in the UI and click to add!” the sales rep says proudly.  “Click,” goes the phone.  We end up looking at stuff like Zabbix, Zenoss, etc.  In fact, at least for the short term, we are using Cloudkick.  In terms of the depth of monitoring, it supports 1/100 of what most monitoring solutions do.  System stats mostly; there’s plugins for LDAP and mySQL but that’s about it, the rest is “here’s where you can plug in your own custom agent plugin…”  But, as my systems come up they get added to their interface automatically, tagged with my custom namespace.  And I’d rather have my systems IN a monitoring system that will give me 10 metrics than OUTSIDE a monitoring system that would give me 1000.

It’s also about agility.  We are trying to get these products to market way fast.  We don’t have time to become high priests of the “OpenLDAP way of doing things” or the “Nagios way of doing things.”  We want something that works upon install, that you can make a call to (ideally REST-based, though command line is acceptable in a pinch, and if there’s an iPhone app for it you get extra credit) in order to tell it what to do.  Each of these items is about 1/100 of everything that needs to go into a full working system, and so if I have to spend more than a week to get you working and integrate with you – it’s a dealbreaker.  You got away with that back when there weren’t other choices, but now in just about every sector there’s someone who’s figured out that ease of access and REST API for integration plus basic functionality is as valuable as loads of “function points” plus being hellishly crufty.

Heck, we ended up developing our own cloud management stuff because when we looked at the RightScales and whatnot of the world, they did a great job of managing the cloud providers’ direct APIs for you but didn’t then offer an API in return…  And that was a dealbreaker.  You can’t automate end to end if you come smacking up against a GUI.  (Since then, RightScale has put out their own API in beta.  Good work guys!)

More and more, people are seeing that they need and want the “API way.”  If you don’t provide that, then you are effectively obsolete.  If I can’t roll up a new system – either with your software or something your software needs to be looking at/managing – and have it join in with the overall system with a couple simple API commands, you’re doing it wrong.

Filed under Cloud, DevOps

Velocity 2010 – Performance Indicators In The Cloud

Common Sense Performance Indicators in the Cloud by Nick Gerner (SEOmoz)

SEOmoz has been  EC2/S3 based since 2008.  They scaled from 50 to 500 nodes.  Nick is a developer who wanted him some operational statistics!

Their architecture has many tiers – S3, memcache, app, lighttpd, ELB.  They needed to visualize it.

This will not be about waterfalls and DNS and stuff.  He’s going to talk specifically about system (Linux system) and app metrics.

/proc is the place to get all the stats.  Go “man proc” and understand it.

What 5 things does he watch?

  • Load average – like from top.  It combines a lot of things and is a good place to start but explains nothing.
  • CPU – useful when broken out by process, user vs system time.  It tells you who’s doing work, if the CPU is maxed, and if it’s blocked on IO.
  • Memory – useful when broken out by process.  Free, cached, and used.  Cached + free = available, and if you have spare memory, let the app or memcache or db cache use it.
  • Disk – read and write bytes/sec, utilization.  Basically is the disk busy, and who is using it and when?  Oh, and look at it per process too!
  • Network – read and write bytes/sec, and also the number of established connections.  1024 is often a magic limit.  Bandwidth costs money – keep it flat!  And watch SOA connections.
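
A quick way to eyeball most of those five by hand before you wire up any tooling (a minimal sketch – package names and log paths vary by distro):

cat /proc/loadavg #Load average: 1, 5, and 15 minute values plus running/total processes
top -b -n 1 | head -20 #CPU and memory per process, user vs system time
grep -E 'MemFree|^Cached' /proc/meminfo #Free vs cached memory; cached + free = available
iostat -x 3 #Per-device read/write per second and utilization
ss -s #Count of established connections (netstat -an works too)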

Perf Monitoring For Free

  1. data collection – collectd
  2. data storage – rrdtool
  3. dashboard management – drraw

They put those together into a dashboard.  They didn’t want to pay anyone or spend time managing it.  The dynamic nature of the cloud means stuff like Nagios has problems.

They’d install collectd agents all over the cluster.  New nodes get a generic config, and node names follow a convention according to role.

Then there’s a dedicated perf server with the collectd server, a Web server, and drraw.cgi.  It sits in a security group everyone can connect in to.
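
A minimal sketch of the generic agent-side config (hostname, server name, and process names are made up; a real config will load more plugins):

# /etc/collectd.conf on each node
Hostname   "web-app1-023"
LoadPlugin cpu
LoadPlugin memory
LoadPlugin disk
LoadPlugin interface
LoadPlugin processes
LoadPlugin network
<Plugin processes>
  Process "lighttpd"
  Process "memcached"
</Plugin>
<Plugin network>
  Server "perf01.example.com" "25826"
</Plugin>

On the perf server the network plugin gets a Listen line instead of Server, and the rrdtool plugin writes out the RRD files that drraw.cgi then graphs.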

Back up your performance data – it’s critical to have history.

CloudWatch gives you stuff – but not the insight you have when breaking out by process.  And Keynote/Gomez stuff is fine but doesn’t give you the (server side) nitty gritty.

More about the dashboard. Key requirements:

  • Summarize nodes and systems
  • Visualize data over time
  • Stack measurements per process and per node
  • Handle new nodes dynamically w/o config change

He showed their batch mode dashboard.  Just a row per node, a metric graph per column.  CPU broken out by process with load average superimposed on top.  You see things like “high load average but there’s CPU to spare.”  Then you realize that disk is your bottleneck in real workloads.  Switch instance types.

Memory broken out by process too.  Yay for kernel caching.

Disk chart in bytes and ops.  The steady state, spikes, and sustained spikes are all important.

Network – overlay the 95th percentile, because that’s how you get billed.

Web Server dashboard from an API server is a little different.

Add Web requests by app/request type.  app1, app2, 302, 500, 503…  You want to see requests per second by type.

mod_status gives you connections and how busy or idle the children are.
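
If you’re on Apache, the machine-readable version is trivial to scrape from cron or a collectd exec script (a sketch assuming mod_status is enabled at the default /server-status URL and a combined-format access log):

curl -s http://localhost/server-status?auto | grep -E 'BusyWorkers|IdleWorkers' #Busy vs idle children right now
awk '{print $9}' /var/log/apache2/access.log | sort | uniq -c | sort -rn #Request counts by HTTP status code (302, 500, 503...)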

System wide dashboard.  Each graph is a request type, then broken out by node.  And aggregate totals.

And you want median latency per request.  And any app specific stuff you want to know about.

So get the basic stats, over time, per node, per process.

Understand your baseline so you know what’s ‘really’ a spike.

Ad hoc tools – try ’em!

  • dstat -cdnml for system characteristics
  • iotop for per process disk IO
  • iostat -x 3 for detailed disk stats
  • netstat -tnp for per process TCP connection stats

His slides and other informative blog posts are at nickgerner.com.

A good bootstrap method… You may want to use more/better tools, but it’s a good point that you can certainly do this much for free with very basic tooling, so anything you pay for had best be better! I think the “per process” intuition is the best takeaway; a lot of otherwise fancy crap doesn’t do that.

But in the end I want more – baselines, alerting, etc.

Filed under Cloud, Conferences, DevOps

Velocity 2010 – Grendel

Protecting “Cloud” Secrets With Grendel by Sam Quigley (Square, Inc) and Coda Hale (Yammer, Inc.)

Everyone stores private data.  Passwords, credit cards, documents, etc.  But also personal conversations, personal histories, usage patterns – that’s all private too.  So you store private info – yes, you – so how do you protect it?  Firewalls and VPNs?  Passwords?  Bah.  They are useful against last decade’s attacks.

Application level attacks are the new hotness – see the OWASP Top 10.  What you want to do is encryption.  But that’s complex.  Veracode has analyzed a lot of apps, and crypto mistakes are the #1 problem they find.

What do we do?  Here’s some ideas.

Grendel

It is a secure document storage system.  It’s open, and it keeps things minimal and simple.  It does data storage, authentication, and access control using the OpenPGP message format and a RESTful interface; it’s in Java and uses a normal DB backend.

OpenPGP – mature, flexible.  It’s for confidentiality and integrity.  It uses asymmetric keys.  The keys are stored encrypted with passphrases.  The keys are used to encrypt documents to one or more recipients.

REST API – http native.  Why REST?  For all the reasons everyone uses REST.  Ubiquitous, well understood, simple, easily debugged (charles), free features.

Java 1.6 + RDBMS.  Java because it’s fast and stable and well understood.  Uses hibernate.  RDBMS because you already have one.

Grendel is simple.  One config file.  DB location and password and some c3p0 stuff.
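
As a guess, that config looks something like this – these are standard Hibernate/c3p0 property names rather than anything I’ve verified against the Grendel docs:

# database.properties
hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://localhost:3306/grendel
hibernate.connection.username=grendel
hibernate.connection.password=secret
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20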

java -jar grendel.jar schema -c database.properties

generates a schema.  Three tables: users, documents, and links.

java -jar grendel.jar server -c database.properties -p 8080

starts it.

The API has users, docs, links, and linked docs.  JSON based.

You can create a user, which makes a new key set behind the scenes.

You can store a document.  PUT /users/name/documents/docname with a basic auth header.  It decrypts the user’s keys, signs and encrypts the doc, and stores it.

GET /users/name/documents gets you a JSON list.  Or get the document and you get the document (duh).

Then you can link the document to another user to share it with them.
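
Pulled together as curl calls, the flow above looks roughly like this (the user-creation JSON fields and the link URL are my guesses from the talk, not checked against the Grendel docs):

curl -X POST -H "Content-Type: application/json" -d '{"id":"alice","password":"s3cret"}' http://localhost:8080/users/ #Create a user; her key set is generated behind the scenes
curl -X PUT -u alice:s3cret -H "Content-Type: text/plain" --data-binary @notes.txt http://localhost:8080/users/alice/documents/notes.txt #Sign, encrypt, and store a document
curl -u alice:s3cret http://localhost:8080/users/alice/documents #JSON list of alice's documents
curl -X PUT -u alice:s3cret http://localhost:8080/users/alice/documents/notes.txt/links/bob #Share (link) the document with bob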

So what’s the big deal?

Self defending data.  The data itself enforces the access control rules.  And business logic is enforced with math.

He didn’t even mention the brilliance of this for scenarios like subpoenas causing Amazon to give up your S3 data to people…

Authentication done right.  It’s hard to do it right.  Adaptive hashing.  A centralized service model.  Resistant to modern attacks.

It makes it “sudo for the Web.”  You can grant long-lived session cookies, and re-auth for privileged access.  Yeah, we do that in general, just not with encryption…  Like Amazon.com remembers you, but when it’s purchase time you have to reauth securely.

It also mitigates XSS/CSRF attacks, kinda.

This creates a privacy wall.  You the admin are locked out of the data.  Insider threat defeated.

In the future…

Support for sessions.  OAuth 2.0.  And spreading the idea in general!

How is this better than symmetric encryption with the user’s password, since you’re proxying it anyway?  Because then you can’t share.

I guess one downside is that you can’t see inside the docs to search, index, etc.

You could use client side certs instead of passwords right?  No.

Does it have support for password change?  Yes.

I personally am psyched about this – I think we have a product underway that could really benefit from using it.

Filed under Cloud, Conferences, DevOps, Security

Velocity 2010 – Dueling Cloud Management Suppliers

Two cloud systems management suppliers talk about their bidness!  My comments in italics.

Cloud Autoscaling in Enterprise Computing by George Reese (enStratus Networks LLC)

How the Top Social Games Scale on the Cloud by Michael Crandell (RightScale, Inc)

I am more familiar with RightScale, but just read Reese’s great Cloud Application Architectures book on the plane here.  Whose cuisine will reign supreme?

enStratus

Reese starts talking about “naive autoscaling” being a problem.  The cloud isn’t magic; you have to be careful.  He defines “enterprise” autoscaling as scaling that is cognizant of financial constraints and not this hippy VC-funded twitter type nonsense.

Reactive autoscaling is done when the system’s resource requirements exceed current capacity.  Proactive autoscaling is done in response to capacity planning – “run more during the day.”

Proactive requires planning.  And automation needs strict governors in place.

In our PIE autoscaling, we have built limits like that into the model – kinda like any connection pool.  Min, max, rate of increase, etc.

He says your controls shouldn’t be all “number of servers,” but be “budget” based.  Hmmm.  That’s ideal but is it too ideal?  And so what do you do, shut down all your servers if you get to the 28th of the month and you run out of cash?

CPU is not a scaling metric. Have better metrics tied to things that matter like TPS/response time.  Completely agree there; scaling just based on CPU/memory/disk is primitive in the extreme.

Efficiency is a key cloud metric.  Get your utilization high.

Here’s where I kinda disagree – it can often be penny wise and pound foolish.  In the name of “efficiency” I’ve seen people put a bunch of unrelated apps on one server and cause severe availability problems.  Screw utilization.  Or use a cloud provider that uses a different charging model – I forget which one it was, but we had a conf call with one cloud provider that only charged on CPU used, not “servers provisioned.”

Of course you don’t have to take it to an extreme, just roll down to your minimum safe redundancy number on a given tier when you can.

Security – well, you tend not to do some centralized management things (like add to Active Directory) in the cloud.  It makes user management hard.  Or just makes you use LDAP, like God intended.

Cloud bursting – scaling from on premise into the cloud.

Case study – a diaper company.  Had a loyalty program.  It exceeded capacity within an hour of launch.  Humans made a scaling decision to scale at the load balancing tier, and enStratus executed the auto-scale change.  They checked it was valid traffic and all first.

But is this too fiddly for many cases?  If you are working with a “larger than 5 boxes” kind of scale don’t you really want some more active automation?

RightScale

The RightScale blog is full of good info!

They run 1.2 million cloud servers!  They see things like 600k concurrent users, 100x scaling in 4 days, 15k instances, 1:2000 management ratio…

Now about gaming and social apps.  They power the top 10 Facebook apps.  They are an open management environment that lives atop the cloud suppliers’ APIs.

Games have a natural lifecycle where they start small, maybe take off, get big, eventually taper off.  It’s not a flat demand curve, so flat supply is ‘tarded.

During the early phase, game publishers need a cheap, fast solution that can scale.  They use Chef and other stuff in server templates for dynamic boot-time configuration.

Typically, game server side tech looks like normal Web stuff!  Apache+HAproxy LB, app servers, db cache (memcached), db (sharded mySQL master/slave pairs).  Plus search, queues, admin, logs.

Instance types – you start to see a lot of larger instances – large and extra large.  Is this because of legacy comfort issues?  Is it RAM needs?

CentOS5 dominates!  Generic images, configured at boot.  One company rebundles for faster autoscale.  Not much Ubuntu or Windows.  To be agile you need to do that realtime config.

A lot of the boxes are used for databases.  Web/app and load balancing significant too.  There’s a RightScale paper showing a 100k packets per second LB limit with Amazon.

People use autoscaling a lot, but mainly for web app tier.  Not LBs because the DNS changing is a pain.  And people don’t autoscale their DBs.

They claim a lot lower human need on average for management on RightScale vs using the APIs “or the consoles.”  That’s a big or.  One of our biggest gripes with RightScale is that they consume all those lovely cloud APIs and then just give you a GUI and not an API.  That’s lame.  It does a lot of good stuff but then it “terminates” the programmatic relationship. [Edit: Apparently they have a beta API now, added since we looked at them.]

He disagrees with Reese – the problem isn’t that there is too much autoscaling, it’s that it has never existed.  I tend to agree.  Dynamic elasticity is key to these kinds of business models.

If your whole DB fits into memcache, what is mySQL for?  Writes sometimes?  NoSQL sounds cool but in the meantime use memcache!!!

The cloud has enabled things to exist that wouldn’t have been able to before.  Higher agility, lower cost, improved performance with control, new levels of resiliency and automation, and full lifecycle support.

Filed under Cloud, Conferences, DevOps