Author Archives: Ernest Mueller

About Ernest Mueller

Ernest is the VP of Engineering at the cloud and DevOps consulting firm Nextira in Austin, TX. More...

LASCON 2010: Mitigating Business Risks With Application Security

This talk was by Joe Jarzombek of the Department of Homeland Security.  Normally I wouldn’t go to a management-track session with a title like this; when I looked at the program, it was my third choice out of all three tracks.  But James gave me a heads up that he had talked with Joe at dinner the previous night and that he was engaging and knew his stuff, and since there were plenty of other NI’ers there to cover the other sessions, I took a chance, and I wasn’t disappointed!

From a pure “Web guy” standpoint it wasn’t super thrilling, but with my National Instruments hat on it was very interesting; we make hardware and software used to operate large hadron colliders and various other large-scale, important things where you would be very sad if they went awry, and by sad I mean “crushed to death.”

Joe runs the DHS National Cyber Security Division’s new Software Assurance Program.  It’s a government effort to get this damn software secure, because it’s pretty obvious that events on a 9/11 kind of scale are more and more achievable via computer compromise.

They’re attempting to leverage standards and, much like OWASP’s approach with the Web security “Top 10,” they are starting out by pushing on the Top 25 CWE (Common Weakness Enumeration) errors in software.  What about the rest?  Fix those first, then worry about the rest!

Movement towards cloud computing has opened up people’s eyes to trust issues.  The same issues are relevant to every piece of COTS software you get as part of your supply chain!  It requires a profound shift from physical to virtual security.

“We need a rating scheme!”  Like food labels, for software.  They’re thinking about it in conjunction with NIST and OWASP as a way to raise product assurance expectations.

He mentioned that other software areas like embedded and industrial control might have different views on the top 25 and they’re very interested in how to include those.

They’re publishing a bunch of pocket guides to try to make the process accessible.  There’s a focus on supply risk chain management, including services.

Side note – don’t disable compiler warnings!  Even the compiler guys are working with the sec guys.  If you disable compiler warnings you’re on the “willful disregard” side of due diligence.

You need to provide security engineering and risk-based analysis throughout the lifecycle (plan, design, build, deploy) – that generates more resilient software products/systems.

  • Plan – risk assessment
  • Design – security design review
  • Build – app security testing
  • Deploy – SW support, scanning, remediation

They’re trying to incorporate software assurance programs into higher education.

Like Matt, he mentioned the Rugged Software Manifesto.  Hearing this both from “OWASP guy” and “Homeland security guy” convinced me it was something that bore looking into.  I like the focus on “rugged” – it’s more than just being secure, and “security” can seem like an ephemeral concept to untrained developers.  “Rugged” nicely encompasses reliable, secure, resilient…  I like it.

To get started, you can do the software assurance self-assessment they provide on their Web site.

It was interesting, at times it seemed like Government Program Bureaucratese but then he’d pull out stuff like the CWE top 25 and the Rugged Software Manifesto – they really seem to be trying to leverage “real” efforts and help use the pull of Homeland Security’s Cyber Security Division to spread them more widely.

Filed under Conferences, Security

LASCON 2010: Why ha.ckers.org Doesn’t Get Hacked

The first LASCON session I went to was Why ha.ckers.org Doesn’t Get Hacked by James Flom (who, along with rsnake, runs ha.ckers.org).  By its nature, the site gets something like 500-1000 hack attempts a week, but they’ve kept it secure for six years now.

From the network perspective, they use dual firewalls running OpenBSD’s open source pf, which does Cisco-style traffic inspection.  Systems inside have no egress, and they have user traffic and admin traffic segmented onto different firewall sets and switches.
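As a rough sketch (interface and ports assumed; certainly not their actual ruleset), “no egress” in pf terms is just the absence of any pass out rule under a default deny:

```
# pf.conf sketch: default deny both directions; inbound web traffic allowed,
# but there is deliberately no "pass out" rule, so a compromised host
# inside can't phone home
ext_if = "em0"
block all
pass in on $ext_if proto tcp from any to any port { 80, 443 } keep state
```

Since pf is stateful, replies to those inbound connections still flow; only server-initiated outbound traffic is refused.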

On the systems, they use chroot jails mounted read only.  Old school!  Jails are virtualization on the cheap, and if combined with a read only filesystem, give you a single out of band point of update, and you can do upgrades with minimal downtime.  They monitor them from the parent host.
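For flavor, here’s what the read-only-jail idea might look like in later FreeBSD jail.conf syntax (paths and names invented; their actual setup predates this file format, but the concept is the same):

```
# /etc/jail.conf sketch: jail root nullfs-mounted read-only; updates happen
# out of band on the parent host, which also does the monitoring
www {
    path = "/jails/www";
    host.hostname = "www.example.test";
    mount = "/jails/templates/base /jails/www nullfs ro 0 0";
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
}
```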

Rsnake has done a whole separate presentation on how he’s secured his browser – the biggest attack vector is often “compromise the browser of an admin” and not direct attack on the asset.

They went with WordPress for their software – how do you secure that?  Obviously code security’s a nightmare there.  So they set up a defense-in-depth scheme: check source IP, cert, and user/pass auth at the firewall, then at the admin proxy check source IP, path, and htaccess user/pass, and finally do the app auth.
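A hedged sketch of what the admin-proxy layer of that scheme could look like in Apache 2.2-era config (the network, paths, and realm name are all made up for illustration):

```apache
# Only the admin network reaches /wp-admin, and it still has to pass
# htaccess-style basic auth before WordPress's own login is ever seen
<Location /wp-admin>
    Order deny,allow
    Deny from all
    Allow from 192.0.2.0/24
    AuthType Basic
    AuthName "Admin proxy"
    AuthUserFile /etc/httpd/conf/htpasswd
    Require valid-user
    Satisfy all
</Location>
```

The Satisfy all is the defense-in-depth part: a request must pass both the IP restriction and the basic auth, not either one.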

Other stuff they do:

  • Secure logging to OSSEC – pflogd, waf logs, os logs, apache logs, parent logs, it goes off host so it’s reasonably tamper-proof
  • On-host WAF – custom, more of a “Web IDS” really, which feeds back “naughty people” to the firewall for blocking
  • For Apache – have your content owned by a different user, in their case there’s not even a user in the jail that can write to the files.
  • Use file ACLs, too.
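A minimal sketch of the “web server can’t write its own content” idea (paths are throwaway; in their setup the files are also owned by a user that doesn’t even exist in the jail, and ACLs layer on top):

```shell
set -e
docroot=$(mktemp -d)
echo '<h1>static content</h1>' > "$docroot/index.html"
# world-readable, writable by no one; in production you'd additionally
# chown to a separate publishing user and, where setfacl exists, deny
# the web user explicitly, e.g.  setfacl -m u:www-data:r-- <file>
chmod 0444 "$docroot/index.html"
ls -l "$docroot/index.html"
```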

Use case – they found an Apache flaw, reported it, and as is too often the case, Apache didn’t care.  So they modded their pf to detect the premise of the attack and block it (not just the specific attack).  (Heard of slowloris?)
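Something in the spirit of blocking the premise rather than the signature can be expressed with pf’s stateful-tracking options (numbers and table name are illustrative, not their actual rule):

```
# pf.conf sketch: trip on the shape of the attack -- too many connections,
# opened too fast, from one source -- and shove the offender into a table
table <abusers> persist
block in quick from <abusers>
pass in on $ext_if proto tcp from any to any port 80 keep state \
    (max-src-conn 50, max-src-conn-rate 15/5, \
     overload <abusers> flush global)
```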

Their ISP has been an issue – as they’ve moved their ISPs have shut them down out of cluelessness sometimes (Time Warner Business Class FTL).

They are moving to relayd for load balancing and SSL.  The PCI rule about “stay encrypted all the way to the box” is dumb, because it would prevent them from doing useful security inspection at that layer.

A good talk, though sadly a lot of the direct takeaways would mean “go to FreeBSD,” which I would rather not do.  But a lot of the concepts port to other OSes and to pure virtualization/cloud scenarios.  And note how joining network security, OS security, and appsec gets you way more leverage than having “separate layers” where each layer only worries about itself.

And may I just say that I love how Apache can be run “read only” – sadly, most software, even other open source software like Tomcat, can’t be.  It all wants to write into its own config and running directories, which is a horrible design practice and a security risk.  If you’re writing software, remember that if it’s compromised and it can write to its own exes/config/etc., you’re owned.  Make your software run on a read-only FS (read/write in /tmp for scratch data is acceptable).  It’s the right thing to do.
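The target state can be sketched as an fstab fragment (device names and paths assumed): code and config on a read-only mount, with only a small scratch area writable:

```
# /etc/fstab sketch
/dev/ada0p2  /opt/app  ufs    ro                   0  2   # code + config: read-only
tmpfs        /tmp      tmpfs  rw,nosuid,mode=1777  0  0   # scratch space only
```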

Filed under Conferences, Security

LASCON 2010: Why Does Bad Software Happen To Good People?

First up at LASCON was the keynote by Matt Tesauro from Praetorian (and OWASP Foundation board member), speaking on “Why does bad software happen to good people?”  The problem in short is:

  • Software is everywhere, in everything
  • Software has problems
  • Why do we have these problems, and why can’t we create a secure software ecosystem?

The root causes boil down to:

  • People trust software a lot nowadays
  • Blame developers for problems
  • Security of software is hidden
  • Companies just CYA in their EULAs
  • Lack of market reward for secure software
  • First-mover advantage means taking time for security often isn’t done
  • Regulation can’t keep up

So the trick is to address visibility of application security, and in a manner that can take root despite the market pressures against it.  We have to break the “black box” cycle of trust and find ways to prevent problems rather than focusing on coping with the aftermath.

He made the point that the physical engineering disciplines figured out safety testing long ago, like the “slump test” for concrete.  We don’t have the equivalent kind of standards and pervasive testability for software safety.  How do we make software testable, inspectable, and transparent?

Efforts underway:

  • They got Craig Youngkins, a big Python guy, to start PythonSecurity.org, which has been successful as a developer-focused grassroots effort
  • The Rugged Software Manifesto at ruggedsoftware.org is similar to the Agile Manifesto and it advocates resilient (including secure) software at the ideological level.

I really liked this talk and a number of things resonated with me.  First of all, working for a test & measurement company that serves the “real engineering” disciplines, I often have noted that software engineering needs best practices taken from those disciplines.  If it happens for jumbo jets then it can happen for your shitty business application.  Don’t appeal to complexity as a reason software can’t be inspected.

Also, the Rugged Software Manifesto dovetails well with a lot of our internal discussion on reliability.  And having “rugged” combine reliability, security, and other related concepts in a way that appeals to grassroots developers is great.  “Quality initiatives” suck.  A “rugged manifesto” might just work.  It’s how agile kicked CMMI’s ass.

The points about how pervasive software is now were well taken, including the guy with the mechanical arms who died in a car crash – software fault?  We’ll never know.  As we get more and more information systems embedded with/in us, we have the real possibility of a “Ghost in the Shell” kind of world, where software security isn’t just about your credit card going missing but about your very real physical safety.

He threw in some other interesting tidbits that I noted down to look up later, including the ToorCon “Real Men Carry Pink Pagers” presentation about hacking the Girl Tech IM-Me toy into a weaponized attack tool, and some open source animated movie called Sintel.

It was a great start to the conference, raised some good questions for thought and I got a lot out of it.

Filed under Conferences, Security

LASCON 2010 Conference Report

LASCON 2010 was awesome.  It’s an Austin app security conference put on by the Austin OWASP chapter. Josh Sokol and James Wickett did a great job of putting the thing together; for a first time convention it was really well run and went very smoothly.  The place was just about full up, about 200 people.  I saw people I knew there from Austin Networking, the University of Texas, HomeAway, and more.  It was a great crowd, all sorts of really sharp people, both appsec pros and others.

And the swag was nice, got a good quality bugout bag and shirt, and the OWASP gear they were selling was high quality – no crappy black geek tshirts.

I wish I had more time to talk with the suppliers there; I did make a quick run in to talk to Fortify and Veracode.  Both now have SaaS offerings where you can buy in for upload scanning of your source (Fortify) or your binaries (Veracode) without having to spring for their big ass $100k software packages, which is great – if proper security is only the purview of billion dollar companies, then we’ll never be secure.

At the happy hour they brought in a mechanical bull!  We had some friends in from Cloudkick in SF and they asked me with some concern, “Do all conferences in Austin do this?”  Nope, first time I’ve seen it, but it was awesome!  After some of the free drinks, I was all about it.  They did something really clever with the drinks – two drink tickets free, but you could get more by going and talking to the vendors at their booths.  That’s a win-win!  No “fill out a grade school passport to get entered into a drawing” kind of crap.

Speaking of drawings, they had a lot of volunteers working hard to run the con, and they did a great job.

I took notes from the presentations I went to; they’re coming as separate posts.  I detected a couple of common threads I found very interesting.  The Rugged Software Manifesto was mentioned by speakers in multiple sessions, including by the Department of Homeland Security.  It’s clear that as software becomes more and more pervasive in our lives, health, safety, national security, and corporate livelihood are all coming to depend on solid, secure software, and frankly we’re not on the right track towards that happening.

Also, the need for closer cooperation between developers, appsec people, and traditional netsec people was a clear call to action.  This makes me think about the ongoing call for developer/ops collaboration from DevOps – truly, it’s a symptom of a larger need to find a better way for everyone to work together to generate these lovely computerized monstrosities we work on.

So check out my notes from the sessions – believe me, if it was boring I wouldn’t bother to write it down.

I hear the conference turned a profit and it was a big success from my point of view, so here’s hoping it’s even bigger and better in 2011!  Two days!  It’s calling to you!

Filed under Conferences, Security

A DevOps Manifesto

They were talking about a DevOps manifesto at DevOpsDays Hamburg and it got me to thinking, what’s wrong with the existing agile development manifesto?  Can’t we largely uptake that as a unifying guiding principle?

Go read the top level of the Agile Software Development Manifesto.  What does this look like, if you look at it from a systems point of view instead of a pure code developer point of view?  An attempt:

DevOps Manifesto

We are uncovering better ways of running
systems by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working systems over comprehensive documentation
Customer and developer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more.

That’s not very different.  It seems to me that some of the clauses we can accept without reservation.  As a systems guy I do get nervous about individuals and interactions over processes and tools (does that promote the cowboy mentality?) – but it’s not saying that automation and tooling are bad; in fact they’re necessary (look at agile software development practices, there are clearly processes and tools), just that the people involved should always be the primary concern.  IMO this top-level agile call to arms has nothing to do with dev or ops or biz; it’s a general template for collaboration.  And how “DevOps” is it to have a different rallying point for ops vs. dev?  Hint: not very.

Then you have the Twelve Principles of Agile Software. Still very high level, but here’s where I think we start having a lot of more unique concerns outside the existing list.  Let me take a shot:

Principles Behind the DevOps Manifesto

We follow these principles:

Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable functionality. (more general than “software”.)

Software functionality can only be realized by the
customer when it is delivered to them by sound systems.
Nonfunctional requirements are as important as
desired functionality to the user’s outcome. (New: why systems are important.)

Infrastructure is code, and should be developed
and managed as such. (New.)

Welcome changing requirements, even late in
development. Agile processes harness change for
the customer’s competitive advantage. (Identical.)

Deliver working functionality frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale. (software->functionality)

Business people, operations, and developers must work
together daily throughout the project. (Add operations.)

Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done. (Identical.)

The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation. (Identical.)

Working software successfully delivered by sound systems
is the primary measure of progress. (Add systems.)

Agile processes promote sustainable development.
The sponsors, developers, operations, and users should be able
to maintain a constant pace indefinitely.  (Add operations.)

Continuous attention to technical excellence
and good design enhances agility. (Identical.)

Simplicity–the art of maximizing the amount
of work not done–is essential. (Identical – KISS principle.)

The best architectures, requirements, and designs
emerge from self-organizing teams. (Identical.)

At regular intervals, the team reflects on how
to become more effective, then tunes and adjusts
its behavior accordingly. (Identical.)

That’s a minimalist set.
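As one concrete reading of the “infrastructure is code” principle above, here’s a minimal, hypothetical Puppet-style fragment (module and resource names invented) that treats a web server’s desired state as versionable source:

```puppet
# Declare desired state; the tool converges the machine to match it, and
# the manifest itself lives in version control like any other code.
package { 'apache2':
  ensure => installed,
}

file { '/etc/apache2/apache2.conf':
  source  => 'puppet:///modules/web/apache2.conf',
  require => Package['apache2'],
  notify  => Service['apache2'],
}

service { 'apache2':
  ensure => running,
  enable => true,
}
```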

Does this sound like it’s putting software first still?  Yes.  That’s desirable.  Systems are there to convey software functionality to a user, they have no value in and of themselves.  I say this as a systems guy.  However, I did change “software” to “functionality” in several places – using systems and stock software (open source, COTS, etc.) you can deliver valuable functionality to a user without your team writing lines of code.

Anyway, I like the notional approach of inserting ourselves into the existing agile process as opposed to some random external “devops manifesto” that ends up begging a lot of the questions the original agile manifesto answers already (like “What about the business?”).  I think one of the main points of DevOps is simply that “Hey, the concepts behind agile are sound, but y’all forgot to include us Ops folks in the collaboration – understandable because 10 years ago most apps were desktop software, but now that many (most?) are services, we’re an integral part of creating and delivering the app.”

Thoughts?

Filed under DevOps

Do You Need Ops?

Ok, so this is old but I hadn’t read it before.  InfoQ hosted a debate, What is the Role of an Operations Team in Software Development Today?  The premise: you don’t need ops any more!  DevOps means your developers can do ops.  Ta-da.

Well, besides separation of duties problems, this has a number of fundamental flaws with it.  The first is sheer amount of knowledge required and work to do.  One of the greatest difficulties in hiring Web Ops people is getting the wide generalist/specialist skill set that they need – the whole first chapter of Web Operations is Theo Schlossnagle talking about that.  The more skill sets you pack into one person, the less good they are at them.  A good developer needs a huge skill set, so does a good operations person.  If you add “app server administration” to a dev, they’re going to have to “forget” Spring or something to make room, in a virtual sense.  Sure, you can take a developer and teach them ops, it’s not totally foreign – but that’s because all these people come out of the same CS/MIS programs in the first place, duh. You can take a Flash developer and teach them embedded development too, but I think everyone understands what a fundamental retooling that is.

So is this just an idea from someone who has no idea what all Operations folks do?  Maybe.  I know I had one discussion inside our IT department with a development architect who, bridling at our concerns with a portal project, said “What do you people do anyway?  Why do we need your team?  You just move files around all day!”  It’s the classic “I don’t know what all that job entails, so it must be easy” syndrome. But our systems team has a huge amount of institutional knowledge around APM, security, management, etc – heck, we try to spread it into the dev teams as much as we can, but there’s a lot.  It’s similar to QA – sure, “developers can do their own testing” – but doing good load testing etc. is a large field of endeavor unto itself.  If all testing is left to devs, you don’t get good testing.  Doesn’t mean devs shouldn’t test, or write unit tests – they are a necessary but not sufficient part of the testing equation.

But you know, I would argue that from a certain point of view, maybe this is right.

Infrastructure = code, right?  And if you are far down the path of automation and system modeling, then you redefine Ops as just a branch of development.  One guy on the team knows SQL, another knows .NET, and another knows Apache config and Amazon AMI.  One tester knows how to do functional regression tests, another knows how to do load tests, and another knows how to do performance, security, and reliability testing.  Sure, from a certain point of view these are simply all different technical skills in one big bag of skills, so a systems engineer who configures WebLogic is just a developer that knows WebLogic as one of their tools.  And I think there’s a lot of truth to this really, and part of our DevOps implementation here has focused on mainstreaming ops into our same agile tracking tool, bug tracking, processes, etc. as our developers.

However, this misses another huge part of the equation – mixing reactive support and proactive work is a time-killer that causes context switching and thrash that degrades efficiency.

Even in our Web admin team, we separated out into a “systems engineering” group and a “production support” group.  The former worked on projects with developers writing new code, and the latter handled pages, requests, etc. around a running system.  It’s because the interrupt driven work from operations absolutely killed people’s ability to execute on projects.  There’s a great part in the O’Reilly book Time Management for System Administrators that prescribes swapping off ops/support with other admins to reduce that problem.

Many developers don’t understand a running system.  It’s been interesting being in R&D now at NI, where a lot of the development is desktop software driven – for a long time we ran these public facing Web demos where the R&D engineers would say, with a straight face, “You log into Windows, click to run this app, and then lock the screen.”  Even the idea of running as a service was weird hoodoo.

Anyway, in IT here the apps teams have split out as well!  There’s App Ops groups that offload CI and production issues from the “main” App Dev groups; then the systems engineers work more with the main app dev groups and the production support team works with the app ops groups.  And believe me, there’s enough work to go around.

Now, of course you need developers involved in the operational support of your apps.  That’s part of the value of DevOps – it’s not all “Ops needs to learn new stuff,” it’s also “Devs need to be involved in support.”  But in the end, those are huge areas where “do it all” is not meaningful.  Developers helping with production support is a necessary but not sufficient part of operations.

Filed under DevOps

Why SSL Sucks

How do you make your Web site secure?  “SSL!” is the immediate response.  It’s a shame that there are so many pain points surrounding using SSL.

  • It slows things down.  On ni.com we bought a hardware SSL accelerator, but in the cloud we can’t do that.
  • Cert management.  Everyone uses loads of VIPs, so you end up needing to get wildcard certs.  But then you have to share them – we’re a big company, and don’t want to give the main wildcard cert out to multiple teams.  And you can’t get extended validation (EV) on wildcard certs.
  • UI issues.  We just tried turning SSL on for our internal wiki.  But anytime there’s any non-https included element – warnings pop up.  If there’s a form field to go to search, which isn’t https, warnings pop up.  Hell, sometimes you follow a link to a non-https page, and warnings pop up.
  • CAs.  We have an internal NI CA and try to have some NI-signed certs, but of course you have to figure out how to get them into every browser in your enterprise.
  • It’s just absurdly complicated.  Putting a cert on Apache is pretty well-worn territory, but recently I was trying to set up encrypted replication for MySQL and OpenDS, and Jesus, doing anything other than the default self-signed cert is hell on earth.  “Oh, is that in the right format wallet?”
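As a small illustration of the format shuffle, here’s a throwaway self-signed cert run through a couple of common conversions (filenames arbitrary; assumes a stock openssl on the path):

```shell
set -e
cd "$(mktemp -d)"
# generate a throwaway key and self-signed cert in PEM
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=example.test" -keyout key.pem -out cert.pem
# ...and now the gauntlet: the same cert in DER for one tool,
# PKCS#12 for another, each with its own flags to get wrong
openssl x509 -in cert.pem -outform der -out cert.der
openssl pkcs12 -export -in cert.pem -inkey key.pem \
        -out cert.p12 -passout pass:changeit
ls -l cert.pem cert.der cert.p12
```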

The result is that SSL’s suckiness ends up driving behavior that degrades security.  People know to just “accept the exception” any time they hit a site that complains about an invalid cert.  We have decided to remove SSL from most of our internal wiki and just leave it on the login page to avoid all the UI issues.  We couldn’t secure our replication due to a combination of bugs (OpenDS secure replication works – until you restart any server, that is; then it’s broken permanently) and the hassle.

In general, there has been little to no usability work put into the cert/encryption area, and that is why so few people still use it.  PGPing your email is only for gearheads.  Hell, you have to transform key formats to use PuTTY to log into Amazon servers over SSH.  Stop the madness.

If the world really gave a crap about encryption, then e.g. your public key could be attached to your Facebook profile and people’s mail readers could pull that in automatically to validate signatures, for instance. “Key exchange” isn’t harder than the other kinds of more-convenient information exchange that happen all the time on the Net.   And you could take a standard cert in whatever format your CA gives it to you and feed it into any software in an easy and standard way and have it start encrypting its communication.

Me to world – make it happen!

Filed under Security

What’s a “DevOp?”

I ran across an interesting post by Dmitriy Samovskiy about the difference between a DevOp and a Sysadmin and it raised up some thoughts I’ve had about the classification of different kinds of sysadmin types and the confusion that the “Ops” part of “DevOps” sometimes causes.

I think that using “DevOp” as a job or role name isn’t a good idea, and that really what the term indicates is that there are two major classes of technical role:

  • Devs – people who work with code mostly
  • Ops – people who work with systems mostly

You could say a “DevOp” is someone who does some of both, but I think the preferred usage is that DevOps, like agile, is about a methodology of collaboration.  A given person having both skill sets or fulfilling both roles doesn’t require a special term; it’s like anyone else in IT wearing multiple hats.

Of course, inside each of these two areas is a wide variety of skills and specialized roles.  Many of the people talking about “DevOps” are in five-person Web shops, in which case “Ops” is an adequate descriptor of “all the infrastructure crap guy #5 does.”

But in larger shops, you start to realize how many different roles there are.  In the dev world, you get specialization, from UI developers to service developers to embedded developers to algorithm developers.  I’d tend to say that even Web design/development (HTML/CSS/JS) and QA are often considered part of the “dev side of the house.”  It’s the same in Ops.

Now, traditionally many “systems” teams, also known as “infrastructure” teams, have been divided up by technology silo only.  You have a list of teams of types UNIX, Windows, storage, network, database, security, etc.  This approach has its strengths but also has critical weaknesses – which is why ITIL, for example, has been urging people to reorganize around “services you are delivering” lines.

In the dev world, you don’t usually see tech silos like that.  “Here’s the C programmer department, the Java programmer department, the SQL programmer department…  Hand your specs to all those departments and hope you get a working app out of it!”  No, everyone knows intuitively that’s insane.  But largely we still do the same thing in traditional systems teams (“Ops” umbrella).

So historically, the first solution that emerged was a separate kind of group.  Here at NI, the first was a “Web Ops” group called the Web Admins, which was formed ten years ago when it became clear that running a successful Web site cannot be done by bringing together fractional effort from various tech silos.  The Web Admins work with the developers and the other systems teams – the systems teams do OS builds, networking, rack-and-jack, storage/data center, etc. and the Web Admins do the software (app servers, collab systems, search, content management, etc.), SaaS, load balancing, operational support, release management, etc.  Our Web Admin team ended up expanding very strongly into the application performance management and Web security areas because no one else was filling them.

In more dotcommey companies, you see the split between the “IT group” and the “Engineering” or “Operations” group that supports their products, as two entirely different beasts.

Anyway, the success of this team spawned others, so now there are several teams we call “App Admins” here at NI, that perform this same role with respect to sitting between the developers and the “system admins.”  To make it more complicated, even some of the apps (“Dev”) teams are also spawning “App Ops” teams that handle CI work and production issue escalation, freeing up the core dev teams for more large-scale projects.  Our dev teams are organized around line of business (ecommerce, community, support, etc.) so they find that helpful. (I’ll note that the interface between line of business organization and technology silo organization is not an easy one.)

Which of these teams are the “DevOps?”  None of them.  Naturally, the teams that are more in the middle feel the need for it more, which is why I as a previous manager of the Web Admins am the primary evangelist for DevOps in our organization.  The “App Admins” and the new “App Ops” teams work a lot more closely together on “operational” issues.

But this is where the term “Ops” has bad connotations – in my mind, “operations”, as closely related to “support”, is about the recurring activities around the runtime operation of our systems and apps.  In fact, we split the Web Admin team into two sub-teams – an “operations” team handling requests, monitoring, releases, and other interrupt driven activity, and a “systems” team that does systems engineering.  The interface between systems engineering and core dev teams is just as important as the interface around runtime, even more so I would say, and is where a lot of the agile development/agile infrastructure methodology bears the most fruit.  Our system engineering team is involved in projects alongside the developers from their initiation, and influence the overall design of the app/system (side note, I wish there was a word that captured “both app and system” well; when you say system people sometimes take that to mean both and sometimes to just mean the infrastructure).  And *that’s* DevOps.

Heck, our DBA team is split up even more – at one point they had a “production support” team, a “release” team, an “architecture” team, and a “projects” team.

But even on the back end systems teams, there are those that have more of a culture of collaboration – “DevOps” you might call it – and they are more of a pleasure to interface with, and then there’s those who are not, who focus on process over people, you might say.  I am down with the “DevOps” term just because it has the branding buzz around it, but I think it really is just a sexier way to say “Agile systems administration.”

On a related note, I’ve started to see job postings go by for “DevOps Engineers” and other such.  I think that’s OK to some degree, because it does differentiate the likely kind of operating environment of those jobs from all the noise posted as “UNIX Engineer III”, but if you are using “DevOps” as a job description you need to be pretty clear in your posting what you mean in terms of exact skills because of this confusion.  Do you mean you just want a jack of all trades who can write Java/C# code as well as do your sysadmin work because you’re cheap?  Or do you want a sysadmin who can script and automate stuff? Or do you want someone who will be embedded on project teams and understand their business requirements and help them to accomplish them?  Those are all different things that have different skill sets behind them.

What do you think?  It seems to me we don’t really have a good understanding of the taxonomy of the different kinds of roles within Ops, and thus that confuses the discussion of DevOps significantly.  Is it a name for, or a description of, or a prescription for, some specific sub-team?  Which is it for – production support, systems engineering, does IT count or is it just “product” support orgs for SaaS?

Filed under DevOps

Innotech Austin Coming Up Oct 28!

The main Austin technology conference, Innotech, is October 28th!  It’s a good opportunity to see who’s up to what in the Austin area.  And cloud is hot on the list of sessions.  We’re going – who else is?

Filed under Conferences

Austin Cloud Computing Users Group Meeting Sep 21

The next meeting of Austin’s cloud computing trailblazers is next Tuesday, Sep. 21.  Event details and signup are here.  Some gentlemen from Opscode will be talking about cloud security, and then we’ll have our usual unconference-style discussions.  If you haven’t already, join the group mailing list!  It’s free, you get fed, and you get to talk with other people actually working with cloud technologies.

Filed under Cloud