Tag Archives: Security

LASCON 2010: Why ha.ckers.org Doesn’t Get Hacked

The first LASCON session I went to was Why ha.ckers.org Doesn’t Get Hacked by James Flom (who, along with RSnake, runs ha.ckers.org).  By its nature, the site draws something like 500-1000 hack attempts a week, but they’ve kept it secure for six years now.

From the network perspective, they use dual firewalls running OpenBSD’s open source pf, which does Cisco-style stateful traffic inspection.  Systems inside have no egress, and user traffic and admin traffic are segmented onto different firewall sets and switches.
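To make that concrete, here’s a hypothetical pf.conf fragment in the spirit of what they described – the interface names and addresses are invented, and this is my sketch of the idea, not their actual ruleset:

ext_if = "em0"                   # user-facing interface
adm_if = "em1"                   # separate admin interface and firewall path
webserver = "192.0.2.10"
mgmt_net = "10.10.0.0/24"

block all                        # default deny, both directions
pass in on $ext_if proto tcp to $webserver port { 80 443 }         # user traffic in
# note: no matching "pass out" from the web server, so it gets no egress
pass in on $adm_if proto tcp from $mgmt_net to $webserver port 22  # admin path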

On the systems, they use chroot jails mounted read only.  Old school!  Jails are virtualization on the cheap, and combined with a read-only filesystem they give you a single out-of-band point of update, so you can do upgrades with minimal downtime.  They monitor the jails from the parent host.
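A minimal sketch of the read-only jail idea, assuming FreeBSD jails with the jail root as a nullfs view of a template tree the parent host owns (all paths hypothetical):

# /etc/fstab entries for the jail: the root is mounted read only, and updates
# happen on the template from the parent host -- the single out-of-band point
/jails/template    /jails/web        nullfs   ro             0   0
tmpfs              /jails/web/tmp    tmpfs    rw,mode=1777   0   0

Since the jail can’t write to its own filesystem, a compromise has nowhere to persist, and an upgrade is just “update the template, restart the jail.”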

RSnake has done a whole separate presentation on how he’s secured his browser – the biggest attack vector is often “compromise the browser of an admin,” not a direct attack on the asset.

They went with WordPress for their software – so how do you secure that?  Obviously code security’s a nightmare there.  So they set up a defense-in-depth scheme: the firewall checks source IP, certificate, and user/pass auth; an admin proxy checks source IP, path, and htaccess user/pass; and finally the application does its own auth.
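The admin proxy layer might look something like this Apache 2.2-style fragment – a guess at the shape of one layer, with invented IPs and paths, not their actual config:

<Location /wp-admin>
    Order deny,allow
    Deny from all
    Allow from 10.10.0.0/24                  # admin source IPs only
    AuthType Basic
    AuthName "Admin"
    AuthUserFile /usr/local/etc/apache/htpasswd-admin
    Require valid-user
    Satisfy all            # must pass BOTH the IP check and the auth check
</Location>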

Other stuff they do:

  • Secure logging to OSSEC – pflogd, WAF logs, OS logs, Apache logs, parent host logs; it all goes off-host so it’s reasonably tamper-proof
  • On-host WAF – custom, more of a “Web IDS” really, which feeds back “naughty people” to the firewall for blocking
  • For Apache – have your content owned by a different user; in their case there’s not even a user in the jail that can write to the files.
  • Use file ACLs, too (see the sketch below).
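A rough sketch of those last two bullets, assuming a “deploy” user owning the content and Apache running as “www” (names and paths hypothetical):

chown -R deploy:deploy /usr/local/www/site    # content owner is not the Apache user
chmod -R o-rwx /usr/local/www/site            # and "other" gets nothing
# grant Apache read/traverse only via a POSIX.1e ACL -- never write
find /usr/local/www/site -exec setfacl -m u:www:rx {} +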

Use case – they found an Apache flaw, reported it, and as is too often the case, Apache didn’t care.  So they modded their pf setup to detect the premise of the attack and block it, not just the specific exploit.  (Heard of Slowloris?)
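pf can express that kind of “block the premise” rule directly.  A hypothetical fragment that blacklists sources holding open too many connections – the general shape of a Slowloris-style defense, not their actual rule:

table <abusive_hosts> persist
block in quick from <abusive_hosts>
pass in on $ext_if proto tcp to $webserver port 80 keep state \
    (max-src-conn 60, max-src-conn-rate 15/5, overload <abusive_hosts> flush global)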

Their ISP has been an issue – as they’ve moved their ISPs have shut them down out of cluelessness sometimes (Time Warner Business Class FTL).

They are moving to relayd for load balancing and SSL.  The PCI rule about “stay encrypted all the way to the box” is dumb, because it prevents them from doing useful security inspection at that layer.
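For reference, a relayd.conf for that lives somewhere in this neighborhood – this is my guess at the general shape based on OpenBSD’s examples, and the keywords have shifted across relayd versions (older ones say ssl where newer ones say tls):

table <webhosts> { 10.10.1.11 10.10.1.12 }
http protocol "www_tls" {
    match request header set "X-Forwarded-For" value "$REMOTE_ADDR"
}
relay "tls_terminator" {
    listen on $ext_addr port 443 tls
    protocol "www_tls"
    forward to <webhosts> port 8080 check http "/" code 200
}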

A good talk, though sadly a lot of the direct takeaways would mean “go to FreeBSD,” which I would rather not do.  But a lot of the concepts port to other OSes and to pure virtualization/cloud scenarios.  And note how joining network security, OS security, and appsec gets you way more leverage than having “separate layers” where each layer only worries about itself.

And may I just say that I love how Apache can be run “read only” – sadly, most software, even other open source software like Tomcat, can’t be.  It all wants to write into its own config and running directories, which is a horrible design practice and a security risk.  If you’re writing software, remember that if it’s compromised and it can write to its own exes/config/etc., you’re owned.  Make your software run on a read-only FS (with read/write in /tmp acceptable for scratch data).  It’s the right thing to do.

Leave a comment

Filed under Conferences, Security

LASCON 2010: Why Does Bad Software Happen To Good People?

First up at LASCON was the keynote by Matt Tesauro from Praetorian (and OWASP Foundation board member), speaking on “Why does bad software happen to good people?”  The problem in short is:

  • Software is everywhere, in everything
  • Software has problems
  • Why do we have these problems, and why can’t we create a secure software ecosystem?

The root causes boil down to:

  • People trust software a lot nowadays
  • Developers get blamed for the problems
  • Security of software is hidden
  • Companies just CYA in their EULAs
  • Lack of market reward for secure software
  • First mover advantage – taking time on security often loses out
  • Regulation can’t keep up

So the trick is to address visibility of application security, and in a manner that can take root despite the market pressures against it.  We have to break the “black box” cycle of trust and find ways to prevent problems rather than focusing on coping with the aftermath.

He made the point that the physical engineering disciplines figured out safety testing long ago, like the “slump test” for concrete.  We don’t have the equivalent kind of standards and pervasive testability for software safety.  How do we make software testable, inspectable, and transparent?

Efforts underway:

  • They got Craig Youngkins, a big Python guy, to start PythonSecurity.org, which has been successful as a developer-focused grassroots effort
  • The Rugged Software Manifesto at ruggedsoftware.org is similar to the Agile Manifesto and it advocates resilient (including secure) software at the ideological level.

I really liked this talk and a number of things resonated with me.  First of all, working for a test & measurement company that serves the “real engineering” disciplines, I often have noted that software engineering needs best practices taken from those disciplines.  If it happens for jumbo jets then it can happen for your shitty business application.  Don’t appeal to complexity as a reason software can’t be inspected.

Also, the Rugged Software Manifesto dovetails well with a lot of our internal discussion on reliability.  And having “rugged” combine reliability, security, and other related concepts while appealing to grassroots developers is great.  “Quality initiatives” suck.  A “rugged manifesto” might just work.  It’s how agile kicked CMMI’s ass.

The points about how pervasive software is now are well taken, including the guy with the mechanical arms who died in a car crash – software fault?  We’ll never know.  As we get more and more information systems embedded with/in us we have the real possibility of a “Ghost in the Shell” kind of world, where software security isn’t just about your credit card going missing but about your very real physical safety.

He threw in some other interesting tidbits that I noted down to look up later, including the ToorCon “Real Men Carry Pink Pagers” presentation about hacking the Girl Tech IM-Me toy into a weaponized attack tool, and some open source animated movie called Sintel.

It was a great start to the conference, raised some good questions for thought and I got a lot out of it.

Leave a comment

Filed under Conferences, Security

LASCON 2010 Conference Report

LASCON 2010 was awesome.  It’s an Austin app security conference put on by the Austin OWASP chapter. Josh Sokol and James Wickett did a great job of putting the thing together; for a first time convention it was really well run and went very smoothly.  The place was just about full up, about 200 people.  I saw people I knew there from Austin Networking, the University of Texas, HomeAway, and more.  It was a great crowd, all sorts of really sharp people, both appsec pros and others.

And the swag was nice, got a good quality bugout bag and shirt, and the OWASP gear they were selling was high quality – no crappy black geek tshirts.

I wish I had more time to talk with the suppliers there; I did make a quick run in to talk to Fortify and Veracode.  Both now have SaaS offerings where you can buy in for upload scanning of your source (Fortify) or your binaries (Veracode) without having to spring for their big ass $100k software packages, which is great – if proper security is only the purview of billion dollar companies, then we’ll never be secure.

At the happy hour they brought in a mechanical bull!  We had some friends in from Cloudkick in SF and they asked me with some concern, “Do all conferences in Austin do this?”  Nope, first time I’ve seen it, but it was awesome!  After some of the free drinks, I was all about it.  They did something really clever with the drinks – two drink tickets free, but you could get more by going and talking to the vendors at their booths.  That’s a win-win!  No “fill out a grade school passport to get entered into a drawing” kind of crap.

They also had a lot of volunteers working hard to run the con, and they did a great job.

I took notes from the presentations I went to; they’re coming as separate posts.  I detected a couple common threads I found very interesting.  The Rugged Software Manifesto was mentioned by speakers in multiple sessions, including by the Department of Homeland Security.  It’s clear that as software becomes more and more pervasive in our lives, health, safety, national security, and corporate livelihood all come to depend on solid, secure software, and frankly we’re not on track for that to happen.

Also, the need for closer cooperation between developers, appsec people, and traditional netsec people was a clear call to action.  This makes me think about the ongoing call for developer/ops collaboration from DevOps – truly, it’s a symptom of a larger need to find a better way for everyone to work together to generate these lovely computerized monstrosities we work on.

So check out my notes from the sessions – believe me, if it was boring I wouldn’t bother to write it down.

I hear the conference turned a profit and it was a big success from my point of view, so here’s hoping it’s even bigger and better in 2011!  Two days!  It’s calling to you!

Leave a comment

Filed under Conferences, Security

Cloud Security: a chicken in every pot and a DMZ for every service

A couple of military concepts have bled into technology, and in particular into IT security; the DMZ is one of them.  A Demilitarized Zone (DMZ) establishes control over what comes in and what goes out between two parties – in military terms, a DMZ lets you establish a “line of control.”  In a human DMZ, its controllers make ingress (incoming) and egress (outgoing) decisions based on an approved list: no one is allowed to pass unless they are on the list and have proper identification and approval.

In the technology world, the same thing is done with traffic between computers.  Decisions to allow or disallow the traffic can be made based on where it came from (origination), where it is going (destination), or dimensions of the traffic like size, length, or time of day.  The basic idea is that all traffic is analyzed and either allowed or disallowed based on determined rules, and just like in a military DMZ, there is a line of control where only approved traffic may ingress or egress.  In many instances a DMZ will protect you from malicious activity like hackers and viruses, but it also protects you from configuration and developer errors and can guarantee that your production systems are not talking to test or development tiers.

Let’s look at a basic tiered web architecture.  A corporation that hosts its own website will more than likely have the following four components: incoming internet traffic, a web server, a database server, and an internal network.  To create a DMZ (or multiple DMZ instances) for their web traffic, they would want to make sure that someone from the internet can talk only to the web server, that only the web server can talk to the database server, and that their internal network is inaccessible to the web server, the database server, and internet traffic.

Using firewalls, you would need to set up at least the three firewalls below to adequately control the DMZ instances:

1. A firewall between the external internet and the web server
2. A firewall in front of the internal network
3. A firewall between your web servers and database server

Of course these firewalls need to be set so that they allow (or disallow) only certain types of traffic. Only traffic that meets certain rules based on its origination, destination, and its dimensions will be allowed.

Sounds great, right? The problem is that firewalls have become quite complicated and now sometimes aren’t even advertised as firewalls, but instead are billed as a network-device-that-does-anything-you-want-that-can-also-be-a-firewall-too. This is due in part to the hardware getting faster, IT budgets shrinking and scope creep. The “firewall” now handles VPN, traffic acceleration, IDS duties, deep packet inspection and making sure your employees aren’t watching YouTube videos when they should be working. All of those are great, but it causes firewalls to be expensive, fragile and difficult to configure.  And to have the firewall watching all ingress and egress points across your network, you usually have to buy several devices to scatter throughout your topology.

Another recurring problem is that most firewall analysis and implementation is done with the intent of securing the perimeter.  That makes sense, but it often stops there and doesn’t protect the interior of your network.  IT security firms that do consulting and penetration tests don’t generally come through the “front door” – by that I mean they don’t usually try to get in through the front-facing web servers; instead they go through other channels such as dial-in, wireless, partners, third-party services, social engineering, FTP servers, or that demo system that was set up five years ago and never got taken down – you know the one I am talking about.  Once inside, if there are no well-defined DMZs, it is pretty much game over, because at that point there are no additional security controls.  A DMZ will not fix all your problems, but it provides an extra layer of protection against malicious activity.  And as I mentioned earlier, it can also help prevent configuration errors from crossing between dev, test, and prod.

In short, a DMZ is a really good idea and should be implemented for every system you have.  The optimal DMZ would be a firewall in front of each service that applies rules to determine what traffic is allowed in and out.  That, however, is expensive to set up and very rarely gets implemented.  But that was the old days; the good news is the cloud has an answer.

I am most familiar with Amazon Web Services, so here is an example of how to do this with security groups from an AWS perspective.  The following code creates a web server group and a database server group and allows the web server to talk to the database on port 3306 only.

ec2-add-group web-server-sec-group -d "this is the group for web servers" #This creates the web server group with a description
ec2-add-group db-server-sec-group -d "this is the group for db server" #This creates the db server group with a description
ec2-authorize web-server-sec-group -P tcp -p 80 -s 0.0.0.0/0 #This allows external internet users to talk to the web servers on port 80 only
ec2-authorize db-server-sec-group --source-group web-server-sec-group --source-group-user AWS_ACCOUNT_NUMBER -P tcp -p 3306 #This allows only traffic from the web server group on port 3306 (mysql) to ingress

Under the above example the database server is in a DMZ, and only traffic from the web servers is allowed to ingress into it.  Additionally, the web server is in a DMZ in that it is protected from the internet on all ports except port 80.  If you implemented this for every role in your system, you would in effect have a DMZ between each layer, which would provide excellent security protection.

The cloud seems to get a bad rap in terms of security.  But I would counter that in some ways the cloud is more secure, since it lets you actually implement a DMZ for each service.  Sure, security groups won’t do deep packet analysis or replace an intrusion detection system, but they will let you specifically define what ingresses and egresses are allowed on each instance.  We may never get a chicken in every pot, but with the cloud you can now put a DMZ on every service.

Leave a comment

Filed under Cloud, Security

Austin Cloud Computing Users Group Meeting Sep 21

The next meeting of Austin’s cloud computing trailblazers is next Tuesday, Sep. 21.  Event details and signup are here.  Some gentlemen from Opscode will be talking about cloud security, and then we’ll have our usual unconference-style discussions.  If you haven’t already, join the group mailing list!  It’s free, you get fed, and you get to talk with other people actually working with cloud technologies.

Leave a comment

Filed under Cloud

Application Security Conference in Austin, TX

I thought I would take this opportunity to invite the agile admin readers to LASCON.   LASCON (Lonestar Application Security Conference) is happening in Austin, TX on October 29th, 2010. The conference is sponsored by OWASP (the Open Web App Security Project) and is an entire day of quality content on web app security.  We’ll be there!

The speaker list is still in the works, but so far we have two presentations from this year’s BlackHat conference, several published authors, and the Director for Software Assurance in the National Cyber Security Division of the Department of Homeland Security, just to name a few – and that’s only the preliminary round of acceptances.

Do you remember a few years ago when there was a worm going around MySpace that infected user profile pages at the rate of over one million in 20 hours?  Yeah, the author of that worm is speaking at the conference.  How can you beat that?

I have been planning this conference for a few months and am pretty excited about it.  If you can make it to Austin on October 29th, we would love to meet you at LASCON.

1 Comment

Filed under Conferences, Security

DevOps and Security

I remember some complaints about DevOps from a couple folks (most notably Rational Survivability) saying “what about security!  And networking!  They’re excluded from DevOps!”  Well, I think that in the agile collaboration world, people are only excluded to the extent that they refuse to work with the agile paradigm.  Ops used to be “excluded” from agile, not because the devs hated them, but because the ops folks themselves didn’t willingly go collaborate with the devs and understand their process and work in that way.  As an ops person, it was hard to go through the process of letting go of my niche of expertise and my comfortable waterfall process, but once I got closer to the devs, understood what they did, and refactored my work to happen in an agile manner, I was as welcome as anyone to the collaborative party, and voila – DevOps.

Frankly, the security and network arenas are less incorporated into the agile team because they don’t understand how to be (or in many cases, don’t want to be).  I’ve done security work and work with a lot of InfoSec folks – we host the Austin OWASP chapter here at NI – and the average security person’s approach embodies most of what agile was created to remove from the development process.  As with any technical niche there’s a lot of elitism and authoritarianism that doesn’t mesh well with agile.

But this week, I saw a great presentation at the Austin OWASP chapter by Andre Gironda (aka “dre”) called Application Assessments Reloaded that covered a lot of ground, but part of it was the first coherent statement I’ve seen about what agile security would look like.  I especially like his term for the security person on the agile team – the “Security Buddy!”  Who can not like their security buddy?  They can hate the hell out of their “InfoSec Compliance Officer,” though.

Anyway, he has a bunch of controversial thoughts (he’s known for that) but the real breakthroughs are acknowledging the agile process, embedding a security “buddy” on the team, and leveraging existing unit test frameworks and QA behavior to perform security testing as well.  I think it’s a great presentation, go check it out!

1 Comment

Filed under DevOps, Security

Velocity 2010: Cloud Security: It Ain’t All Fluffy and Blue Sky Out There!

Cloud security, bugbear of the masses.  For my last workshop of Velocity Day 1 I went to a talk on that topic.  I read some good stuff on it in Cloud Application Architectures on the plane in and could stand some more.  I “minor” in security, being involved in OWASP and all, and if there’s one area full of more FUD right now than cloud computing, it is cloud security.  Let’s see if they can dispel confusion!  (I hope it’s not a fluffy presentation that’s nothing but cloud pictures and puns; so many of these devolve into that.)

Anyway, Ward Spangenberg is Director of Security Operations for Zynga Game Networks, which does Farmville and Mafia Wars.  He gets to handle things like death threats.  He is a founding member of the Cloud Security Alliance™.

Gratuitous Definition of Cloud Computing time!  If you don’t know it, then you don’t need to worry about it, and should not be reading this right now.

Cloud security is “a nightmare,” says a Cisco guy who wants to sell you network gear.  Why?  Well, it’s so complicated.  Security, performance, and availability are the top 3 rated challenges (read: fears) about the cloud model.

In general the main security fuss is because it’s something new.  Whenever there is anything new and uncharted all the risk averse types flip out.

With the lower level stuff (like IaaS), you can build in security, but with SaaS you have to “RFP” it in because you don’t have direct control.

Top threats to cloud computing:

  • Abuse/nefarious use
  • Insecure APIs
  • And more but the slide is gone.  We’ll go over it later, I hope.  Oh, here’s the list.

Multitenancy

The “process next door” may be acting badly, and with IPs being passed around and reused you can get blacklisted ones or get DoSsed from traffic headed to one.  No one likes to share.  You could get germs.  Anyway, they have to manage 13,000 IPs and whitelisting them is arduous.

Not Hosted Here Syndrome

You don’t have insight into locations and other “data center level” stuff.  Even if they have something good, like a SAS 70 certification, you still don’t have insight into who exactly is touching your stuff.  Azure is nice, but have you tried to get your logs?  You can’t see them.  Sad.

Management tools and development frameworks don’t have all the security features they should.  Toolsets are immature and stuff like forensics are nonexistent.  And PaaS environments that don’t upgrade quickly end up being a large attack surface for “known vulnerabilities.”  You can reprovision “quickly” but it’s not instantaneous.

DoS

Stuff like DDoS and botnets are classic abuse.  He says there’s “always something behind it” – people don’t just DoS you for no profit!  And only IaaS and PaaS should be concerned about it!  I think that’s quite an overstatement, especially for those of us who don’t run 13,000 servers – people do DoS for kicks and for someone with 100 or fewer servers, they can be effective at it.

Note “Clobbering the Cloud” from DefCon 17.

Insecure Coding

XSS, injection, CSRF, all the usual… Use the tools.  Validate input.  Review code.  And insecure crypto, because doing real crypto is hard.

Malicious insiders/Pissy outsiders

Devs, consultants, and the cloud company.  You need redundant checks.  Need transparent review.

Shared Technology Issues

With a virtualized level, you can always potentially attack through it.  Check out Cloudburst and Red Pill/Blue Pill.

Data Loss and Leakage

Can happen.  Do what you would normally do to control it.  Encrypt some stuff.

Account or Service Hijacking

Users aren’t getting brighter.  Phishing etc. works great.  There are companies like Damballa that work against this.  Malware is very smart in lots of cases, using metrics and self-improving.

Public deployment security impacts

Advantages – anonymizing effect, large security investments, pre-certification, multisite redundancy, fault tolerance.

Disadvantages – collateral damage, data & AAA security requirements, regulatory, multi-jurisdictional data stores, known vulnerabilities are global.

Going hybrid public/private helps some but increases complexity and adds data and credential exchange issues.

IaaS issues

Advantages: Control of encryption, minimized privileged user attacks, familiar AAA mechanisms, standardized and cross-vendor deployment, full control at VM level.

Disadvantages: Account hijacking, credential management, API security risks, lack of role based auth, full responsibility for ops, and dependence on the security of the virtualization layer.

PaaS Issues

Advantages: Less operational responsibility, multi-site business continuity, massive scale and resiliency, simpler compliance analysis, framework security features.

Disadvantages: Less operational control, vendor lockin, lack of security tools, increased likelihood of privileged user attack, cloud provider viability.

SaaS Issues

Advantages: Clearly defined access controls, vendor’s responsible for data center and app security, predictable scope of account compromise, integration with directory services, simplified user ACD.

Disadvantages: Inflexible reporting and features, lack of version control, inability to layer security controls, increased vulnerability to privileged user attacks, no control over legal discovery.

Q&A

If you are using something like Flash that goes in the client, how do you protect your IP?  You don’t.  Can’t.  It’ll get reverse engineered.  You can do some mitigations.  Try to detect it.  Sic lawyers on them.  Fingerprint code.

Yes, he plays all their games.

In the end, it’s about risk management.  You can encrypt all the data you put in the cloud, but what if they compromise your boxes you do the encryption on,  or what if they try to crack your encryption with a whole wad of cloud boxes?  Yep.  It brings the real nature of security into clearer relief – it’s a continuum of stopping attacks by goons and being vulnerable to attacks by Chinese government and organized crime funded ninja Illuminati.

Can you make a cloud PCI compliant?  Sure.  Especially if you know how to “work” your QSA, because in the end there’s a lot of judgment calls in the audit process.  Lots of encryption even on top of SSL; public key crypt it from browser up using JS or something, then recrypt with an internal only key.  Use your payment provider’s facilities for hashing or 30-day authorizations and re-auth.  Throw the card number away ASAP and you’re good!  Protecting your keys is the main problem in the all-public cloud.  (Could you ssh-agent it, inject it right into memory of the cloud boxes from on premise?)

Private cloud vs public cloud?  Well, with private you own the infrastructure.

This session was OK; I suspect most Velocity people expect something a little more technical.  There weren’t a lot of takeaways for an ops person – it was more of an ISSA or OWASP “technology decision-maker” focused presentation.  If he had just put in a couple hardcore techie things it would have helped.  As it was, it was a long list of security threats that all apply to existing systems too.  How’s the cloud different?  What are some specific mitigations?  Many of these were offered as “be careful!”  Towards the end, with the specific IaaS/PaaS/SaaS implications, it got better though.

8 Comments

Filed under Cloud, Conferences, DevOps, Security

DNS Rebinding

Recently I was able to give a talk at Austin OWASP about DNS Rebinding.  I will be uploading slides and example code on this blog soon, but first an overview of the topic.

The most important piece of this topic is the browsers’ same origin policy.  It prevents a site you visit from executing JavaScript against other origins – say, your local network.  Or at least that is the idea.

In computing, the same origin policy is an important security concept for a number of browser-side programming languages, such as JavaScript. The policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites.

This mechanism bears a particular significance for modern web applications that extensively depend on HTTP cookies to maintain authenticated user sessions, as servers act based on the HTTP cookie information to reveal sensitive information or take state-changing actions. A strict separation between content provided by unrelated sites must be maintained on client side to prevent the loss of data confidentiality or integrity.  Excerpt from Wikipedia

DNS Rebinding subverts the same origin policy so that the client believes it is talking to the same host when it really isn’t.  The browser accesses sortabadsite.com and at first gets legitimate responses from it.  Shortly after the first requests (the initial page load), all communication is dropped and the browser makes a call back to DNS.  At this point the IP address for the domain is swapped (maybe to 127.0.0.1) and the client is now running XHR (XML HTTP Requests) against localhost.  There are some interesting vectors this can take, which will be explored in future posts.
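Here’s a rough sketch of the DNS side of the trick in Python using the dnslib package – just a toy to show the idea (it is not the Ruby rebinder mentioned below, and all names and IPs are invented):

from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

REAL_IP = "203.0.113.10"   # attacker's web server, serves the initial page
REBIND_IP = "127.0.0.1"    # what the victim's browser resolves to afterwards

class RebindResolver(BaseResolver):
    def __init__(self):
        self.seen_clients = set()

    def resolve(self, request, handler):
        client = handler.client_address[0]
        reply = request.reply()
        # First query from a client gets the real server; later queries get
        # the rebind target.  A TTL of 1 pushes the browser back to DNS quickly.
        ip = REBIND_IP if client in self.seen_clients else REAL_IP
        self.seen_clients.add(client)
        reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(ip), ttl=1))
        return reply

if __name__ == "__main__":
    DNSServer(RebindResolver(), address="0.0.0.0", port=5353).start()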

Check back at this blog for a video demo, slide deck and future plans for new code.  Right now I am working on writing a DNS Rebinder application in Ruby that includes DNS, a firewall and a web server (or hooks into them).  If you are interested, let me know.  Gmail:  wickett

I would be remiss if I didn’t mention RSnake’s work on DNS Rebinding over at ha.ckers.org.  Check it out!

Leave a comment

Filed under Security