Category Archives: Security

LASCON 2010: Why Does Bad Software Happen To Good People?


First up at LASCON was the keynote by Matt Tesauro from Praetorian (and OWASP Foundation board member), speaking on “Why does bad software happen to good people?”  The problem in short is:

  • Software is everywhere, in everything
  • Software has problems
  • Why do we have these problems, and why can’t we create a secure software ecosystem?

The root causes boil down to:

  • People trust software a lot nowadays
  • We blame developers for the problems
  • Security of software is hidden
  • Companies just CYA in their EULAs
  • Lack of market reward for secure software
  • First-mover advantage means taking the time to do security often doesn’t happen
  • Regulation can’t keep up

So the trick is to address visibility of application security, and in a manner that can take root despite the market pressures against it.  We have to break the “black box” cycle of trust and find ways to prevent problems rather than focusing on coping with the aftermath.

He made the point that the physical engineering disciplines figured out safety testing long ago, like the “slump test” for concrete.  We don’t have the equivalent kind of standards and pervasive testability for software safety.  How do we make software testable, inspectable, and transparent?

Efforts underway:

  • They got Craig Youngkins, a big Python guy, to start PythonSecurity.org, which has been successful as a developer-focused grass-roots effort
  • The Rugged Software Manifesto at ruggedsoftware.org is similar to the Agile Manifesto and it advocates resilient (including secure) software at the ideological level.

I really liked this talk and a number of things resonated with me.  First of all, working for a test & measurement company that serves the “real engineering” disciplines, I have often noted that software engineering needs best practices taken from those disciplines.  If it happens for jumbo jets then it can happen for your shitty business application.  Don’t appeal to complexity as a reason software can’t be inspected.

Also, the Rugged Software Manifesto dovetails well with a lot of our internal discussion on reliability.  And having “rugged” combine reliability, security, and other related concepts and make them appealing to grass-roots developers is great.  “Quality initiatives” suck.  A “rugged manifesto” might just work.  It’s how agile kicked CMMI’s ass.

The points about how pervasive software is now are well taken, including the guy with the mechanical arms who died in a car crash – software fault?  We’ll never know.  As we get more and more information systems embedded with/in us we have the real possibility of a “Ghost In The Shell” kind of world, where software security isn’t just about your credit card going missing but about your very real physical safety.

He threw in some other interesting tidbits that I noted down to look up later, including the ToorCon “Real Men Carry Pink Pagers” presentation about hacking the Girl Tech IM-Me toy into a weaponized attack tool, and some open source animated movie called Sintel.

It was a great start to the conference, raised some good questions for thought and I got a lot out of it.


Filed under Conferences, Security

LASCON 2010 Conference Report

LASCON 2010 was awesome.  It’s an Austin app security conference put on by the Austin OWASP chapter. Josh Sokol and James Wickett did a great job of putting the thing together; for a first-time conference it was really well run and went very smoothly.  The place was just about full up – about 200 people.  I saw people I knew there from Austin Networking, the University of Texas, HomeAway, and more.  It was a great crowd, all sorts of really sharp people, both appsec pros and others.

And the swag was nice – I got a good-quality bugout bag and shirt, and the OWASP gear they were selling was high quality.  No crappy black geek t-shirts.

I wish I had more time to talk with the suppliers there; I did make a quick run in to talk to Fortify and Veracode.  Both now have SaaS offerings where you can buy in for upload scanning of your source (Fortify) or your binaries (Veracode) without having to spring for their big ass $100k software packages, which is great – if proper security is only the purview of billion dollar companies, then we’ll never be secure.

At the happy hour they brought in a mechanical bull!  We had some friends in from Cloudkick in SF and they asked me with some concern, “Do all conferences in Austin do this?”  Nope, first time I’ve seen it, but it was awesome!  After some of the free drinks, I was all about it.  They did something really clever with the drinks – two drink tickets free, but you could get more by going and talking to the vendors at their booths.  That’s a win-win!  No “fill out a grade school passport to get entered into a drawing” kind of crap.

They also had a lot of volunteers working hard to run the con, and they did a great job.

I took notes from the presentations I went to; they’re coming as separate posts.  I detected a couple of common threads I found very interesting.  The Rugged Software Manifesto was mentioned by speakers in multiple sessions, including by the Department of Homeland Security.  It’s clear that as software becomes more and more pervasive in our lives, health, safety, national security, and corporate livelihood are all coming to depend on solid, secure software – and frankly we’re not really on track towards that happening.

Also, the need for closer cooperation between developers, appsec people, and traditional netsec people was a clear call to action.  This makes me think about the ongoing call for developer/ops collaboration from DevOps – truly, it’s a symptom of a larger need to find a better way for everyone to work together to generate these lovely computerized monstrosities we work on.

So check out my notes from the sessions – believe me, if it was boring I wouldn’t bother to write it down.

I hear the conference turned a profit and it was a big success from my point of view, so here’s hoping it’s even bigger and better in 2011!  Two days!  It’s calling to you!


Filed under Conferences, Security

Cloud Security: a chicken in every pot and a DMZ for every service

There are a couple of military concepts that have bled into technology, and in particular into IT security, a DMZ being one of them. A Demilitarized Zone (DMZ) establishes control over what comes in and what goes out between two parties; in military terms, you establish a “line of control” by using a DMZ. In a human DMZ, the controllers of the DMZ make ingress (incoming) and egress (outgoing) decisions based on an approved list – no one is allowed to pass unless they are on the list and have proper identification and approval.

In the technology world, the same thing is done with traffic between computers. Decisions to allow or disallow the traffic can be made based on where the traffic came from (origination), where it is going (destination), or even dimensions of the traffic like size, length, or time of day. The basic idea is that all traffic is analyzed and is either allowed or disallowed based on determined rules, and just like in a military DMZ, there is a line of control where only approved traffic is allowed to ingress or egress. In many instances a DMZ will protect you from malicious activity like hackers and viruses, but it also protects you from configuration and developer errors and can ensure that your production systems are not talking to test or development tiers.

Let’s look at a basic tiered web architecture. A corporation that hosts its own website will more than likely have the following four components: incoming internet traffic, a web server, a database server, and an internal network. To create a DMZ (or multiple DMZs) to handle their web traffic, they would want to make sure that someone from the internet can only talk to the web server, that only the web server can talk to the database server, and that the internal network is inaccessible from the web server, the database server, and internet traffic.

Using firewalls, you would need to set up at least the three firewalls below to adequately control the DMZs:

1. A firewall between the external internet and the web server
2. A firewall in front of the internal network
3. A firewall between your web servers and database server

Of course these firewalls need to be set so that they allow (or disallow) only certain types of traffic. Only traffic that meets certain rules based on its origination, destination, and its dimensions will be allowed.
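
As a rough sketch (not a drop-in config), here is what those three control points might look like as iptables rules on the firewall hosts. The addresses are made up for illustration: 203.0.113.10 is the web server, 10.0.2.20 is the database server, and 10.0.3.0/24 is the internal network.

# Default-deny: anything not explicitly allowed below is dropped
iptables -P FORWARD DROP
# 1. Internet -> web server: HTTP only (plus the return traffic)
iptables -A FORWARD -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 203.0.113.10 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
# 2. Web server -> database server: MySQL only
iptables -A FORWARD -s 203.0.113.10 -d 10.0.2.20 -p tcp --dport 3306 -j ACCEPT
iptables -A FORWARD -s 10.0.2.20 -d 203.0.113.10 -p tcp --sport 3306 -m state --state ESTABLISHED -j ACCEPT
# 3. Internal network: nothing from the web tier, database tier, or internet gets in
iptables -A FORWARD -d 10.0.3.0/24 -j DROP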

Sounds great, right? The problem is that firewalls have become quite complicated and now sometimes aren’t even advertised as firewalls, but instead are billed as a network-device-that-does-anything-you-want-that-can-also-be-a-firewall-too. This is due in part to the hardware getting faster, IT budgets shrinking and scope creep. The “firewall” now handles VPN, traffic acceleration, IDS duties, deep packet inspection and making sure your employees aren’t watching YouTube videos when they should be working. All of those are great, but it causes firewalls to be expensive, fragile and difficult to configure.  And to have the firewall watching all ingress and egress points across your network, you usually have to buy several devices to scatter throughout your topology.

Another recurring problem is that most firewall analysis and implementation is done with an intent to secure the perimeter. That makes sense, but it often stops there and doesn’t protect the interior parts of your network. Most IT security firms that do consulting and penetration tests don’t generally come through the “front door” – by that I mean they don’t generally try to get in through the front-facing web servers, but instead go through other channels such as dial-in, wireless, partners, third-party services, social engineering, FTP servers, or that demo system that was set up 5 years ago and that no one has taken down – you know the one I am talking about. Once inside, if there are no well-defined DMZs, then it is pretty much game over, because at that point there are no additional security controls. A DMZ will not fix all your problems, but it will provide an extra layer of protection against malicious activity. And like I mentioned earlier, it can also help prevent configuration errors crossing from dev to test to prod.

In short, a DMZ is a really good idea and should be implemented for every system that you have. The optimal DMZ would be a firewall in front of each service that applies rules to determine what traffic is allowed in and out. This, however, is expensive to set up and very rarely gets implemented. But that was the old days – the good news is that the cloud has an answer.

I am most familiar with Amazon Web Services, so here is an example of how to do this with security groups from an AWS perspective. The following commands create a web server group and a database server group and allow the web server to talk to the database on port 3306 only.

ec2-add-group web-server-sec-group -d "this is the group for web servers" #This creates the web server group with a description
ec2-add-group db-server-sec-group -d "this is the group for db server" #This creates the db server group with a description
ec2-authorize web-server-sec-group -P tcp -p 80 -s 0.0.0.0/0 #This allows external internet users to talk to the web servers on port 80 only
ec2-authorize db-server-sec-group --source-group web-server-sec-group --source-group-user AWS_ACCOUNT_NUMBER -P tcp -p 3306 #This allows only traffic from the web server group on port 3306 (mysql) to ingress

In the above example the database server is in a DMZ: only traffic from the web servers is allowed to ingress into it. Additionally, the web server is in a DMZ in that it is protected from the internet on all ports except port 80. If you were to implement this for every role in your system, you would in effect be implementing a DMZ between each layer, which provides excellent security protection.
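
If you want to go a step further, here is a hedged sketch of launching instances into those groups and double-checking the rules with the same EC2 command-line tools (the AMI ID and keypair name below are placeholders, not real values):

ec2-run-instances ami-xxxxxxxx -k my-keypair -g web-server-sec-group # launch a web server into the web DMZ group
ec2-run-instances ami-xxxxxxxx -k my-keypair -g db-server-sec-group # launch the database server into the db DMZ group
ec2-describe-group web-server-sec-group db-server-sec-group # verify the ingress rules you just authorized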

The cloud seems to get a bad rap in terms of security. But I would counter that in some ways the cloud is more secure, since it lets you actually implement a DMZ for each service. Sure, security groups won’t do deep packet analysis or replace an intrusion detection system, but they will allow you to specifically define what ingresses and egresses are allowed on each instance.  We may not ever get a chicken in every pot, but with the cloud you can now put a DMZ on every service.


Filed under Cloud, Security

Why SSL Sucks

How do you make your Web site secure?  “SSL!” is the immediate response.  It’s a shame that there are so many pain points surrounding using SSL.

  • It slows things down.  On ni.com we bought a hardware SSL accelerator, but in the cloud we can’t do that.
  • Cert management.  Everyone uses loads of VIPs, so you end up needing to get wildcard certs.  But then you have to share them – we’re a big company, and don’t want to give the main wildcard cert out to multiple teams.  And you can’t get extended validation (EV) on wildcard certs.
  • UI issues.  We just tried turning SSL on for our internal wiki.  But anytime there’s any non-https included element – warnings pop up.  If there’s a form field to go to search, which isn’t https, warnings pop up.  Hell, sometimes you follow a link to a non-https page, and warnings pop up.
  • CAs.  We have an internal NI CA and try to have some NI-signed certs, but of course you have to figure out how to get them into every browser in your enterprise.
  • It’s just absurdly complicated.  Putting a cert on Apache is pretty well-worn territory, but recently I was trying to set up encrypted replication for MySQL and OpenDS and Jesus, doing anything other than the default self-signed cert is hell on earth.  “Oh, is that cert in the right wallet format?”  (A sketch of the typical conversion dance is below.)
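
For the curious, here’s roughly what that dance looks like, going from the PEM files a CA typically hands you to the Java keystore that Java-based servers like OpenDS want.  File names here are placeholders and your CA’s output may differ – a sketch, not gospel:

# Look at what the CA actually gave you
openssl x509 -in server.crt -noout -text
# Bundle the cert, private key, and CA chain into PKCS#12
openssl pkcs12 -export -in server.crt -inkey server.key -certfile ca-chain.crt -name myserver -out server.p12
# Convert the PKCS#12 bundle into a Java keystore
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -destkeystore keystore.jks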

The result is that SSL’s suckiness ends up driving behavior that degrades security.  People learn to just “accept the exception” any time they hit a site that complains about an invalid cert.  We have decided to remove SSL from most of our internal wiki and just leave it on the login page to avoid all the UI issues.  We couldn’t secure our replication, thanks to a combination of bugs (OpenDS secure replication works – until you restart any server, that is, then it’s broken permanently) and the hassle.

In general, there has been little to no usability work put into the cert/encryption area, and that is why so few people still use it.  PGPing your email is only for gearheads. Hell, you have to transform key formats just to use PuTTY to log into Amazon servers over SSH.  Stop the madness.

If the world really gave a crap about encryption, then your public key could be attached to your Facebook profile, for instance, and people’s mail readers could pull that in automatically to validate signatures. “Key exchange” isn’t harder than the other kinds of more-convenient information exchange that happen all the time on the Net.   And you could take a standard cert in whatever format your CA gives it to you, feed it into any software in an easy and standard way, and have it start encrypting its communication.

Me to world – make it happen!


Filed under Security

Application Security Conference in Austin, TX

I thought I would take this opportunity to invite the agile admin readers to LASCON.   LASCON (Lonestar Application Security Conference) is happening in Austin, TX on October 29th, 2010. The conference is sponsored by OWASP (the Open Web App Security Project) and is an entire day of quality content on web app security.  We’ll be there!

The speaker list is still in the works, but so far we have two presentations from this year’s Black Hat conference, several published authors, and the Director for Software Assurance in the National Cyber Security Division of the Department of Homeland Security, just to name a few – and that’s only the preliminary round of acceptances.

Do you remember a few years ago when there was a worm going around MySpace that infected user profile pages at the rate of over one million in 20 hours?  Yeah, the author of that worm is speaking at the conference.  How can you beat that?

I have been planning this conference for a few months and am pretty excited about it.  If you can make it to Austin on October 29th, we would love to meet you at LASCON.


Filed under Conferences, Security

DevOps and Security

I remember some complaints about DevOps from a couple folks (most notably Rational Survivability) saying “what about security!  And networking!  They’re excluded from DevOps!”  Well, I think that in the agile collaboration world, people are only excluded to the extent that they refuse to work with the agile paradigm.  Ops used to be “excluded” from agile, not because the devs hated them, but because the ops folks themselves didn’t willingly go collaborate with the devs and understand their process and work in that way.  As an ops person, it was hard to go through the process of letting go of my niche of expertise and my comfortable waterfall process, but once I got closer to the devs, understood what they did, and refactored my work to happen in an agile manner, I was as welcome as anyone to the collaborative party, and voila – DevOps.

Frankly, the security and network arenas are less incorporated into the agile team because they don’t understand how to be (or in many cases, don’t want to be).  I’ve done security work and work with a lot of InfoSec folks – we host the Austin OWASP chapter here at NI – and the average security person’s approach embodies most of what agile was created to remove from the development process.  As with any technical niche there’s a lot of elitism and authoritarianism that doesn’t mesh well with agile.

But this week, I saw a great presentation at the Austin OWASP chapter by Andre Gironda (aka “dre”) called Application Assessments Reloaded that covered a lot of ground, but part of it was the first coherent statement I’ve seen about what agile security would look like.  I especially like his term for the security person on the agile team – the “Security Buddy!”  Who can not like their security buddy?  They can hate the hell out of their “InfoSec Compliance Officer,” though.

Anyway, he has a bunch of controversial thoughts (he’s known for that) but the real breakthroughs are acknowledging the agile process, embedding a security “buddy” on the team, and leveraging existing unit test frameworks and QA behavior to perform security testing as well.  I think it’s a great presentation, go check it out!


Filed under DevOps, Security

Advanced Persistent Threat, what is it and what can you do about it – TRISC 2010

Talk by James Ryan.

An Advanced Persistent Threat is basically a massively coordinated, long-term hack attack, often carried out by nation states or other “very large” organizations, like a business looking for intellectual property and information.  They try to avoid getting caught because they have invested capital in the break-in and want to avoid having to break in again.  APTs are often characterized by slow access to data; they avoid doing things rapidly to avoid detection.

Targets: there is a question about who is being targeted, and the answer is anything that is critical infrastructure.  James Ryan says that we are losing the battle.  We are now (as a nation) fighting nation states with an organized-crime feel to them.  We haven’t really found religion on making security happen; we still treat security as a way to stop rogue 17-year-old hackers.

The most prevalent way to engage in APT is spear phishing with malware.  At that point the attacker is looking for credentials (key loggers, fake websites, …).  Then they do damage through data exfiltration, data tampering, and shutting down capabilities.  One other way to avoid getting caught is for the APT operator to get hired into the company.

APT actors use zero-day threats and sit on them, using them to stay on the network.

We should assume that the APT is always going to be on our network and that they are going to get in regularly.  We can reduce our risk from APT by doing the following:

  • Implement PKI on smartcards, enterprise wide (PKI is mathematically proven to be secure for the next 20 years)
  • Hardware based PKI, not software
  • Implement network authentication and enterprise single sign on eSSO with PKI
  • Remote access tied to PKI keycard/smartcard
  • Implement Security Event Information Management, correlate accounts, and trigger alerts on multiple simultaneous sessions.  Also tie this in with physical access control.
  • Implement PKI with privileged users as well (admins, power users)
  • Decrease access per person, and continually evaluate and adjust it
  • Tag email that comes from external sources (to help avoid spear phishing)
  • Train and test the organization with simulated spear phishing
  • Implement USB controls to block external USB devices
  • Background checks and procedures

James Ryan spent time talking about PKI and the necessity of using it.  I agree that we need better user management, and if you operate on the assumption that Advanced Persistent Threat operators try to go undetected for a long time and try to get valid user credentials, it matters even more.  The thing we need to do is control users and access – this is our biggest vector.

Takeaways:

  • APT is real and dangerous
  • Assume network is owned already
  • Communicate in terms of business continuity
  • PKI should be part of the plan
  • Use proven methods for executing your strategy


Filed under Conferences, Security

Understanding and Preventing Computer Espionage – TRISC 2010

Talk given at TRISC 2010 by Kai Axford from Accretive

Kai has delivered over 300 presentations and is a manager at an IT solutions company, with a background in security at Microsoft.

Kai starts by talking about noteworthy espionage events:

  • Anna Chapman.  The Russian spy that recently got arrested.  He pulls up her Facebook and LinkedIn pages; later in the talk he goes into the ad hoc wireless network she set up to transfer files to other intelligence agents.
  • Gary Min (aka Yonggang Min) was a researcher at DuPont.  He accessed over 22,000 abstracts and 16,706 documents from the DuPont library – 15x more documents than anyone else – and was printing the documents instead of transferring them to USB.  The risk to DuPont was $400,000,000.  He got a $30,000 fine.
  • Jerome Kerviel was a trader who had worked in compliance before he started abusing the company; he used that insider knowledge of the controls to get away with unauthorized trading.

Cyberespionage is a priority in China’s five-year plan: acquire intellectual property and technology for China.  R&D is at risk from tons of international exposure.  The Washington Post released a map of all the top-secret government agencies and private contractors in the US: http://projects.washingtonpost.com/top-secret-america/map/

The recent Windows zero-day being used against SCADA systems is another example.  SCADA is a very dangerous area for us; 2600 did a recent piece about it.

Let’s step back and take a look at the threat: insiders.  They are the ones that will take our data and information – the insider is the greater risk.  We are all worried about the 17-year-old kid in Finland, but it is really insiders.

There is a question here: if you gave your employees access to something, are they ‘breaking in’ when they access a bunch of data and take it home with them?

Types of users:

  • Elevated users who have been with the company for a long time
  • Janitors and cleaning crew
  • Insider affiliate (spouse, girlfriend)
  • Outside affiliate

Why do people do this?

  • Out-of-work intelligence operators worldwide
  • Risk and reward are very much out of skew; penalties are light
  • Motivators: MICE (Money, Ideology, Coercion, Ego)

For everyone that does espionage, there is a trigger – something that makes it happen.  Carnegie Mellon did research finding that everyone who was stealing data had someone else who knew it was happening.

Tools he mentioned (noted here because I don’t know where else to put them):

  • Maltego
  • USB U3 tools.  Switchblade downloads docs upon plug-in; Hacksaw sets up SMTP and stunnel to send all the docs outbound from the computer.
  • Steganography tools like S-Tools.  This is what Anna Chapman was doing, hiding info in images.
  • Cell phone bridge between laptop and network.
  • Tor

Mitigate using:

  • Defense in depth
  • Background checks, credit checks
  • Gates, guards, guns
  • Shredding and burning of docs
  • Clean desk policy
  • Locks, cameras
  • Network device blocking
  • Encryption devices
  • Application security
  • Enterprise rights management
  • Data classification


Filed under Conferences, Security

Mac OSX Forensics (and security) – TRISC 2010

This talk was presented by Michael Harvey from Avansic.

This has little to do with my day job, but I am a big fan of the Mac and have really enjoyed using it for the last several years both personally and professionally.  Security tools are also really great on the Apple platform, which I install using MacPorts: nmap, wireshark, fping, metasploit… Enough about me, on to the talk.

There is a lot of skepticism about whether forensics on Macs is really needed, but Macs are about 10% of the compute base, and a lot of higher-level officers in a company are using Macs because they can do whatever they want and aren’t subject to IT restrictions.

Collection of data is the most important part.  On a Mac, just pulling the hard drive can be difficult, so it might be useful to pre-download the PDFs on how to do this.  You want to use a FireWire write-blocker to copy the drive.  Live CDs (Helix and the Raptor LiveCD) let you copy the data while write-blocking.  Michael really likes Raptor because of its support for legacy Macs and Intel-based Macs; in a follow-up conversation he emphasized that Raptor is great for people who don’t do forensics all the time.

Forensics cares about Modified, Accessed, and Created time stamps.  Macs add a time stamp called “Birth Time” – the real created date; look at the file properties.  You can use Sleuth Kit (an open source forensics tool) to assemble a timeline from the M-A-C and Birth times.
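
As a quick sketch of getting at those timestamps – assuming a Mac with the BSD stat command and a dd image of the drive; the file and image names are placeholders:

# BSD stat on OS X can print the birth time alongside the usual M-A-C times
stat -f "Birth: %SB  Modified: %Sm  Accessed: %Sa  Changed: %Sc" somefile
# Sleuth Kit: fls walks the image and emits a "body" file, mactime turns it into a sorted timeline
fls -r -m / image.dd > bodyfile.txt
mactime -b bodyfile.txt > timeline.txt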

Macs use .plist (“property list”) files in lieu of the Windows registry that most people are familiar with.  These can be ASCII, XML, or binary; ASCII is pretty rare these days.  Mac timestamps also often don’t use the standard Unix epoch and instead count from January 1, 2001.  Michael is releasing information on the plist format – right now there is not a lot of documentation on it.

Two ways to analyze plists: plutil.pl and Plist Edit Pro.
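
On the Mac itself you can also get by with the built-in tools; a minimal sketch:

# Convert a binary plist to readable XML, writing to a copy so the original stays untouched
plutil -convert xml1 -o safari-prefs.xml ~/Library/Preferences/com.apple.Safari.plist
# Or just dump a preference domain to the terminal
defaults read com.apple.Safari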

DMG files are disk images, similar to ISO or ZIP files, and pretty much everything you install on a Mac comes as one.  We can keep an eye out for previously used DMG files to know what has been installed or created…

SQLite databases are lightweight SQL databases heavily used by Firefox, the iPhone, and Mac apps; they are really common on Macs.
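
For example, Firefox history lives in places.sqlite.  A rough sketch of pulling recent visits out of a copy of it (the table and column names are my assumption from the places.sqlite schema – and always work on a copy, never the live file):

sqlite3 places.sqlite "SELECT datetime(last_visit_date/1000000,'unixepoch') AS visited, url FROM moz_places ORDER BY last_visit_date DESC LIMIT 20;"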

Email forensics will usually come in three flavors: MS Entourage (Outlook), Mail.app, and Mozilla Thunderbird.  A good tool for this is Emailchemy, which is forensically sound and takes in all the formats.

Useful plist files to look at for more info:

  • Installed Applications: ~/Library/Preferences/com.apple.finder.plist
  • CD/DVD Burning: ~/Library/Preferences/com.apple.DiskUtility.plist
  • Recently Accessed Documents, Servers, and Applications: ~/Library/Preferences/com.recentitems.plist
  • Safari History: ~/Library/Preferences/com.apple.Safari.plist
  • Safari Cache: ~/Library/Preferences/com.apple.Safari/cache.db
  • Firefox: didn’t get this one

Forensic Software that you can use

  • AccessData FTK3
  • Mac Forensics Lab
  • Sleuth Kit (great timeline)
  • Others exist

In conclusion, Mac OSX investigations are not that scary.  Be prepared with hard drive removal guides and know how to extract the data off the drives.  Choose your forensic imaging tool based on hardware speed (and FireWire support), write-blocking capabilities, and the ability to use multiple cores.  You need to know that your tools handle HFS+, Birth times, plist files, DMG files, and SQLite databases.

An audience member asked about hard drive copying tools; Michael recommends Tableau (sp?).

Here are some resources:

  • Apple Examiner – appleexaminer.com
  • Mac Forensics Lab Tips – macforensicslab.com
  • Access Data – accessdata.com
  • Emailchemy – weirdkid.com/products/emailchemy

File Juicer extracts info from the databases browsers use for their cache.  Favicons are a good browser history tool…  You can point File Juicer at a SQLite database or at .dmg files.

Also, talking with Michael afterward ended with two book recommendations: SysInternals for Mac OSX and Mac OSX Forensics (unsure of title but it includes a DVD).
All in all, a really interesting talk and I look forward to seeing what else Mike produces in this arena.


Filed under Conferences, Security

Pen Testing, DNSSEC, Enterprise Security Assessments – TRISC Day 1 Summary

Yesterday’s TRISC event had some great talks. The morning talks were good and were higher-level keynotes that, to be honest, I didn’t take good notes on. The talk on legal implications for the IT industry was really interesting. I was able to talk with Dr. Gavin Manes (a fellow Oklahoman) about the legal implications of cloud computing and shared compute resources. In the old days a lawyer was able to get physical access to the box and use it as evidence, but it sounds like with the growth of SaaS the courts no longer expect to have physical access to the box. The law seems to be 5 to 10 years behind on this, and it could backfire on us.

The three classes I attended in the afternoon are added below. Some of the notes are only partially complete, so take them for what they are: notes. Interspersed with the notes are my comments, but unlike my astute colleague Ernest, I didn’t delineate my comments with italics. So, pre-apologies to any speakers if they feel like I am putting words in their mouths that they didn’t say. If there are any incorrect statements, please feel free to leave a comment and I will get it fixed up; hopefully I captured the sessions in spirit.

Breaking down the Enterprise Security Assessment by Michael Farnum

Michael Farnum did a great job with this session. If you want to follow him on twitter, his id is @m1a1vet and he blogs over at infosecplace.com/blog.

External assessments are crucial for compliance and, really, for actual security – we can’t be all about compliance only. One of the main premises of the talk is to avoid assumptions. Ways to do that in the following categories are below.

In Information Gathering check for nodes even if you think they don’t exist:

  • Web Servers. Everything has a web server nowadays. Router, check. Switch, check. Fridge, check.
  • Web Applications and URLs
  • Web apps with static content (could be vulnerable even if you have a dummy HTTP server). You might have modules installed that you didn’t even know about (mod_php)
  • Other infrastructure nodes. Sometimes we assume what we have in the infrastructure… Don’t do that

In addition to regular testing, we need to remember wireless and how it is configured. Most companies have an open wireless network that goes just to the internet. The question that needs to be addressed in an assessment: is it really segmented? For this reason we need to make sure that wireless has an IDS tied to it.

The basic steps of any assessment are identification and penetration. We don’t always need to penetrate if we know what we are doing, but we do need to make sure that we identify properly.  There’s no use in penetrating if you can show that the wireless node allows WEP or your shopping cart allows non-HTTPS.

Culture issues are also something we need to watch out for. Discussing security assessments with Windows people generally ends with agreeable dialogs, and with Linux people disagreeable ones, especially when talking with contractors and vendors.

Doing Network Activity Analysis

  • Threats / malicious traffic – actually know what the traffic on your network is
  • Traffic / policy compliance – don’t assume that the tools keep you safe

Applications

  • Big security assumptions: apps not internally secured, too much reliance on firewalls. CSRF, XSS and DNS rebinding all work without firewalls stopping them.
  • Browsers need to be in scope

What is up with security guys trying to scare people with social engineering? Michael asks why bother doing social engineering tests if you don’t have a security training and awareness program – he guarantees you will fail. Spend the money elsewhere.

The gap analysis of physical security includes video and lighting. The operations team will probably hate you for it, though – you’re getting into “their” area… Be careful when testing physical security (guards, cameras, fences) without involving the physical ops team.

Reviews and interviews need to happen with developers, the architecture team, security coverage, and compliance. At the end of an assessment you need to do remediation, transfer knowledge via workshops, presentations, and documentation, and schedule verification testing to make sure things actually get fixed. While it makes more money to do a point-in-time evaluation without follow-up (because you can do the same review next year and say, “yep, it’s still broken and you didn’t fix it”), it is better to get your customers actually secure and verify that they take the next steps.

Actual Security versus Compliance Security.

DNSSEC: What you don’t know will hurt you by Dean Bushmiller

This talk was very interesting to me because of my interest in DNS and DNS rebinding.  Dean passed out notes on this, so my notes are a little light; I will see if I can post his slides here. Here are my notes for further research.

Read the following RFCs: DNS 1034, 1035 and DNSSEC 4033, 4034, 4035.

One of the big takeaways is that DNSSEC is meant to solve the integrity issues with DNS and does not solve confidentiality at all. It just verifies integrity.

The root and a number of top-level domains are signed now, so when reading older DNSSEC material online, ignore the “islands of trust” talk. A good site to check out is root-DNSSEC.org.

DNSSEC works by implementing PKI. One of the problems people will face is key expiration – screw that up and your site will be unavailable. The default is a 30-day expiration period.

DNSSEC has nonexistent-domain protection: the names in a zone are chained together in a loop, so there is no way for a bad guy to inject a subdomain into the middle. The flip side is that this exposes all your subdomains – all of them can be enumerated, which could make it easier for a malicious user to look at them. They can already do this now through other means, and the chaining does prevent injection of bad subdomains into your domain.
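
If you want to poke at this yourself, dig will show you both the signatures (with their expiration dates) and the denial-of-existence records; the domain names below are just examples:

# +dnssec asks for the RRSIG records; +multi formats them readably, including
# the signature expiration/inception timestamps mentioned above
dig +dnssec +multi www.example.org A
# A query for a name that doesn't exist comes back with NSEC records that name
# the neighboring real entries in the zone -- the enumeration issue above
dig +dnssec doesnotexist.example.org A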

An Introduction to Real Pen Testing: What you don’t learn at DefCon by Chip Meadows

What is a Penetration Test?

  • Authorized test of the target (web app, network, system)
  • Testing is the attempt to exploit vulnerabilities
  • Not a scan, but a test
  • Scanners like Saint and Nessus are part of a test but they are not the test, they are just a scan

Why Pen Test?

  • Gain knowledge of the true security posture of the firm
  • Satisfy regulatory requirements
  • Compare past and present

PCI is not the silver bullet. Doesn’t really keep us secure.

Chip had a lot of other points that mimicked Michael Farnum’s earlier talk, so they have been left out here, but he did mention the following tools and link that are also great for security folks to check out.

Testing Tools

  • fping -ag <ip range> – feed the list of live hosts into a scanner (see the sketch below)
  • Hydra
  • Msswlbruteforcer
  • ike-scan
  • nikto
  • burpsuite
  • DirBuster
  • metasploit
  • firewalk

http://vulnerabilityassessment.co.uk/Penetration%20Test.html
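
As a small sketch of the fping-into-a-scanner idea from the tools list above (the address range is a placeholder, and obviously only run this against networks you’re authorized to test):

# -a: print only alive hosts, -g: generate the target list from a CIDR range
fping -a -g 192.168.1.0/24 2>/dev/null > live-hosts.txt
# Hand the live hosts to nmap for service/version detection, saving output in all formats
nmap -sV -iL live-hosts.txt -oA assessment-scan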

Wrap up

The talk I was most interested in was the DNSSEC talk, but the most useful talks for most people are the security assessment and pen testing talks.  I have been thinking about writing a talk on Agile Security, about how to integrate security with agile development methods.  Look for that in the near future.

One other note: I am testing my new setup made just for conferences. Well, I can use it for other things too, but I always worry about ‘open’ networks at hotels, especially at security conferences. What I have done is set up dd-wrt on my home router with OpenVPN running on it as well.  From my laptop (a MacBook Pro) I run Tunnelblick and get a VPN connection back home.  This is cool because if someone is watching the traffic they will just see an encrypted stream from my laptop.  That way, I don’t have to worry about whether the conference network has WPA or is just a plain open connection – all my traffic is encrypted at that point.  OpenVPN was a little difficult to get set up and I found a lot of conflicting documentation; if you’re interested, let me know and maybe I can piece together some instructions for the blog.
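
Until I write that up properly, here’s a minimal sketch of the Tunnelblick client config I’m talking about, assuming the dd-wrt router is running an OpenVPN server on UDP 1194 and the CA/client certs have already been generated (the hostname and file names are placeholders):

client
dev tun
proto udp
remote my-home-router.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert laptop.crt
key laptop.key
# Push everything through the tunnel so nothing crosses the hotel network in the clear
redirect-gateway def1
verb 3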


Filed under Conferences, Security