Author Archives: wickett


About wickett

James is a leader in the DevOps and InfoSec communities--most of his research and work is at the intersection of these two communities. He is a supporter of the Rugged Software movement and he coined the term Rugged DevOps. Seeing a gap in software testing, James founded an open source project, Gauntlt, to serve as a Rugged Testing Framework, and he is the author of the Hands-on Gauntlt book. He got his start in technology when he founded a web startup as a student at the University of Oklahoma, and since then he has worked in environments ranging from large, web-scale enterprises to small, rapid-growth startups. He is a dynamic speaker on topics in DevOps, InfoSec, cloud security, security testing and Rugged DevOps. James is the creator and founder of the Lonestar Application Security Conference, the largest annual security conference in Austin, TX. He is a chapter leader for the OWASP Austin chapter, holds the CISSP, GWAPT, GCFW, GSEC and CCSK security certifications, and serves on the GIAC Advisory Board. In his spare time he is raising kids and trying to learn how to bake bread.

The Rise of the Security Industry

In late 2007 Bruce Schneier, the internationally renowned security technologist and author, wrote an article for IEEE Security & Privacy. The ominously titled piece, “The Death of the Security Industry,” predicted the future of the security industry, or rather the lack of one. In it he predicted that we would come to treat security as merely a utility, the way we use water and power today. The future he described is one where “large IT departments don’t really want to deal with network security. They want to fly airplanes, produce pharmaceuticals, manage financial accounts, or just focus on their core business.”

Schneier closes with, “[a]s IT fades into the background and becomes just another utility, users will simply expect it to work. The details of how it works won’t matter.”

Looking back three years later with the luxury of hindsight, it is easy to see why he thought the security industry would become a utility. In part, it has come true. Utility billing is all the rage for infrastructure (hello, cloud computing) and more and more people view the network as a commodity. Bandwidth has increased in performance and decreased in cost. People continue to outsource pieces of their infrastructure and non-critical IT services to vendors or to offshore employees.

But there are three reasons why I disagree with The Death of the Security Industry, and I believe we are actually going to see a renaissance of the security industry over the next decade.

1. Data is valuable. We can’t think of IT as merely the computers and network resources we use. We need to put the ‘I’ back in IT and remember why we play this game in the first place: information. Protecting that information (data) will be crucial over the long haul. Organizations do not care about new firewalls or identity management as a primary goal, but they do care about their data. Data is king. The organizations that succeed will be the ones that master navigating a new marketplace that values sharing while keeping their competitive edge by safeguarding and protecting their critical data.

2. Security is a timeless profession. When God gave Adam and Eve the boot from the Garden of Eden, what did he do next? He posted a guard to keep them out of the Garden for good. Security has been practiced as long as people have been people. As long as you have something worth protecting (see ‘data is valuable’ in point 1) you will need resources to protect it. Our valuable data is being transferred, accessed and modified on computing devices and will need to be protected. If people can’t trust that their data is safe, they will not be our customers. The CIA security triad (Confidentiality, Integrity, and Availability) needs to remain intact for consumers to trust organizations with their data, and if that data has any value to the organization, it will need to be protected.

3. Stuxnet. This could be called the dawn of a new age of hacking. Gone are the days of teenagers running port scans from their garages. Be ready to see attackers using sophisticated techniques that simultaneously attack multiple vectors to gain access to their targets. I am not going to spread FUD (Fear, Uncertainty and Doubt), but I believe that Stuxnet is just the beginning.

In addition to how Stuxnet was executed, it is just as interesting to see what was attacked. This next decade will bring a change in the types of targets attacked. In the 80’s it was all about hacking phones and other physical targets, the 90’s were the days of port scanning and Microsoft Windows hacking, and the last decade focused primarily on web and application data. With Stuxnet, we are seeing hacking return to its roots: physical targets such as the SCADA systems that control a building’s temperature. The magazine 2600 has been publishing a series on SCADA hacking over the last 18 months. What makes it even more interesting is that almost every device you buy these days has a web interface on it, so never fear, the last 10 years spent hacking websites will come in real handy when looking at hacking control systems.

In closing, I think we are a long way off from seeing the death of the security industry. The more valuable our data becomes, the more we will need to secure it. Data is on the rise and with it comes the need for security. Additionally, as more and more of our world is controlled by computers, the targets become more and more interesting. Be ready for the rise of the security industry.

Let me know what you think on twitter: @wickett


Filed under Security

OPSEC + Agile = Security that works

Recently I have been reading up on OPSEC (operations security).  OPSEC, among other things, is a process for securing critical information and reducing risk.  The five steps in the OPSEC process are:

  1. Identify Critical Information
  2. Analyze the Threat
  3. Analyze the Vulnerabilities
  4. Assess the Risk
  5. Apply the Countermeasures

It really isn’t rocket science; it is the sheer simplicity of the process that is alluring.  It has traditionally been applied in the military and has been used as a meta-discipline in security.  It assumes that other parties are watching--sort of like the aircraft watchers who park near a military base to see what is flying in and out, or the Domino’s near the Pentagon that reportedly sees a spike in deliveries before a big military strike.  Observers are gathering critical information on your organization in ways you weren’t able to predict.  This is where OPSEC comes in.

Since there is no way to predict what data will be leaking from your organization in the future, and it is equally impossible to enumerate all possible future risk scenarios, it becomes necessary to perform this assessment regularly.  Instead of an annual review process with huge overhead and little impact (I am looking at you, Sarbanes-Oxley compliance auditors), you can create a process that continually identifies risks in an ever-changing organization while lessening them.  This is why you have a security team, right?  Lessening risk to the organization is the main reason to have a security team.  Achieving PCI or HIPAA compliance is not.

Using OPSEC as a security process offers huge benefits when aligned with Agile software development principles.  The following weekly assessment cycle is promoted by SANS in their security training courses.  See if you can spot the Agile in it.

The weekly OPSEC assessment cycle:

  1. Identify Critical Information
  2. Assess threats and threat sources including: employees, contractors, competitors, prospects…
  3. Assess vulnerabilities of critical information to the threat
  4. Conduct risk vs. benefit analysis
  5. Implement appropriate countermeasures
  6. Do it again next week.

A weekly OPSEC process is a different paradigm from the annual compliance ritual.  The key of security is just that: lessen risk to the organization.  Iterating through the OPSEC assessment cycle weekly means that you are taking frequent and concrete steps to facilitate that end.


Filed under Security

Cloud Security: a chicken in every pot and a DMZ for every service

There are a couple of military concepts that have bled into technology and in particular into IT security, a DMZ being one of them. A Demilitarized Zone (DMZ) is a concept where there is established control over what comes in and what goes out between two parties; in military terms, you establish a “line of control” by using a DMZ. In a human-based DMZ, the controllers of the DMZ make ingress (incoming) and egress (outgoing) decisions based on an approved list--no one is allowed to pass unless they are on the list and have proper identification and approval.

In the technology world, the same thing is done with traffic between computers. Decisions to allow or disallow the traffic can be made based on where the traffic came from (origination), where it is going (destination), or dimensions of the traffic like size, length, or even time of day. The basic idea is that all traffic is analyzed and is either allowed or disallowed based on predetermined rules, and just like in a military DMZ, there is a line of control where only approved traffic is allowed to ingress or egress. In many instances a DMZ will protect you from malicious activity like hackers and viruses, but it also protects you from configuration and developer errors and can guarantee that your production systems are not talking to test or development tiers.

Let’s look at a basic tiered web architecture. A corporation that hosts its own website will more than likely have the following four components: incoming internet traffic, a web server, a database server and an internal network. To create a DMZ (or multiple DMZ instances) to handle web traffic, they would want to make sure that someone from the internet can only talk to the web server, verify that only the web server can talk to the database server, and make sure that the internal network is inaccessible to the web server, the database server and internet traffic.

Using firewalls, you would need to set up at least the three firewalls below to adequately control the DMZ instances:

1. A firewall between the external internet and the web server
2. A firewall in front of the internal network
3. A firewall between your web servers and database server

Of course these firewalls need to be set up so that they allow (or disallow) only certain types of traffic. Only traffic that meets certain rules based on its origination, destination, and dimensions will be allowed.
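
To make that concrete, here is a rough sketch of what firewall #1 and firewall #3 might look like as iptables rules on Linux boxes sitting at those choke points (the addresses are made-up examples; adjust interfaces and addresses for your own topology):

# Firewall 1: between the internet and the web server (10.0.1.10 is an example address)
iptables -P FORWARD DROP                                             # default-deny anything passing through this box
iptables -A FORWARD -d 10.0.1.10 -p tcp --dport 80 -j ACCEPT         # the internet may reach the web server on port 80 only
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow return traffic for established connections
# Firewall 3: between the web server and the database server (10.0.2.20 is an example address)
iptables -A FORWARD -s 10.0.1.10 -d 10.0.2.20 -p tcp --dport 3306 -j ACCEPT  # only the web server may reach MySQL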

Sounds great, right? The problem is that firewalls have become quite complicated and now sometimes aren’t even advertised as firewalls, but instead are billed as a network-device-that-does-anything-you-want-that-can-also-be-a-firewall-too. This is due in part to the hardware getting faster, IT budgets shrinking and scope creep. The “firewall” now handles VPN, traffic acceleration, IDS duties, deep packet inspection and making sure your employees aren’t watching YouTube videos when they should be working. All of that is great, but it makes firewalls expensive, fragile and difficult to configure.  And to have the firewall watching all ingress and egress points across your network, you usually have to buy several devices and scatter them throughout your topology.

Another recurring problem is that most firewall analysis and implementation is done with an intent to secure the perimeter. That makes sense, but it often stops there and doesn’t protect the interior parts of your network. IT security firms that do consulting and penetration tests don’t generally come through the “front door”--by that I mean they don’t generally try to get in through the front-facing web servers, but instead go through other channels such as dial-in, wireless, partners, third-party services, social engineering, FTP servers or that demo system that was set up five years ago and that no one has taken down--you know the one I am talking about. Once inside, if there are no well-defined DMZs, then it is pretty much game over because at that point there are no additional security controls. A DMZ will not fix all your problems, but it will provide an extra layer of protection against malicious activity. And like I mentioned earlier, it can also help prevent configuration errors from crossing from dev to test to prod.

In short, a DMZ is a really good idea and should be implemented for every system that you have. The optimal DMZ would be a firewall in front of each service that applies rules to determine what traffic is allowed in and out. This, however, is expensive to set up and very rarely gets implemented. That was the old days, though; this is now, and the good news is that the cloud has an answer.

I am most familiar with Amazon Web Services, so here is an example of how to do this with security groups from an AWS perspective. The following code creates a web server group and a database server group and allows the web server to talk to the database on port 3306 only.

# Create the web server group with a description
ec2-add-group web-server-sec-group -d "this is the group for web servers"
# Create the db server group with a description
ec2-add-group db-server-sec-group -d "this is the group for db server"
# Allow external internet users to talk to the web servers on port 80 only
ec2-authorize web-server-sec-group -P tcp -p 80 -s 0.0.0.0/0
# Allow only traffic from the web server group on port 3306 (MySQL) to ingress the db group
ec2-authorize db-server-sec-group --source-group web-server-sec-group --source-group-user AWS_ACCOUNT_NUMBER -P tcp -p 3306

Under the above example, the database server is in a DMZ and only traffic from the web servers is allowed to ingress into it. Additionally, the web server is in a DMZ in that it is protected from the internet on all ports except port 80. If you were to implement this for every role in your system, you would in effect be implementing a DMZ between each layer, which would provide excellent security protection.
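
If you want to double-check what each group actually allows, or tear a rule back down after a fat-fingered entry, the same toolset covers that too (a quick sketch; the port 8080 rule is just an example of something you might revoke):

ec2-describe-group web-server-sec-group   # list the rules currently attached to the web tier group
ec2-describe-group db-server-sec-group    # and the db tier group
ec2-revoke web-server-sec-group -P tcp -p 8080 -s 0.0.0.0/0   # ec2-revoke takes the same arguments as ec2-authorize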

The cloud seems to get a bad rap in terms of security, but I would counter that in some ways the cloud is more secure since it lets you actually implement a DMZ for each service. Sure, security groups won’t do deep packet analysis or replace an intrusion detection system, but they will allow you to specifically define what ingresses and egresses are allowed on each instance.  We may never get a chicken in every pot, but with the cloud you can now put a DMZ on every service.


Filed under Cloud, Security

Application Security Conference in Austin, TX

I thought I would take this opportunity to invite the agile admin readers to LASCON.   LASCON (the Lonestar Application Security Conference) is happening in Austin, TX on October 29th, 2010. The conference is sponsored by OWASP (the Open Web Application Security Project) and is an entire day of quality content on web app security.  We’ll be there!

The speaker list is still in the works, but so far we have two presentations from this year’s Black Hat conference, several published authors, and the Director for Software Assurance in the National Cyber Security Division of the Department of Homeland Security, just to name a few--and that’s only the preliminary round of acceptances.

Do you remember a few years ago when there was a worm going around MySpace that infected user profile pages at the rate of over one million in 20 hours?  Yeah, the author of that worm is speaking at the conference.  How can you beat that?

I have been planning this conference for a few months and am pretty excited about it.  If you can make it to Austin on October 29th, we would love to meet you at LASCON.


Filed under Conferences, Security

Advanced Persistent Threat, what is it and what can you do about it – TRISC 2010

Talk by James Ryan.

An Advanced Persistent Threat is basically a massively coordinated, long-term hack attack, often carried out by nation states or other “very large” organizations, like a business looking for intellectual property and information.  They try to avoid getting caught because they have invested capital in the break-in and want to avoid having to break in again.  APTs are often characterized by slow access to data; they avoid doing things rapidly to avoid detection.

Targets.  There is a question about targets and who is being targeted.  Anything that is critical infrastructure is targeted.  James Ryan says that we are losing the battle.  We are now fighting (as a nation) nation states with an organized-crime feel to them.  We haven’t really found religion about making security happen.  We still treat security as a way to stop rogue 17-year-old hackers.

The most prevalent way to engage in APT is through spear phishing with malware.  The attacker at this point is looking for credentials (key loggers, fake websites, …).  Then comes the damage: data exfiltration, data tampering, shutting down capabilities.  One other way to avoid getting caught is to have the APT operative get hired into the company.

APT actors use zero-day threats and sit on them.  They use them to stay on the network.

We should assume that the APT is always going to be on our network and that they are going to get in regularly.  We can reduce our risk from APT by doing the following.

  • Implement PKI on smartcards, enterprise wide (PKI is mathematically proven to be secure for the next 20 years)
  • Hardware-based PKI, not software
  • Implement network authentication and enterprise single sign-on (eSSO) with PKI
  • Tie remote access to the PKI keycard/smartcard
  • Implement Security Event Information Management, correlate accounts, and trigger on multiple simultaneous sessions.  Also tie this in with physical access control.
  • Implement PKI for privileged users as well (admins, power users)
  • Decrease access per person, and evaluate and change it regularly
  • Tag email that comes from external sources (to help avoid spear phishing)
  • Train and test using spear phishing inside the organization
  • Implement USB controls to block external USB devices
  • Background checks and procedures

James Ryan spent time talking about PKI and the necessity of using it.  I agree that we need better user management, and if you operate on the assumption that Advanced Persistent Threat operators try to go undetected for a long time and try to obtain valid user credentials, it is even more necessary.  The thing we need to do is control users and access.  This is our biggest vector.

Takeaways:

  • APT is real and dangerous
  • Assume network is owned already
  • Communicate in terms of business continuity
  • PKI should be part of the plan
  • Use proven methods for executing your strategy


Filed under Conferences, Security

Understanding and Preventing Computer Espionage – TRISC 2010

Talk given at TRISC 2010 by Kai Axford from Accretive

Kai has delivered over 300 presentations and he is a manager at an IT solutions company.  Background at Microsoft in security.

Kai starts with talking about noteworthy espionage events:

  • Anna Chapman.  The Russian spy who recently got arrested.  Kai pulls up her Facebook and LinkedIn pages.  Later in the talk he goes into the ad hoc wireless network she set up to transfer files to other intelligence agents.
  • Gary Min (aka Yonggang Min) was a researcher at DuPont.  He accessed over 22,000 abstracts and 16,706 documents from the library at DuPont, downloading 15x more documents than anyone else.  Gary was printing the documents instead of transferring them to USB.  The risk to DuPont was $400,000,000.  He got a $30,000 fine.
  • Jerome Kerviel was a trader who worked in compliance before he started abusing the company.  He was trading stocks and using insider knowledge to abuse his trades.

Cyberespionage is a priority in China’s five-year plan: acquire intellectual property and technology for China.  R&D is at risk from tons of international exposure.  The Washington Post released a map of all the top secret government agencies and private contractors in the US: http://projects.washingtonpost.com/top-secret-america/map/

Microsoft and the 0-day targeting SCADA are another example.  SCADA is a very dangerous area for us.  2600 did a recent piece about SCADA.

Let’s step back and take a look at the threat: insiders.  They are the ones who will take our data and information.  The insider is the greater risk.  We are all worried about the 17-year-old kid in Finland, but it is really the insiders.

There is a question here: if you gave your employees access to something, are they ‘breaking in’ when they access a bunch of data and take it home with them?

Types of users:

  • Elevated users who have been with the company for a long time
  • Janitors and cleaning crew
  • Insider affiliate (spouse, girlfriend)
  • Outside affiliate

Why do people do this?

  • Out-of-work intelligence operators worldwide
  • Risk and reward is very much out of skew.  Penalties are light.
  • Motivators: MICE (Money, Ideology, Coercion, Ego)

For everyone who commits espionage, there is a trigger--something that makes it happen.  Carnegie Mellon did some research showing that everyone who was stealing data had someone else who knew it was happening.

Tools he mentioned and I don’t know where else to mention them:

  • Maltego
  • USB U3 tools.  Switchblade downloads documents when the drive is plugged in.  Hacksaw sets up SMTP and stunnel to send all of the documents outbound from the computer.
  • Steganography tools such as S-Tools.  This is what Anna Chapman was doing by hiding info in images.
  • A cell phone bridge between the laptop and the network.
  • Tor

Mitigate using:

  • Defense in depth
  • Background checks and credit checks
  • Gates, guards, guns
  • Shredding and burning of docs
  • Clean desk policy
  • Locks, cameras
  • Network device blocking
  • Encryption devices
  • Application security
  • Enterprise Rights Management
  • Data classification


Filed under Conferences, Security

Mac OSX Forensics (and security) – TRISC 2010

This talk was presented by Michael Harvey from Avansic.

This has little to do with my day job, but I am a big fan of the Mac and have really enjoyed using it for the last several years, both personally and professionally.  Security tools are also really great on the Apple platform; I install them using MacPorts: nmap, wireshark, fping, metasploit… Enough about me, on to the talk.

There is a lot of objection to doing forensics on Macs even though it is really needed; in reality Macs are about 10% of the compute base, and a lot of higher-level officers in a company use Macs because they can do whatever they want and aren’t subject to IT restrictions.

Collection of data is the most important part.  On a Mac, just pulling the hard drive can be difficult; it might be useful to pre-download the PDFs on how to do this.  You want to use a FireWire write-blocker to copy the drive.  Live CDs (Helix and the Raptor LiveCD) let you copy the data while write-blocking.  Michael really likes Raptor because it supports both legacy Macs and Intel-based Macs.  In a follow-up conversation he emphasized how great Raptor is for people who don’t do forensics all the time.

Forensics cares about Modified, Accessed, and Created (MAC) time stamps.  Macs add a time stamp called “Birth Time.”  This is the real created date--look at the file properties.  You can use the SleuthKit (an open source forensics toolkit) to assemble a timeline with M-A-C and Birth Time.
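
A minimal sketch of that timeline workflow with the SleuthKit command line tools, assuming you already have a raw image of the drive (the image filename here is just an example):

fls -r -m / mac-disk-image.dd > bodyfile.txt   # walk the filesystem in the image and emit one body-file record per file
mactime -b bodyfile.txt -d > timeline.csv      # turn the body file into a human-readable, comma-delimited timeline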

Macs use .plist (“Property List”) files in lieu of the Windows registry that most people are familiar with.  These can be ASCII, XML or binary; ASCII is pretty rare these days for plist files.  Macs often don’t use the standard Unix epoch time and instead count from Jan 1, 2001.  Michael is releasing information on the plist format--right now there is not a lot of documentation on it.  Plists are more or less equivalent to the Windows registry.

Two ways to analyze a plist: plutil.pl and PlistEdit Pro.
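
On a Mac itself, the built-in plutil utility is the quickest way to get a binary plist into readable form; a quick sketch using one of the example files listed further down:

plutil -p ~/Library/Preferences/com.apple.Safari.plist                   # pretty-print the plist contents
plutil -convert xml1 -o - ~/Library/Preferences/com.apple.Safari.plist   # or convert binary to XML on stdout, leaving the original file untouched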

Dmg files.  Disk images, similar to ISO or zip files.  A dmg file is pretty much essential to using a Mac.  We can keep an eye out for previously used dmg files to know what has been installed or created…

SQLite databases.  A lightweight SQL database that is heavily used by Firefox, the iPhone, and Mac apps.  It is really common on Macs.
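
For example, Firefox keeps its browsing history in a SQLite file called places.sqlite, which the stock sqlite3 client can query directly; a sketch (Firefox stores last_visit_date as microseconds since the Unix epoch):

# Dump the 20 most recently visited URLs from a copied Firefox profile
sqlite3 places.sqlite \
  "SELECT datetime(last_visit_date/1000000,'unixepoch') AS visited, url
   FROM moz_places WHERE last_visit_date IS NOT NULL
   ORDER BY last_visit_date DESC LIMIT 20;"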

Email.  Email forensics will usually come in three flavors: MS Entourage (Outlook), Mail.app, and Mozilla Thunderbird.  A good tool for this is Emailchemy, which is forensically sound and takes in all the formats.

Useful plist File Examples to look at for more info

  • Installed Applications: ~/Library/Preferences/com.apple.finder.plist
  • CD/DVD Burning: ~/Library/Preferences/com.apple.DiskUtility.plist
  • Recently Accessed Documents, Servers, and Applications: ~/Library/Preferences/com.recentitems.plist
  • Safari History: ~/Library/Preferences/com.apple.Safari.plist
  • Safari Cache: ~/Library/Preferences/com.apple.Safari/cache.db
  • Firefox: didn’t get this one

Forensic Software that you can use

  • AccessData FTK3
  • Mac Forensics Lab
  • Sleuth Kit (great timeline)
  • Others exist

In conclusion, Mac OS X investigations are not that scary.  Be prepared with hard drive removal guides and know how to extract the data off the drives.  The best forensic imaging tool should be chosen based on hardware speed (and FireWire support), write-blocking capabilities, and the ability to use dual cores.  You need to know that your tools handle HFS+, Birth Times, plist files, dmg files, and SQLite databases.

An audience member asked about a hard drive copying tool.  Michael recommends Tableau (sp?).

Here are some resources:

  • Apple Examiner – appleexaminer.com
  • Mac Forensics Lab Tips – macforensicslab.com
  • Access Data – accessdata.com
  • Emailchemy – weirdkid.com/products/emailchemy

File Juicer.  Extracts info from the databases browsers use for their caches.  Favicons are a good browser history tool…  You can point File Juicer at a SQLite database or .dmg files.

Also, a conversation with Michael afterward ended with two book recommendations: SysInternals for Mac OSX and Mac OSX Forensics (unsure of the title but it includes a DVD).

All in all, a really interesting talk and I look forward to seeing what else Mike produces in this arena.


Filed under Conferences, Security

Pen Testing, DNSSEC, Enterprise Security Assessments – TRISC Day 1 Summary

Yesterday’s TRISC event had some great talks. The morning talks were good, higher-level keynotes that, to be honest, I didn’t take good notes on. The talk on legal implications for the IT industry was really interesting. I was able to talk with Dr. Gavin Manes (a fellow Oklahoman) about the legal implications of cloud computing and shared compute resources. In the old days, a lawyer was able to get physical access to the box and use it as evidence, but it sounds like with the growth of SaaS the courts no longer expect to have physical access to the box. The law seems to be 5 to 10 years behind on this, and it could backfire on us.

The three classes I attended in the afternoon are added below. Some of the notes are only partially complete, so take them for what they are: notes. Interspersed with the notes are my comments, but unlike my astute colleague Ernest, I didn’t delineate my comments with italics. So, a pre-apology to any speakers if they feel like I am putting words in their mouths that they didn’t say. If there are any incorrect statements, please feel free to leave a comment and I will get them fixed up, but hopefully I captured the sessions in spirit.

Breaking down the Enterprise Security Assessment by Michael Farnum

Michael Farnum did a great job with this session. If you want to follow him on Twitter, his ID is @m1a1vet, and he blogs over at infosecplace.com/blog.

External assessments are crucial for compliance and really just for actual security. We can’t be all about compliance only. One of the main premises of the talk is to avoid assumptions. Ways to do that, by category, are below.

In Information Gathering check for nodes even if you think they don’t exist:

  • Web Servers. Everything has a web server nowadays. Router, check. Switch, check. Fridge, check. (A quick sweep sketch follows this list.)
  • Web Applications and URLs
  • Web app with static content (could be vulnerable even if you have a dummy http server). Might have apps installed that you didn’t even know about (mod_php)
  • Other infrastructure nodes. Sometimes we assume what we have in the infrastructure… Don’t do that
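
Here is the quick sweep sketch referenced in the first bullet--one way to avoid assuming you know where all the web servers are (the address range is an example):

nmap -sV --open -p 80,443,8000,8080,8443 192.168.1.0/24   # find everything answering on common web ports and identify what is serving them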

In addition to regular testing, we need to remember wireless and how it is configured. Most companies have an open wireless network that goes straight to the internet. The question that needs to be addressed in an assessment is: is it really segmented? For this reason we need to make sure that wireless has an IDS tied to it.

The basic steps of any assessment are identification and penetration. We don’t always need to penetrate if we know what we are doing, but we do need to make sure that we identify properly.  There is no use in penetrating if you can already show that the wireless node allows WEP or your shopping cart allows non-HTTPS.

Culture issues are also something that we need to watch out for. Discussing security assessments with Windows and Linux people generally ends with agreeable and disagreeable dialogs respectively when talking with contractors and vendors.

Doing Network Activity Analysis

  • Threat > malicious traffic – actually know what the traffic is
  • Traffic > policy compliance – don’t assume that the tools keep you safe

Applications

  • Big security assumptions: apps that aren’t internally secured and too much reliance on firewalls. CSRF, XSS and DNS Rebinding work w/o firewalls stopping them.
  • Browsers need to be in scope

What is up with security guys trying to scare people with social engineering? Michael asks why bother doing social engineering if you don’t have a security training and awareness program--he guarantees you will fail. Spend the money elsewhere.

The gap analysis of physical security includes video and lighting. The operations team will probably hate you for it, though--you are getting into “their” area… Be careful when testing physical security (guards, cameras, fences) without involving the physical ops team.

Reviews and interviews need to happen with developers, the architecture team, security coverage, and compliance. At the end of an assessment, you need to do remediation, transfer knowledge with workshops, presentations, and documentation, and schedule verification testing to make sure things are fixed. While it makes more money to do a point-in-time evaluation without follow-up (because you can do the same review next year and say, “yep, it’s still broken and you didn’t fix it”), it is better to get your customers actually secure and verify that they take the next steps.

Actual Security versus Compliance Security.

DNSSEC: What you don’t know will hurt you by Dean Bushmiller

This talk was very interesting to me because of my interest in DNS and DNS rebinding.  Dean passed out notes on this, so my notes are a little light; however, I will see if I can post his slides here. But here are my notes for further research.

Read the following RFCs: DNS 1034, 1035 and DNSSEC 4033, 4034, 4035.

One of the big takeaways is that DNSSEC is meant to solve the integrity issues with DNS and does not solve confidentiality at all. It just verifies integrity.

All top-level domains are signed now, so when reading DNSSEC material online, ignore the island talk. A good site to check out is root-DNSSEC.org.

DNSSEC works by implementing PKI. One of the problems people will face is key expiration--screw that up and your site will be unavailable. The default is a 30-day expiration period.
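
You can keep an eye on those expiration dates yourself with dig; a quick sketch (the domain is an example):

dig +dnssec +multi example.com SOA   # the RRSIG records in the answer carry signature expiration and inception timestamps
dig +short example.com DNSKEY        # show the zone's published public keys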

DNSSEC has nonexistent-domain protection. Subdomains are chained together in a loop and there is no way for a bad guy to add a subdomain in the middle. This does dump all of your subdomains… all of them. All domains can be enumerated, which could make it easier for a malicious user to look at your subdomains. They can already do this now, but this should prevent injection of bad subdomains into your domain.

An Introduction to Real Pen Testing: What you don’t learn at DefCon by Chip Meadows

What is a Penetration Test?

  • Authorized test of the target (web app, network, system)
  • Testing is the attempt to exploit vulnerabilities
  • Not a scan, but a test
  • Scanners like Saint and Nessus are part of a test but they are not the test, they are just a scan

Why Pen Test?

  • Gain knowledge of the true security posture of the firm
  • Satisfy regulatory requirements
  • Compare past and present

PCI is not the silver bullet. It doesn’t really keep us secure.

Chip had a lot of other points that echoed Michael Farnum’s earlier talk, so they have been redacted from here, but he did mention the following tools and link that are also great for security folks to check out.

Testing Tools

  • fping -ag <ip address range> – feed the live hosts into a scanner (a sketch follows the link below)
  • Hydra
  • MSSQL bruteforcer
  • ike-scan
  • nikto
  • burpsuite
  • DirBuster
  • metasploit
  • firewalk

http://vulnerabilityassessment.co.uk/Penetration%20Test.html
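
And here is the sketch promised above for the first item in the tools list--sweeping a range with fping and handing the live hosts to a scanner (the address range is an example):

fping -a -g 10.10.0.0/24 2>/dev/null > alive-hosts.txt   # -a reports only alive hosts, -g generates targets from the range
nmap -sV -iL alive-hosts.txt -oA discovery-scan          # feed the live hosts into nmap for service and version detection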

Wrap up

The talk I was most interested in was the DNSSEC talk, but the most useful talks for most people were the security assessment and pen testing talks.  I have been thinking about writing a talk on Agile Security and how to integrate security with Agile development methods.  Look for that in the near future.

One other note: I am testing my new setup made just for conferences. Well, I can use it for other things too, but I always worry about ‘open’ networks at hotels, especially at security conferences. What I have done is set up dd-wrt on my home router with OpenVPN running on it as well.  From my laptop (Mac Pro) I run Tunnelblick and get a VPN connection back home.  This is cool because if someone is watching the traffic they will just see an encrypted stream from my laptop.  That way, I don’t have to worry about whether or not the hotel has WPA or just a plain open connection--all my traffic is encrypted at that point.  OpenVPN was a little difficult to get set up and I found a lot of conflicting documentation; let me know and maybe I can piece together some instructions for the blog.
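
For anyone who wants to try the same thing before I write up proper instructions, the client side boils down to a small config file pointed at the router; a rough sketch where the hostname, port, and certificate file names are all placeholders you would swap for your own:

cat > home-vpn.ovpn <<'EOF'
client
dev tun
proto udp
remote my-home-router.example.com 1194
ca ca.crt
cert laptop.crt
key laptop.key
remote-cert-tls server
verb 3
EOF
sudo openvpn --config home-vpn.ovpn   # Tunnelblick reads the same config format if you prefer the GUI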


Filed under Conferences, Security

TRISC 2010 – Texas Regional Infrastructure Security Conference

TRISC starts today. Only one of the Agile Admins is up in Dallas, but there are some pretty good speakers lined up today with some really interesting talks.

I am looking forward to talks on DNSSEC, Pen Testing, and a talk from Robert Hansen.

Stay tuned for more TRISC coverage and in the interim, feel free to follow the coverage on my twitter account.


Filed under Conferences, General, Security

Austin Cloud Camp Wrap-up

Austin recently had a CloudCamp and my guess is that it drew in close to 100 attendees.

Before I get into the actual event, let me start this post with a brief story.

During the networking time, I committed one of the worst faux pas that one can make when networking: I tried a lame joke upon meeting someone new. One of the other attendees asked me why my company was interested in CloudCamp. I sarcastically replied to his inquiry by explaining that we were really excited about CloudCamp because we do a lot of work with weather instrumentation. Anything to do with clouds, we are so there… Silence.

Blink.

Another blink…. Fail.

At this point I explain that I am an idiot who makes sarcastic jokes that fail all the time, and I duck out to a different conversation. So, forgetting about my awkward sense of humor, let’s move on. Learn from me: don’t make weather jokes at a CloudCamp.

Notes from CloudCamp Austin

At any event, one of the best things that can happen is meeting people in your field. I was able to meet some cool guys in Austin from ServiceMesh and Pervasive. There are also early plans to start an AWS User Group in Austin, which would be really awesome. Ping me if you want the scoop and I will let you know as I find out anything about it.

The talk I attended was led by the agile admin’s very own Ernest Mueller. My notes from it are below.

Systems Management in the Cloud

One of the discussion points was how people are implementing dynamic scaling and what infrastructure they are wrapping around it.

Tools people are using in the cloud to achieve dynamic scaling in Amazon Web Services (AWS):
– OSSEC for change control and security
– Ganglia for reporting
– Collectd for monitoring
– Cron tasks for other reporting and metric gathering (a trivial sketch follows this list)
– Pentaho and Jasper for metrics
– RESTful interface for the managed services layer. Reporting also gets done via RESTful services.
– Quartz scheduler to do scaling based on the metrics collectd gathers.
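
The cron-based metric gathering in the list above is about as simple as it sounds; a trivial sketch, where push-metrics.sh stands in for whatever script posts your numbers to the reporting layer:

# crontab entry: run a hypothetical metrics push every five minutes and keep a local log
*/5 * * * * /usr/local/bin/push-metrics.sh >> /var/log/push-metrics.log 2>&1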

When monitoring, we have to start by understanding the perspective of the customers and then try to wrap monitors around that. Are we focused on the user or the provider? Infrastructure monitoring or application monitoring? The creator of the application that is deployed to the cloud, along with the environment, can provide hooks for the monitoring platform--which means that developers need to be looking at the ops horizon early in the development phase.

This is a summary of what I saw at CloudCamp Austin, but I would love to hear what other sessions people went to and what the big takeaways were for them.


Filed under Cloud, DevOps