Monthly Archives: July 2010

Austin Cloud Computing Users Group!

The first meeting of the Austin Cloud Computing Users Group just happened this Tuesday, and it was a good time!  This new effort is kindly hosted by Pervasive Software.  We had folks from all over attend – Pervasive (of course), NI (4 of us went), Dell, ServiceMesh, BazaarVoice, Redmonk, and Zenoss just to name a few.  There are a lot of heavy hitters here in Austin because our town is so lovely!

We basically just introduced ourselves (there were like 50 people there so that took a while) and talked about organization and what we wanted to do.

The next meeting is planned already; it will be at Pervasive from 6:00 to 8:00 PM on Tuesday, August 24.  Michael Coté of Redmonk will be speaking on cloud computing trends.  Meeting format will be a presentation followed by lightning talks and self-forming unconference sessions.  Companies will be buying food and drink for the group in return for a 5 minute “pimp yourself” slot.  Mmm, free dinner.

There is a Google group/mailing list you can join – austin-cug@googlegroups.com.  There’s already some good discussion underway, so join in, and come to the next meeting!

Filed under Cloud, DevOps

Give Me An API Or Give Me Death

Catchy phrase courtesy #meatcloud…   But it’s very true.  I am continuously surprised by the chasm between the “old generation” of software that jealously demands its priests stay inside the temple, and the “new generation” that lets you do things via API easily.  As we’ve been building up a new highly dynamic cloud-based system, we’ve been forced to strongly evaluate our toolset and toss out products with strong “functionality” that can’t be managed well in an automated infrastructure.

Let me say this.  If your product requires either a) manual GUI operations or b) a config file alteration and restart, it is not suitable for the new millennium.  That’s just a fact.

We needed an LDAP server to hold our auth information.  It’s been a while since I’ve done that, so of course OpenLDAP immediately came to mind.  So we tried it.  But what happens when you want to dynamically add a new replication slave?  Oh, you edit a bunch of config files and restart.  Well, sure, I’d like my auth system to be offline all the time, but…  So we tried OpenDS.  The most polished thing in the world?  No.  Does it have all of OpenLDAP’s huge pile of weird functionality that I probably won’t use anyway?  No.  But it does have an administration interface that you can issue directives to and have them take hold in realtime.  “Hey dude start replicating with that new box over there OK?”  “Sir, yes sir.”  “Outstanding.”  And since it’s Java, I can deploy it easily to targets in an automated fashion.  And even though the docs aren’t all up to date and sometimes you have to go through their interactive command line interface to do something – once you do it, the interface can be told to spit out the command-line version of that so you can automate it.  Sold!
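
For example, here is roughly what that looks like from a deploy script.  This is a minimal Python sketch; the dsreplication flag names are from memory and may differ by OpenDS version, so check the tool’s --help output before trusting them.

```python
# Sketch: enabling replication to a freshly provisioned OpenDS node from a
# deploy script, using the command line that the interactive tool can emit.
# Flag names are approximate (check `dsreplication enable --help` for your
# version); the point is that it is one scriptable call, with no restart.
import subprocess

def enable_replication(existing_host, new_host, admin_password):
    cmd = [
        "dsreplication", "enable",
        "--host1", existing_host, "--port1", "4444",
        "--host2", new_host, "--port2", "4444",
        "--baseDN", "dc=example,dc=com",
        "--adminUID", "admin", "--adminPassword", admin_password,
        "--trustAll", "--no-prompt",
    ]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    enable_replication("ldap1.example.com", "ldap2.example.com", "secret")
```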

The monitoring world is like this too.  Oh, we need an open source monitoring system?  Like everyone else, Nagios comes first to mind.  But then you try to manage a dynamic environment with it.  Again, their “solution” is to edit config files and restart parts of the system.  I don’t know about you, but my monitoring systems tend to be running a LOT of tests at any given time, and hiccups in that make Baby Jesus (and frequently whoever is on call) cry.  So we start looking at other options.  “Well, you just come here in the UI and click to add!” the sales rep says proudly.  “Click,” goes the phone.  We end up looking at stuff like Zabbix, Zenoss, etc.  In fact, at least for the short term, we are using Cloudkick.  In terms of the depth of monitoring, it supports 1/100 of what most monitoring solutions do.  System stats mostly; there are plugins for LDAP and MySQL but that’s about it, the rest is “here’s where you can plug in your own custom agent plugin…”  But, as my systems come up they get added to their interface automatically, tagged with my custom namespace.  And I’d rather have my systems IN a monitoring system that will give me 10 metrics than OUTSIDE a monitoring system that would give me 1000.
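
To make the contrast concrete, here is a minimal sketch of the kind of self-registration call I want a monitoring tool to support.  The endpoint, payload, and auth header below are hypothetical (this is not Cloudkick’s actual API); the point is that a node announces itself at boot with one HTTP call instead of a config-file edit and a restart.

```python
# Hypothetical self-registration call; the endpoint, payload, and auth header
# are assumptions, not Cloudkick's real API. A node runs this at boot so it
# shows up in monitoring, tagged with our namespace, with no config edits.
import socket
import requests

MONITORING_API = "https://monitoring.example.com/api/nodes"  # hypothetical
API_KEY = "changeme"

def register_self(namespace="myapp-prod"):
    payload = {
        "hostname": socket.getfqdn(),
        "tags": [namespace],
        "checks": ["cpu", "memory", "disk"],
    }
    resp = requests.post(
        MONITORING_API,
        json=payload,
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    register_self()
```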

It’s also about agility.  We are trying to get these products to market way fast.  We don’t have time to become high priests of the “OpenLDAP way of doing things” or the “Nagios way of doing things.”  We want something that works upon install, that you can make a call to (ideally REST-based, though command line is acceptable in a pinch, and if there’s an iPhone app for it you get extra credit) in order to tell it what to do.  Each of these items is about 1/100 of everything that needs to go into a full working system, and so if I have to spend more than a week to get you working and integrate with you – it’s a dealbreaker.  You got away with that back when there weren’t other choices, but now in just about every sector there’s someone who’s figured out that ease of access and REST API for integration plus basic functionality is as valuable as loads of “function points” plus being hellishly crufty.

Heck, we ended up developing our own cloud management stuff because when we looked at the RightScales and whatnot of the world, they did a great job of managing the cloud providers’ direct APIs for you but didn’t then offer an API in return…  And that was a dealbreaker.  You can’t automate end to end if you come smacking up against a GUI.  (Since then, RightScale has put out their own API in beta.  Good work guys!)

More and more, people are seeing that they need and want the “API way.”  If you don’t provide that, then you are effectively obsolete.  If I can’t roll up a new system – either with your software or something your software needs to be looking at/managing – and have it join in with the overall system with a couple simple API commands, you’re doing it wrong.

Filed under Cloud, DevOps

Advanced Persistent Threat, what is it and what can you do about it – TRISC 2010

Talk by James Ryan.

An Advanced Persistent Threat is basically a massively coordinated long-term hack attack, often carried out by nation states or other “very large” organizations, like a business looking for intellectual property and information.  They try to avoid getting caught because they have invested capital in the break-in and don’t want to have to break in again.  APTs are often characterized by slow access to data.  They avoid doing things rapidly to avoid detection.

Targets.  There is a question about targets and who is being targeted.  Anything that is critical infrastructure is targeted.  James Ryan says that we are losing the battle.  We are now fighting (as a nation) nation states with an organized crime type of feel.  We haven’t really found religion to make security happen.  We still treat security as a way to stop rogue 17-year-old hackers.

The most prevalent way to carry out an APT attack is spear phishing with malware.  The attacker at this point is looking for credentials (key loggers, fake websites, …).  Then comes the damage: data exfiltration, data tampering, shutting down capabilities.  One other way to avoid getting caught is to have an APT operative get hired into the company.

APTs use zero-day threats and sit on them.  They use them to stay on the network.

We should assume that the APT is always going to be on our network and that they are going to get in regularly.  We can reduce the risk from APTs by doing the following.

  • Implement PKI on smartcards, enterprise wide (PKI is mathematically proven to be secure for the next 20 years)
  • Hardware based PKI, not software
  • Implement network authentication and enterprise single sign on eSSO with PKI
  • Remote access tied to PKI keycard/smartcard
  • Implement Security Information and Event Management (SIEM), correlate accounts, and trigger on multiple simultaneous sessions for the same user (see the sketch after this list).  Also tie this in with physical access control.
  • Implement PKI with privileged users as well (admins, power users)
  • Decrease access per person and evaluate and change
  • Tag email that comes from external sources (to help avoid spear phishing)
  • Training and testing using spear phishing in the organization
  • Implement USB control to stop external USB
  • Background checks and procedures
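
To illustrate the simultaneous-session bullet above, here is a minimal Python sketch of the correlation idea.  The event fields are assumptions, and in a real SIEM this would be expressed as a correlation rule rather than a script.

```python
# Sketch of the correlation rule described above: flag an account that has
# sessions open from more than one source IP at the same time. Event field
# names are assumptions about your log format.
from collections import defaultdict

def find_simultaneous_sessions(events):
    """events: iterable of dicts like
    {"user": "jdoe", "src_ip": "10.0.0.5", "action": "login" or "logout"}"""
    open_sessions = defaultdict(set)  # user -> set of source IPs in use
    alerts = []
    for ev in events:
        if ev["action"] == "login":
            open_sessions[ev["user"]].add(ev["src_ip"])
            if len(open_sessions[ev["user"]]) > 1:
                alerts.append((ev["user"], sorted(open_sessions[ev["user"]])))
        elif ev["action"] == "logout":
            open_sessions[ev["user"]].discard(ev["src_ip"])
    return alerts

if __name__ == "__main__":
    sample = [
        {"user": "jdoe", "src_ip": "10.0.0.5", "action": "login"},
        {"user": "jdoe", "src_ip": "192.168.1.9", "action": "login"},
    ]
    print(find_simultaneous_sessions(sample))
```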

James Ryan spent time talking about PKI and the necessity of using it.  I agree that we need better user management, and if you operate on the assumption that Advanced Persistent Threat operators try to go undetected for a long time and also try to get valid user credentials, then it matters even more.  What we need to do is control users and access.  This is our biggest vector.

Takeaways:

  • APT is real and dangerous
  • Assume network is owned already
  • Communicate in terms of business continuity
  • PKI should be part of the plan
  • Use proven methods for executing your strategy

Filed under Conferences, Security

Understanding and Preventing Computer Espionage – TRISC 2010

Talk given at TRISC 2010 by Kai Axford from Accretive

Kai has delivered over 300 presentations and he is a manager at an IT solutions company.  Background at Microsoft in security.

Kai starts by talking about noteworthy espionage events:

  • Anna Chapman.  The Russian spy that recently got arrested.  Pulls up her Facebook and LinkedIn pages.  Later in the talk he goes into the ad hoc wireless network she set up to transfer files to other intelligence agents.
  • Gary Min (aka Yonggang Min) was a researcher at DuPont.  He accessed over 22,000 abstracts and 16,706 documents from the library at DuPont.  He downloaded 15x more documents than anyone else.  Gary was printing the documents instead of transferring them to USB.  The risk to DuPont was $400,000,000.  He got a $30,000 fine.
  • Jerome Kerviel was a trader who had worked in compliance before he started abusing the company; he used that insider knowledge of the controls to abuse trading.

Cyberespionage is a priority in China’s five-year plan: acquire intellectual property and technology for China.  R&D is at risk from tons of international exposure.  The Washington Post released a map of all top secret government agencies and private contractors in the US.  http://projects.washingtonpost.com/top-secret-america/map/

The Microsoft 0-day being used against SCADA systems is another example.  SCADA is a very dangerous area for us.  2600 did a recent piece about SCADA.

Let’s step back and take a look at the threat.  Insiders.  They are the ones that will take our data and information.  The insider is the greater risk.  We are all worried about the 17-year-old kid in Finland, but it is really insiders.

There is a question of, if you gave your employees access to something, are they ‘breaking in’ when they access a bunch of data and take it home with them?

Types of users:

  • Elevated users who have been with the company for a long time
  • Janitors and cleaning crew
  • Insider affiliate (spouse, girlfriend)
  • Outside affiliate

Why do people do this?

  • Out of work intelligence operators world wide
  • Risk and reward is very much out of skew.  Penalties are light.
  • Motivators: MICE (Money, Ideology, Coercion, Ego)

For everyone who does espionage, there is a trigger: something that makes it happen.  Carnegie Mellon did some research finding that everyone who was stealing data had someone else who knew it was happening.

Tools he mentioned and I don’t know where else to mention them:

  • Maltego
  • USB U3 tools.  Switchblade downloads docs when the drive is plugged in.  Hacksaw sets up SMTP and stunnel to send all the docs outbound from the computer.
  • Steganography tools >  S-Tools.  This is what Anna was doing by putting info in images.
  • Cell phone bridge between laptop and network.
  • Tor

Mitigate using:

  • Defense in depth
  • Background checks, credit checks
  • Gates, guards, guns
  • Shredding and burning of docs
  • Clean desk policy
  • Locks, cameras
  • Network device blocking
  • Encryption devices
  • Application security
  • Enterprise Rights Management
  • Data classification

Filed under Conferences, Security

Mac OSX Forensics (and security) – TRISC 2010

This talk was presented by Michael Harvey from Avansic.

This has little to do with my day job, but I am a big fan of the Mac and have really enjoyed using it for the last several years both personally and professionally.  Security tools are also really great on the Apple platform, which I install using MacPorts: nmap, wireshark, fping, metasploit…  Enough about me, on to the talk.

There is a lot of objection to doing forensics on Macs and whether it is really needed, but in reality the Mac is about 10% of the compute base, and a lot of higher-level officers in a company are using Macs because they can do whatever they want and aren’t subject to IT restrictions.

Collection of data is the most important part.  On a Mac, just pulling the hard drive can be difficult.  It might be useful to pre-download the PDFs on how to do this.  You want to use a FireWire write-blocker to copy the drive.  Live CDs (Helix and the Raptor LiveCD) let you copy the data and write-block.  Michael really likes Raptor because of its support for both legacy and Intel-based Macs.  In a follow-up conversation with him he emphasized how Raptor is great for people that don’t do forensics all the time.

Forensics cares about Modified, Accessed, Created time stamps.  Macs add a time stamp called “Birth Time.”  This is the real created date.  Look at the file properties.  You can use SleuthKit (an open source forensics tool) to assemble a timeline with M-A-C and Birth Time.
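
For reference, here is a quick Python sketch showing where Birth Time shows up alongside the usual M-A-C stamps on OS X; os.stat exposes it as st_birthtime on BSD-derived platforms.

```python
# Pull all four timestamps for a file on Mac OS X with os.stat.
# st_birthtime (the "Birth Time") is only present on BSD-derived platforms
# such as OS X, hence the getattr fallback.
import os
from datetime import datetime

def mac_timestamps(path):
    st = os.stat(path)
    stamps = {
        "modified": st.st_mtime,
        "accessed": st.st_atime,
        "changed": st.st_ctime,  # inode change time, not "created"
        "birth": getattr(st, "st_birthtime", None),  # true creation time
    }
    return {name: datetime.fromtimestamp(value) if value else None
            for name, value in stamps.items()}

if __name__ == "__main__":
    for name, ts in mac_timestamps("/etc/hosts").items():
        print(name, ts)
```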

Macs use .plist (“Property List”) files in lieu of the Windows registry that most people are familiar with.  These can be ASCII, XML, or binary; ASCII is pretty rare these days.  Mac apps often don’t use the standard Unix epoch and instead use Jan 1, 2001 as the epoch.  Michael is releasing information on the plist format; right now there is not a lot of documentation on it.

Two ways to analyze plist: plutil.pl and Plist Edit Pro.
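
If you would rather script it, Python’s built-in plistlib reads both XML and binary plists.  Here is a small sketch, including a helper for the Jan 1, 2001 “Mac absolute time” epoch mentioned above; the sample path is just an example, and note that plistlib already returns datetime objects for proper date fields, so the conversion only matters for keys stored as raw floats.

```python
# Read a plist (XML or binary) with plistlib and convert Apple's "absolute
# time" (seconds since Jan 1, 2001) to a normal datetime. The plist path is
# just an example.
import plistlib
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def mac_absolute_to_datetime(seconds):
    return MAC_EPOCH + timedelta(seconds=seconds)

def dump_plist(path):
    with open(path, "rb") as f:
        return plistlib.load(f)  # handles both XML and binary plists

if __name__ == "__main__":
    print(mac_absolute_to_datetime(0))  # 2001-01-01 00:00:00+00:00
    prefs = dump_plist("/Library/Preferences/com.apple.loginwindow.plist")
    for key, value in prefs.items():
        print(key, value)
```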

Dmg files.  These are disk images, similar to iso or zip files; pretty much everything you install on a Mac comes in one.  We can keep an eye out for previously used dmg files to know what has been installed or created…

SQLite databases.  A lightweight SQL database that is heavily used by Firefox, the iPhone, and Mac apps.  It is really common on Macs.
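
As an example of what that looks like in practice, here is a short sketch that pulls history out of a Firefox places.sqlite with Python’s built-in sqlite3 module.  The moz_places schema is Firefox’s; other apps have their own tables, but the approach is the same.

```python
# Pull browsing history from a Firefox places.sqlite using the standard
# library's sqlite3 module. last_visit_date is stored as microseconds since
# the Unix epoch in the moz_places table.
import sqlite3
from datetime import datetime, timezone

def firefox_history(places_db):
    conn = sqlite3.connect(places_db)
    try:
        rows = conn.execute(
            "SELECT url, title, last_visit_date FROM moz_places "
            "WHERE last_visit_date IS NOT NULL ORDER BY last_visit_date DESC"
        )
        for url, title, visited_us in rows:
            visited = datetime.fromtimestamp(visited_us / 1_000_000, timezone.utc)
            yield visited, title, url
    finally:
        conn.close()

if __name__ == "__main__":
    for visited, title, url in firefox_history("places.sqlite"):
        print(visited, title, url)
```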

Email.  Email forensics will usually come in three flavors: MS Entourage (Outlook), Mail.app, and Mozilla Thunderbird.  A good tool for this is Emailchemy, which is forensically sound and takes in all the formats.

Useful plist File Examples to look at for more info

  • Installed Applications: ~/Library/Preferences/com.apple.finder.plist
  • CD/DVD Burning: ~/Library/Preferences/com.apple.DiskUtility.plist
  • Recently Accessed Documents, Servers, and Applications: ~/Library/Preferences/com.recentitems.plist
  • Safari History: ~/Library/Preferences/com.apple.Safari.plist
  • Safari Cache: ~/Library/Preferences/com.apple.Safari/cache.db
  • Firefox: didn’t get this one

Forensic Software that you can use

  • AccessData FTK3
  • Mac Forensics Lab
  • Sleuth Kit (great timeline)
  • Others exist

In conclusion, Mac OSX investigations are not that scary.  Be prepared with hard drive removal guides and know how to extract data off of them.  The best forensic imaging tool should be chosen by hardware speed (and FireWire), write-blocking capabilities, and the ability to use dual-core.  You need to know that your tools handle HFS+, Birth Times, plist files, dmg files, and SQLite databases.

An audience member asked about a hard drive copying tool.  Michael recommends Tableau (sp?).

Here are some resources:

  • Apple Examiner – appleexaminer.com
  • Mac Forensics Lab Tips – macforensicslab.com
  • Access Data – accessdata.com
  • Emailchemy – weirdkid.com/products/emailchemy

File Juicer.  Extracts info from databases used by browsers for cache.  Favicons are a good browser history tool…  You can point File Juicer at a SQLite Database or .dmg files.

Also, talking with Michael afterward ended with two book recommendations: SysInternals for Mac OSX and Mac OSX Forensics (unsure of title but it includes a DVD).
All in all, a really interesting talk and I look forward to seeing what else Mike produces in this arena.

Filed under Conferences, Security

Pen Testing, DNSSEC, Enterprise Security Assessments – TRISC Day 1 Summary

Yesterday’s TRISC event had some great talks. The morning talks were good and were higher-level keynotes that, to be honest, I didn’t take good notes on. The talk on legal implications for the IT industry was really interesting. I was able to talk with Dr. Gavin Manes (a fellow Oklahoman) about the legal implications of cloud computing and shared compute resources. In the old days, a lawyer was able to get physical access to the box and use it as evidence, but it sounds like with the growth of SaaS the courts no longer expect to have physical access to the box. The law seems to be 5 to 10 years behind on this, though, and it could backfire on us.

The three classes I attended in the afternoon are added below. Some of the notes are only partially complete, so take it for what it is: notes. Interspersed with the notes are my comments, but unlike my astute colleague Ernest, I didn’t delineate my comments with italics. So, pre-apology to any speakers if they feel like I am putting words there that they didn’t say. If there are any incorrect statements, please feel free to leave a comment and I will get it fixed up, but hopefully I captured the sessions in spirit.

Breaking down the Enterprise Security Assessment by Michael Farnum

Michael Farnum did a great job with this session. If you want to follow him on twitter, his id is @m1a1vet and he blogs over at infosecplace.com/blog.

External Assessments are crucial for compliance and really for just actual security. We can’t be all about compliance only. One of the main premises of the talk is to avoid assumptions. Ways to do that in the following categories are below.

In Information Gathering, check for nodes even if you think they don’t exist (a quick discovery sketch follows the list):

  • Web Servers. Everything has a web server nowadays. Router, check. Switch, check. Fridge, check.
  • Web Applications and URLs
  • Web apps with static content (could be vulnerable even if you have a dummy http server). Might have apps installed that you didn’t even know about (mod_php)
  • Other infrastructure nodes. Sometimes we assume what we have in the infrastructure… Don’t do that
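
As a quick illustration of the “everything has a web server” point, here is a small discovery sketch that sweeps a range for open web ports using nmap’s grepable output.  It assumes nmap is installed, and the port list is just a starting point.

```python
# Sweep a range for listening web ports during information gathering, using
# nmap's grepable (-oG) output. Routers, switches, and printers often hide
# admin UIs on other ports too, so extend the list as needed.
import subprocess

def find_web_servers(cidr="192.168.1.0/24", ports="80,443,8080,8443"):
    out = subprocess.check_output(
        ["nmap", "-p", ports, "--open", "-oG", "-", cidr],
        text=True,
    )
    hosts = []
    for line in out.splitlines():
        if "Ports:" in line:
            hosts.append(line.split()[1])  # second field is the IP address
    return hosts

if __name__ == "__main__":
    print(find_web_servers())
```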

In addition to regular testing, we need to remember wireless and how it is configured. Most companies have an open wireless network that goes just to the internet. The question that needs to be addressed in an assessment is: is it really segmented? For this reason we need to make sure that wireless has an IDS tied to it.

Basic steps of any assessment are identification and penetration. We don’t always need to penetrate if we have the knowledge of what we are doing, but we do need to make sure that we identify properly.  No use in penetrating if you can show that the wireless node allows WEP or your shopping cart allows non-https.

Culture issues are also something that we need to watch out for. Discussing security assessments with Windows and Linux people generally produces agreeable and disagreeable dialogs, respectively, when talking with contractors and vendors.

Doing Network Activity Analysis

  • Threat > malicious traffic – Actually know what the traffic is
  • Traffic > policy compliance – don’t assume that the tools keep you safe

Applications

  • Big security assumptions. Not internally secured apps. Too much reliance on firewalls. CSRF, XSS and DNS Rebinding work w/o firewalls stopping them.
  • Browsers need to be in scope

What is up with security guys trying to scare people with social engineering? Michael says why bother doing social engineering if you don’t have a security training and awareness program. He guarantees you will fail. Spend the money elsewhere.

The Gap Analysis of physical security includes video and lighting. The operations team will probably hate you for it though. Getting into “their” area… Be careful when testing physical security (guards, cameras, fences) w/o involving physical ops team.

Reviews and interviews need to happen with developers, the architecture team, security coverage, and compliance. At the end of an assessment, you need to do remediation, transfer knowledge with workshops, presentations, and documentation, and schedule verification testing to make sure it is fixed. While it makes more money to do a point-in-time evaluation without follow-up (because you can do the same review next year and say, “yep, it’s still broken and you didn’t fix it”), it is better to get your customers actually secure and verify that they take the next steps.

Actual Security versus Compliance Security.

DNSSEC: What you don’t know will hurt you by Dean Bushmiller

This talk was very interesting to me because of my interest in DNS and DNS Rebinding. Dean passed out notes on this, so my notes are a little light; however, I will see if I can post his slides here. But here are my notes for further research.

Read the following RFCs: DNS 1034, 1035 and DNSSEC 4033, 4034, 4035.

One of the big takeaways is that DNSSEC is meant to solve the integrity issues with DNS and does not solve confidentiality at all. It just verifies integrity.

All top-level domains are signed now, so when reading DNSSEC material online, ignore the island talk. A good site to check out is root-DNSSEC.org.

DNSSEC works by implementing PKI. One of the problems that people will face is key expiration. Screw that up and your site will be unavailable. Default is 30-day expiration period.
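
Since expired signatures are the classic self-inflicted outage, here is a small sketch that shells out to dig +dnssec and pulls the RRSIG expiration dates.  The field offsets assume dig’s standard answer formatting, so treat it as a starting point rather than production monitoring.

```python
# Check RRSIG expiration for a zone so an expired signature does not quietly
# take the domain offline. Shells out to dig; field offsets assume dig's
# standard answer formatting.
import subprocess
from datetime import datetime, timezone

def rrsig_expirations(name, rtype="A"):
    out = subprocess.check_output(
        ["dig", "+dnssec", "+noall", "+answer", name, rtype],
        text=True,
    )
    expirations = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) > 8 and fields[3] == "RRSIG":
            # RRSIG rdata: type alg labels origTTL expiration inception ...
            exp = datetime.strptime(fields[8], "%Y%m%d%H%M%S")
            expirations.append(exp.replace(tzinfo=timezone.utc))
    return expirations

if __name__ == "__main__":
    for exp in rrsig_expirations("example.com"):  # placeholder zone
        print("signature expires", exp)
```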

DNSSEC has nonexistent-domain protection (via NSEC records). Names in the zone are chained together in a loop, and there is no way for a bad guy to add a subdomain into the middle of the chain. The tradeoff is that this exposes all of your subdomains: all of them can be enumerated, which could make it easier for a malicious user to look at all your subdomains. They can already do this now by brute force, but this should prevent injection of bad subdomains into your domain.
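
Here is what that enumeration looks like in practice: query a name that does not exist in a signed zone and read the NSEC record’s “next name” out of the authority section.  This is a naive dig-based sketch; zones signed with NSEC3 return hashed names instead.

```python
# Ask for a name that does not exist in a signed zone and read the NSEC
# record from the authority section; its "next name" field is the chain the
# paragraph above describes. Parsing is deliberately naive.
import subprocess

def nsec_next_name(zone, probe="zzz-does-not-exist"):
    out = subprocess.check_output(
        ["dig", "+dnssec", "+noall", "+authority", probe + "." + zone, "A"],
        text=True,
    )
    for line in out.splitlines():
        fields = line.split()
        if len(fields) > 4 and fields[3] == "NSEC":
            # owner ttl IN NSEC next-name <type bitmap ...>
            return fields[0], fields[4]
    return None

if __name__ == "__main__":
    print(nsec_next_name("example.com"))  # placeholder zone
```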

An Introduction to Real Pen Testing: What you don’t learn at DefCon by Chip Meadows

What is a Penetration Test?

  • Authorized test of the target (web app, network, system)
  • Testing is the attempt to exploit vulnerabilities
  • Not a scan, but a test
  • Scanners like Saint and Nessus are part of a test but they are not the test, they are just a scan

Why Pen Test?

  • Gain knowledge of the true security posture of the firm
  • Satisfy regulatory requirements
  • Compare past and present

PCI is not the silver bullet. Doesn’t really keep us secure.

Chip had a lot of other points that echoed Michael Farnum’s earlier talk, and I have omitted them here, but he did mention the following tools and link that are also great for security guys to check out.

Testing Tools

  • fping -ag <ip range> > feed the live hosts into your scanner
  • Hydra
  • MSSQL bruteforcer
  • ike-scan
  • nikto
  • Burp Suite
  • DirBuster
  • metasploit
  • firewalk

http://vulnerabilityassessment.co.uk/Penetration%20Test.html

Wrap up

The talk that I was most interested in was the DNSSEC talk, but the most useful talks for most people are the security assessments and pen testing talks.  I have been thinking about writing a talk on Agile Security and about how to integrate security with Agile development methods.  Look for that in the near future.

One other note: I am testing my new setup made just for conferences. Well, I can use it for other things too, but I always worry about ‘open’ networks at hotels, especially at security conferences. What I have done is set up dd-wrt on my home router with OpenVPN running on it as well.  From my laptop (Mac Pro) I run Tunnelblick and get a VPN connection back home.  This is cool because if someone is watching the traffic they will just see an encrypted stream from my laptop.  That way, I don’t have to worry about whether or not the venue has WPA or just a plain open connection.  All my traffic is encrypted at that point.  OpenVPN was a little difficult to get set up and I found a lot of conflicting documentation, so let me know if you’re interested and maybe I can piece together some instructions for the blog.

Filed under Conferences, Security

TRISC 2010 – Texas Regional Infrastructure Security Conference

TRISC starts today. Only one of the Agile Admins is up in Dallas, but there are some pretty good speakers lined up with some really interesting talks.

I am looking forward to talks on DNSSEC, Pen Testing, and a talk from Robert Hansen.

Stay tuned for more TRISC coverage and in the interim, feel free to follow the coverage on my twitter account.

Filed under Conferences, General, Security