Tag Archives: lascon

LASCON 2013 Report – First Afternoon

We move into the afternoon of LASCON. The vendor room was all abuzz, complete with a lockpicking village.


Stupid Webappsec Tricks

Zane Lackey, Security Engineering Manager at Etsy (@zanelackey)

XSS

Data-driven security – look at your data instead of relying on your presuppositions about how attacks work.

Overwrite common JavaScript methods, but only phone home on interesting payloads.

8477 XSS attempts with mostly alert(), prompt(), confirm() (or multiples thereof). The payloads are mostly what you’d expect, “XSS,” document.cookie, integers (from scanners). Note you can’t match on “document.cookie” because it’ll already be expanded, so look for your domains, unique cookies, etc.
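Here's roughly what that instrumentation looks like – a minimal client-side sketch, not Etsy's actual code; the beacon endpoint and the "interesting payload" patterns are mine:

```typescript
// Minimal sketch of the client-side instrumentation idea (not Etsy's actual code).
// Wrap alert/prompt/confirm and phone home only when the argument looks interesting,
// e.g. it contains one of our domains or cookie names rather than "1" or "XSS".
// The /beacon/xss endpoint and the INTERESTING patterns are assumptions.
const INTERESTING = [/mysite\.example/i, /session_id=/i]; // hypothetical domain/cookie patterns

function instrument(name: "alert" | "prompt" | "confirm"): void {
  const original = (window as any)[name];
  (window as any)[name] = function (...args: unknown[]) {
    const payload = String(args[0] ?? "");
    if (INTERESTING.some((re) => re.test(payload))) {
      // Fire-and-forget report back to the server for alerting.
      navigator.sendBeacon("/beacon/xss", JSON.stringify({
        fn: name,
        payload: payload.slice(0, 512),
        url: location.href,
      }));
    }
    return original.apply(window, args); // keep normal behavior so nothing breaks
  };
}

(["alert", "prompt", "confirm"] as const).forEach(instrument);
```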

What else detects XSS well? Chrome’s XSS Auditor. It works great, but it only defends the user; it doesn’t fix the XSS.

A server-side approach –

  1. Scan input for HTML escapes/tag creation.
  2. If found, set flag to true and create array of hostile input.
  3. At output time, check flag, see if any hostile input is being output as valid HTML.
  4. If hostile input is being output, alert!

You need to fail open – stripping will break your app… And it should only take you 20 minutes to push to production, so detect-to-fix is a short path!
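A sketch of that four-step flow as Express-style middleware – detect, don't block; the tag-detection regex and the alert sink are stand-ins:

```typescript
// Sketch of the detect-don't-block flow above, assuming an Express-style app.
// The HTML_TAGGY heuristic and console.warn alert sink are stand-ins for real ones.
import express from "express";

const HTML_TAGGY = /<\s*[a-z!\/]|javascript:|on\w+\s*=/i; // crude "HTML escape/tag creation" check

const app = express();
app.use(express.urlencoded({ extended: true }));

app.use((req, res, next) => {
  // Steps 1-2: scan input, flag anything that looks like tag creation, keep the hostile values.
  const hostile: string[] = [];
  for (const v of Object.values({ ...req.query, ...req.body })) {
    if (typeof v === "string" && HTML_TAGGY.test(v)) hostile.push(v);
  }

  // Steps 3-4: at output time, alert if hostile input made it into the response verbatim.
  const send = res.send.bind(res);
  res.send = ((body: any) => {
    if (hostile.length && typeof body === "string") {
      for (const h of hostile) {
        if (body.includes(h)) {
          console.warn("possible reflected XSS", { path: req.path, sample: h.slice(0, 200) });
        }
      }
    }
    return send(body); // fail open: never strip, just alert and go fix it
  }) as any;
  next();
});
```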

SQL Injection

These are attack chains that can be instrumented: a detection step, then an exploit step.

Alert on SQL syntax errors showing up in your application today. It’s a bug even if it’s not an exploit.

Watch logs for unique sensitive db table names in requests.  Occasional false positives are OK.

A SQL injection exploit response is often huge, larger than normal; detect that. Whitelist the endpoints that are supposed to give huge responses.

The more alerts you have in an attack chain, the more visibility you have, but false positives happen. If alerts fire in order down the chain, though, it’s probably not a false positive.
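Here's roughly what a couple of those signals look like wired together – a minimal sketch, not the speaker's code; the error strings, table names, and size threshold are placeholders:

```typescript
// Sketch of the SQLi detection signals above: SQL syntax errors leaking into responses,
// sensitive table names showing up in raw requests, and oversized responses. The error
// patterns, table names, and size threshold are placeholders for your own.
const SQL_ERROR = /error in your sql syntax|unclosed quotation mark|pg_query\(\)|ORA-\d{5}/i;
const SENSITIVE_TABLES = /\b(users_billing|password_resets|admin_sessions)\b/i; // hypothetical names

function safeDecode(s: string): string {
  try { return decodeURIComponent(s); } catch { return s; }
}

function sqliSignals(requestLine: string, responseBody: string, responseBytes: number): string[] {
  const signals: string[] = [];
  if (SQL_ERROR.test(responseBody)) signals.push("sql-syntax-error-in-response");
  if (SENSITIVE_TABLES.test(safeDecode(requestLine))) signals.push("table-name-in-request");
  if (responseBytes > 5_000_000) signals.push("oversized-response"); // whitelist known-big endpoints separately
  return signals; // several signals firing in order = probably a real chain, not a false positive
}
```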

“Temporary” debug stuff is permanent. How do you find this automatically? Access logs.

Map access logs to code paths. Endpoints that don’t get requests are anomalous. Alert on them, then go take them out.
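A minimal sketch of that access-log diff (the log path, log format, and route list are assumptions – pull the real list from your router):

```typescript
// Sketch of the "endpoints nobody requests" check: diff the routes the app defines
// against what actually appears in the access logs.
import { readFileSync } from "fs";

const definedRoutes = new Set<string>(["/checkout", "/debug/dump", "/api/orders"]); // from your router

const seen = new Set<string>();
for (const line of readFileSync("/var/log/app/access.log", "utf8").split("\n")) {
  const m = line.match(/"(?:GET|POST|PUT|DELETE) (\S+)/); // combined-log-style request line
  if (m) seen.add(m[1].split("?")[0]);
}

for (const route of definedRoutes) {
  if (!seen.has(route)) {
    console.warn(`${route} got zero requests: dead code or "temporary" debug endpoint?`);
  }
}
```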

Attacker Trix

The cheapest way to find webapp vulns is automation. Your best attackers are doing it manually anyway, but you may as well beat out the kiddies. Break off-the-shelf scanners – they give off strong detection signals: user agents, request patterns, requests for stuff that doesn’t exist (*.asp or .php on a Java site, for example).

Blocking IPs is easy but dangerous. You’ll break lots of legit things. IPs are not a strong correlation to identity.

  1. Classify a request as being from a scanner
  2. If yes, weight based on confidence
  3. Feed request into rate limiter (see Nick G’s rate limiting at scale talk) and drop if above threshold. They return a 439 “Request Not Handmade” 🙂

This doesn’t impact browsing, but it does impact scripting. Set your thresholds high; that allows for false positives, but a scanner will definitely peg them.
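A sketch of the classify/weight/rate-limit flow, 439 and all – the heuristics, weights, and threshold here are illustrative, not Etsy's real ones:

```typescript
// Sketch of classify -> weight -> rate-limit, with the joke 439 status from the talk.
// Scanner heuristics, weights, and the limit are illustrative only.
import express from "express";

const SCANNER_UA = /sqlmap|nikto|acunetix|nessus/i;

function scannerScore(req: express.Request): number {
  let score = 0;
  if (SCANNER_UA.test(req.get("user-agent") ?? "")) score += 5;
  if (/\.(asp|php)$/i.test(req.path)) score += 2; // wrong-stack probes on a non-PHP/ASP site
  return score;
}

const buckets = new Map<string, number>(); // naive in-memory per-client rate limiter
setInterval(() => buckets.clear(), 60_000);

const app = express();
app.use((req, res, next) => {
  const key = req.ip ?? "unknown";
  const weighted = (buckets.get(key) ?? 0) + scannerScore(req);
  buckets.set(key, weighted);
  if (weighted > 200) { // threshold set high: normal browsing never trips it
    return res.status(439).send("Request Not Handmade");
  }
  next();
});
```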

Be ready for the weirdness that is the Internet! They tried auto-banning accounts that do scanning. They saw 437 scanners over the last week; only 10 were authenticated, and 5 of those were false positives (browser plugins is our guess). So don’t auto-ban.

Attacks don’t always happen like you’d expect.  Look at the data before you make decisions. Get the instrumentation you need to make those decisions.

“Run a bug bounty program and the Internet shows up!”

And of course you can then insert false data sets to screw with people and increase the cost of attack.

We don’t run scanners of our own because it’s a time sink and requires manual babysitting. We have taken WAF concepts and built them into the apps; since we deploy 30x/day we don’t need the “coverage in the meanwhile” functionality they provide.

Stalking a City for Fun and Frivolity

By Brendan O’Connor, CTO of Malice Afterthought and law student. About CreepyDOL wifi surveillance. He was wearing a kilt and started out by telling us we’d “lost the mandate of heaven.” Why is this? Well…

Everything leaks too much data. Privacy has been disregarded. Fundamental changes are needed to fix this. We need to democratize security – the government is the worst way to do this.

Especially in the case of the US persecuting legitimate security researchers like Weev for doing things like accessing public information on Web sites.

Wireless. Your devices advertise the networks they know, all for our convenience. His little doodads capture your wifi probe list and GPS location. Now we need a distributed way of doing this on a large scale with no centralized control. Academic sensor networks are kind of like this, but expensive. Hence, the F-BOMB hardware gizmo.

Raspberry Pi based, 5W, $57.08. It uses a connection to municipal wifi to phone home, with automatic portal click-through. Reticle is the leaderless command and control software; it uses Tor to go out.

CreepyDOL is distributed computation for distributed systems. You want to digest on the nodes to minimize network traffic; centralized querying is for centralized questions only. Filters include Nosiness, Observation, and Mining. Visualization uses Unity (the game engine). Oh look, you can see a map mashup of people wandering around, click on them, and find their name and other useful info.

Bottom line is that all these technologies leak info about you like it’s going out of style and it’s pretty simple to get Orwellian levels of visibility on you for one low price.

Gauntlt

I missed this in favor of the next talk; I’ve seen about a dozen gauntlt presentations over time since I know James, but here’s the slides! Integrate security into your CI pipeline you freaks!

Penetration Testing: The Other Stuff

David Hughes, OWASP Austin president and Red Team analyst for GM.

This started as being about organizational skills… It’s general tips on making your life as a pen tester easier.

  • Clients aren’t always right about their environment and scope creep can happen.
  • Don’t assume you’ll have Internet, there’ll be proxies…
  • Prep your tools and do updates and test it ahead of time.
  • Rehearse your toolchain
  • Title your terminals
  • Use mind maps (Freemind), outline tools (NoteCase Pro) to organize tools, systems
  • HTTP-Screenshot module does screenshots as nmap scans
  • Use output options or pipe to a file
  • Reporting – keep organized, do it as you go, use ASCIIdoc to take text to pdf
  • Do things the easy way – look for low-hanging fruit: default credentials, bad passwords, cleartext, social engineering, dumpster diving, open wireless. Easy stuff is higher risk and the client cares more about it than about esoteric crap.
  • Don’t rush recon, look for clues, broken windows
  • Have a plan (PTS framework) but range off as needed
  • Protect your customer’s data
  • Encrypt your stuff
  • Have backups
  • Learn and use a scripting language
  • Don’t rub it in with the client
  • Get involved with the community!

And that’s everything but the drinking… Time for happy hour and the mechanical bull!

Here’s some pictures of the volunteers hard at work, the speakers’ green room (there were chair massages there in the afternoon!), and organizer Josh Sokol with Robert “RSnake” Hansen!


Leave a comment

Filed under Conferences, Security

LASCON 2013 Report – First Morning

Arriving at #LASCON 2013, hosted as usual at the Norris Conference Center, the first thing you see is the vintage video games throughout the lobby! As usual it’s well run and you get your metal badge and other doodads without any folderol; volunteers packed the venue ready to help folks with anything. I got a lovely media badge since I’m on the hook to blog/tweet it up while I’m there! It’s in a nice central location on Anderson Lane so getting there took a lot less time than my normal commute to work did.

The MCs, James Wickett and David Hughes, got us kicked off. Thanks went out to the many LASCON sponsors!

  • White Hat
  • Qualys
  • Gemalto
  • Trustwave/Spider Labs
  • Critical Start
  • Sourcefire
  • SOS Security

Then everyone stood and raised their right hand to say the “LASCON pledge,” which consists of “I will not hack the Wi-fi,” “I will not social engineer other attendees and the nice Norris Conference Center staff who are hosting us,” and similar.

Then, the keynote!

Keynote- Nick Galbreath, The Origins of Insecurity

Nick Galbreath (@ngalbreath), VP of Engineering at Iponweb. He used to work for Etsy; now he works in Tokyo for a Russia-based ad infrastructure company. Suck that, Edward Snowden.

Slides at speakerdeck.com/ngalbreath!

If you’re in security, you should be bringing someone else from dev or ops or something here! We can’t get much done by ourselves.

Crypto

There’s a lot of consternation about crypto and SSL and PKI lately. The math is sound! See Foreign Policy’s “The NSA’s New Code Breakers” – it’s way easier to get access other ways. I don’t know of any examples of brute-forcing SSL keys; it’s attacking data at rest or bypassing the crypto altogether.

But what about the Android/Bitcoin break and alleged fix re: Java SecureRandom PRNG? I can’t find the fix checked in anywhere. Let’s look at SHA1PRNG. Where’s the spec? You’re forced to use it, but where’s the open implementation, the tests…

Basically everything went wrong in specification, implementation, testing, review, postmortem… Then there’s NIST’s Dual-EC-DRBG spec – slow and with a potential backdoor – but at least it’s not required by FIPS! It’s broken but not mandatory, and we know it’s broken, so fair enough. It’s a “standard turd.” Standards aren’t a replacement for common sense. It was known turdy in 2007 – why are you just removing it now? TLS 1.2 was approved in 2008; why don’t all browsers support it, and why do no browsers support GCM mode? Old standards need augmentation and updates.

Fixing the CA system – four great ways: certificate pinning, pruning, HTTP Strict-Transport-Security, and certificate-transparency.org.
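Of those four, HSTS is the one you can ship this afternoon. A minimal sketch, assuming an Express app; the max-age and includeSubDomains values are policy choices:

```typescript
// Minimal HTTP Strict-Transport-Security sketch for an Express app.
// Browsers that have seen this header refuse plain-HTTP connections to the site
// for the next year, which blunts SSL-stripping attacks.
import express from "express";

const app = express();
app.use((req, res, next) => {
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
});
```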

Everything Else

  • Network Security – stuff you didn’t write
  • App Security – stuff you did write
  • Endpoint Security – stuff you run

IT internal tech is mostly Windows/Mac CM and patching, 99% C-based stuff.

Tech Ops – Routers, Linux, Core server (all C too)

Dev:

  • Input validation – not hard
  • Configuration problems
  • Logical problems – more interesting
  • Language platform problems (most patches here also in C!)

Reactive work is patching, CM, fixing apps, patching infrastructure. You can focus your patching, though – keeping Win7 current plus Flash, Adobe, and Java will get 99% of your problems, so focus there – but it’s hard to do. Either you can do it trivially or it’s really hard.

Learn from the hardest apps to deploy. The Chrome model of self-updating gets 97% of people within a version in 4-6 weeks. Android, not so good – driven more by throwing out phones than by any ability to upgrade. They’re chipping stuff away from the OS and making more of it into apps to speed it up. Apple/iOS just figured out app auto-update. Desktop lags, though. WordPress is starting background updates. BSD is automatically installing security updates at first boot.

Releasing faster and safely is a competitive advantage AND makes you more secure.

For desktop upgrades, can’t we do something with containers? Why only one version installed? How can we find out about problems from users faster? How do we make patching and deployment easy for the dumbest users?

Even info on “How do I configure Apache securely” is scattered and random on the Web. Configurations silently break all the time, and Apache is simple compared to firewalls, ssh, VPN, DNS… Rat’s nests full of crap, while it gets easier and easier to put servers on the Internet. How can we make it safe to configure a server and keep it secure?

Can we do this for application development? Ruby’s Brakeman is great; it does static analysis on commit and sends you email about rookie mistakes. Why not for Apache config? (Where did chkconfig go?)

PHP’s crypt() – great for legacy passwords and horrible for new ones. Approximately 0% chance of a dev getting its configuration right.

See @manicode’s best practices – have a business level API for that.

By default, every language ships a non-crypto, insecure PRNG, so people use them. They’re used for some science stuff, but seriously, if you’re doing physics you’re going to link something else in, and being slightly slower for toy apps that don’t care about security isn’t a big deal. Make the default PRNG secure! There are 100x more people interested in making things fast than in making them secure, so make the default language PRNG secure and people will make it faster.
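In Node terms, the gap he's pointing at looks like this – Math.random() for simulations, the crypto module for anything security-flavored:

```typescript
// The default PRNG is fine for simulations but predictable; anything security-flavored
// should use the crypto module's CSPRNG instead.
import { randomBytes, randomInt } from "crypto";

const guessableToken = Math.random().toString(36).slice(2); // predictable: never use for tokens
const sessionToken = randomBytes(32).toString("hex");       // CSPRNG-backed token
const otpDigit = randomInt(0, 10);                          // CSPRNG-backed integer in [0, 10)

console.log({ guessableToken, sessionToken, otpDigit });
```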

libinjection.client9.com tries to eliminate SQL injection! It’s C, fast, has low false positives, and plugs in anywhere.

Products focus on blocking and offense/intrusion, but leave these areas (actual fixing) uncovered. Think globally, act locally. Even if you’re not a dev, most open source projects don’t have a security anything – join in! Write fuzzers, compile with different flags, etc.

So think big, get involved, bring your friends!

Malware Automation

By Christopher Elisan from RSA, aka @tophs.

Total discovered malware is growing geometrically year over year. There are a lot of “DIY malware creation kits” nowadays; SpyEye, Zeus… These are more oriented around online crime; the kits of yesteryear were more about pissing contests about “mine is better than yours” (VCL, PS-MPC). The variation they can create is larger as well.

Armoring tools exist now – PFE CX, for example, claims to encrypt, compress, etc. your executable – but not all the functions always work, and buyers don’t check. Indetectables.net is online and will do it! It was free but now it’s “hidden.”

Use a tool like ExeBundle to bundle up your malware and then share it out via whatever route (file sharing, google play, whatever). Or hacking and overwriting good wares – even those that bother publishing a hash to verify their software often keep it on the same Web site that is already getting hacked to change the executable, so the hash just gets changed too.

So you make your malware with a kit, put it through a crypter, a realtime packer, an EXE binder, and other armoring tools, then run it through QA against on-premise and cloud AV, and you’re ready to go.

Targeted vs opportunistic attacks… Delivery is a lot easier when you can target.

Anyway, many of those new malware samples are really just the same core malware run through a different variety of armoring tools. They’re counted as different malware but should get grouped into families; he’s working on that at RSA now.

Besides the variation in the malware itself, domains serving malware can rotate in minutes. Since malware can be created so quickly, it effectively defeats AV by generating too many unique signatures. Reversing has to be done, but it takes weeks or months.

Demo: Creating Malware in 2 Minutes!

ZeuS Builder – bang, a bot.exe, one every couple of seconds. Unique but not hash-unique at this point; they look different on disk and in memory. Then he runs Saw Crypter, which in seconds creates multiple samples from one ZeuS sample. Bang, automated generation of billlllllyuns of armored samples.

There are really just a handful of kits behind all the malware; we need new solutions that go after the tools and do signature-less detection.

From Gates to Guardians: Alternate Approaches to Product Security

Jason Chan, Director of Engineering at Netflix, in charge of security for the streaming product. Here are his slides on Slideshare!

Agile, cloud, continuous delivery, DevOps – traditional security doesn’t adapt well to these. We want to move fast and stay safe at Netflix.

The challenges are speed (rapid change) and scale. To address these…

  • Culture – If your culture has moved towards rapid delivery, it’s innovation first. Don’t be “Doctor No” and go against your company culture, you won’t be successful.  Adapt.
  • Visibility – you need to be able to see what’s going on in a big distributed system.
  • Automation – no checklists and spreadsheets

At Netflix we do ~200+ pushes to production a day, 40M subscribers, 1000+ devices supported.

Culture

We have a lot of stuff on our site about this, it’s a big differentiator.  “Freedom and responsibility” is the summary. No buck passing. Responsible disclosure program externally.

We’re moving towards “full stack engineers” who know a bit about appsec, online operations, monitoring and response, and infrastructure/systems/cloud – and who can write some kind of code. The security industry seems to be moving towards superspecialists; we don’t see that as successful.

2 week sprint model, JIRA Scrum workflow (CLDSEC project!). No standups, weekly midsprint meeting. Bullpen shared-space model.

Visibility

They use an internal security dashboard (VPC, crypto, and other services plug in and display their security metrics). Alerts send emails with descriptive subjects, the alert config, and instructions/links on where to check and what to do. Chat integration.

NSA asks, how do you verify software integrity in production?  How do you know you’re not backdoored?

They have their Mimir dashboard that is a CI/CD dashboard, that tracks source code to build to deploy to JIRA ticket. Traceability!

Canary testing, because code reviews don’t catch much. Deploy a new version, test it (regression, perf, security), and see if it’s OK. The Automatic Canary Analyzer produces a confidence level – “99% GO!”

Simian Army does ongoing testing. Go to prod… Then the monkeys test it.

Security Monkey shows config change timestamps of security groups and stuff.

So they have Babou (the ocelot from Archer), which does file integrity monitoring. They use the immutable server pattern so checking is fairly easy, but you can still be running multiple canary versions at the same time, so there’s not one “golden master.” It allows multiple baselines instead.
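Not Babou itself, just the core idea in a sketch: hash the deployed tree and check it against the set of baselines currently allowed in production (one per canary/primary version). The paths and the baseline source are assumptions:

```typescript
// Sketch of file integrity monitoring against multiple allowed baselines.
// With immutable servers the deployed tree should hash to a digest published at
// bake time; anything else is an alert. Paths and baseline transport are assumptions.
import { createHash } from "crypto";
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

function hashTree(root: string): string {
  const h = createHash("sha256");
  const walk = (dir: string) => {
    for (const name of readdirSync(dir).sort()) { // sort for a stable digest
      const p = join(dir, name);
      if (statSync(p).isDirectory()) walk(p);
      else h.update(p).update(readFileSync(p));
    }
  };
  walk(root);
  return h.digest("hex");
}

const allowedBaselines = new Set<string>([/* digests published at bake time */]);
const current = hashTree("/opt/app");
if (!allowedBaselines.has(current)) {
  console.error(`file integrity alert: ${current} is not a known baseline`);
}
```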

Q: How long did it take to make this change and implement? What were the triggers?
A: This push started when he started in 2011; previously IT security handled product security. He hired his first person last year and now they’re up to 10.

Q: What do you do earlier on in the lifecycle in arch and design (threat modeling etc.)?
A: That can’t be automated; the model here is “optionally come engage us” (with more aggressiveness for stuff that’s clearly sensitive/SOXey).

Q: So this finds problems but how do people know what to do in the first place, share mistakes cross teams?
A: As things happen, added libraries with training and documentation. But think of it as “libraries.”

Q: Competing with Amazon while renting their hardware? (Laaaaaame, the CEO has talked about this in multiple venues.)
A: AWS is the only real choice. Our CEOs talked.

Next – Lunch!  No liveblog of lunch, you foodie voyeurs!

Leave a comment

Filed under Conferences, Security

LASCON 2010: Why The Cloud Is More Secure Than Your Existing Systems

Why The Cloud Is More Secure Than Your Existing Systems

Saving the best of LASCON 2010 for last, my final session was the one I gave!  It was on cloud security, and is called “Why The Cloud Is More Secure Than Your Existing Systems.”  A daring title, I know.

You can read the slides (sadly, the animations don’t come through so some bits may not make sense…).  In general my premise is that people who worry about cloud security need to compare it to what they can actually do themselves.  Mocking a cloud provider’s data center for not being ISO 27001 compliant or having a two-hour outage only makes sense if YOUR data center IS compliant and if your IT systems’ uptime is actually higher than that.  Too much of the discussion is about the FUD and not the reality.  Security guys have this picture in their mind of a super whizbang secure system and judge the cloud against that, even though the real security in the actual organization they work at is much less.  I illustrate this with ways in which our cloud systems are beating our IT systems in terms of availability, DR, etc.

The cloud can give small to medium businesses – you know, the guys that form 99% of the business landscape – security features that heretofore were reserved for people with huge money and lots of staff.  Used to be, if you couldn’t pay $100k for Fortify, for instance, you just couldn’t do source code security scanning.  “Proper security” therefore has about a $1M entry fee, which of course means it’s only for billion dollar companies.  But now, given the cloud providers’ features and new security-as-a-service offerings, more vigorous security is within reach of more people.  And that’s great – building on the messages in previous sessions from Matt’s keynote and Homeland Security’s talk, we need pervasive security for ALL, not just for the biggest.

There’s more great stuff in there, so go check it out.

1 Comment

Filed under Cloud, Conferences, Security

LASCON 2010: HTTPS Can Byte Me

HTTPS Can Byte Me

This paper on the security problems of HTTPS was already presented at Black Hat 2010 by Robert Hansen, aka “RSnake”, of SecTheory and Josh Sokol of our own National Instruments.

This was a very technical talk, so I’m not going to try to reproduce it all for you here.  Read the white paper and slides.  But basically there are a lot of things about how the Web works that make HTTPS somewhat defeatable.

First, there are insecure redirects, DNS lookups, etc. before you ever get to a “secure” connection.  But even after that you can do a lot of hacking via traffic characterization – premapping sites, watching “encrypted” traffic and seeing patterns in size, GET vs. POST, etc.  A lot of the discussion was around things like making a user precache content to remove noisiness via a side channel (like a tab; browsers don’t segment tabs).  Anyway, there’s a lot of middle ground between “You can read all the traffic” and “The traffic is totally obscured to you,” and it’s that middle ground that can be profitable to play in.

Leave a comment

Filed under Conferences, Security

LASCON 2010: Tell Me Your IP And I’ll Tell You Who You Are

Tell Me Your IP And I’ll Tell You Who You Are

Noa Bar-Yosef from Imperva talked about using IP addresses to identify attackers – it’s not as old and busted as you may think.  She argues that it is still useful to apply IP intelligence to security problems.

Industrialized hacking is a $1T business, not to mention competitive hacking/insiders, corporate espionage…  There’s bad people trying to get at you.

“Look at the IP address” has gotten to where it’s not considered useful, due to pooling from ISPs, masquerading, hopping… You certainly can’t use them to prove in court who someone is.

But… 65% of home users’ IPs persist for more than a day, and 15% persist for more than a week.  A lot of folks don’t go through aggregators, and not all hopping matters (the new IP is still in the same general location).  So the new “IP intelligence” consists of gathering info, analyzing it, and using it intelligently.

Inherent info an IP gives you – its type of allocation, ownership, and geolocation.  You can apply reputation-based analytics to them usefully.

Geolocation can give context – you can restrict IPs by location, sure, but it can also provide “why are they hitting that” fraud detection.  Are hits coming from unusual locations, simultaneously from different locations, or from places really different from what the account’s information would indicate?  Maybe you can’t block on them – but you can influence fuzzy decisions: flag for analysis, trigger adaptive authentication or reduced functionality.
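A sketch of the "influence fuzzy decisions" idea – flag logins whose implied travel speed between consecutive geolocated IPs is impossible, rather than blocking on location alone. The speed threshold and the geolocation lookup are assumptions:

```typescript
// Sketch of geolocation-based "impossible travel" flagging for logins.
// Assumes you've already resolved each login's IP to lat/lon via some geo service.
interface Login { lat: number; lon: number; timeMs: number; }

// Haversine great-circle distance in kilometers.
function distanceKm(a: Login, b: Login): number {
  const R = 6371, toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat), dLon = toRad(b.lon - a.lon);
  const s = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(s));
}

function suspicious(prev: Login, next: Login): boolean {
  const hours = Math.max((next.timeMs - prev.timeMs) / 3_600_000, 0.01);
  // Faster than a commercial flight: don't block, just flag for analysis or step-up auth.
  return distanceKm(prev, next) / hours > 900;
}
```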

Dynamically allocated addresses aren’t aggregators, and 96% of spam comes from them.

Thwart masquerading – know the relays and blacklist them.  Check Accept-Language headers, response time, path…  Services provide “naughty” lists of bad IPs, and also whitelists of good guys.  Use realtime blacklist feeds (updated hourly).

Geolocation data can be obtained as a service (Quova) or database (Maxmind). Reputation data is somewhat fragmented by “spammer” or whatnot, and is available from various suppliers (who?)

I had to bail at this point unfortunately…  But in general a sound premise, that intel from IPs is still useful and can be used in a general if not specific sense.

Leave a comment

Filed under Conferences, Security

LASCON 2010: Mitigating Business Risks With Application Security

Mitigating Business Risks With Application Security

This talk was by Joe Jarzombek, Department of Homeland Security.  Normally I wouldn’t go to a management-track session called something like this, when I looked at the program this was my third choice out of all three tracks.  But James gave me a heads up that he had talked with Joe at dinner the previous night and he was engaging and knew his stuff, and since there were plenty of other NI’ers there to cover the other sessions, I took a chance, and I wasn’t disappointed!

From a pure “Web guy” standpoint it wasn’t super thrilling, but in my National Instruments hat, where we make hardware and software used to operate large hadron colliders and various other large scale important stuff where you would be very sad if things went awry with it, and by sad I mean “crushed to death,” it was very interesting.

Joe runs the DHS National Cyber Security Division’s new Software Assurance Program.  It’s a government effort to get this damn software secure, because it’s pretty obvious that events on a 9/11 kind of scale are more and more achievable via computer compromise.

They’re attempting to leverage standards and, much like OWASP’s approach with the Web security “Top 10,” they are starting out by pushing on the Top 25 CWE (Common Weakness Enumeration) errors in software.  What about the rest?  Fix those first, then worry about the rest!

Movement towards cloud computing has opened up people’s eyes to trust issues.  The same issues are relevant to every piece of COTS software you get as part of your supply chain!  It requires a profound shift from physical to virtual security.

“We need a rating scheme!”  Like food labels, for software.  They’re thinking about it in conjunction with NIST and OWASP as a way to raise product assurance expectations.

He mentioned that other software areas like embedded and industrial control might have different views on the top 25 and they’re very interested in how to include those.

They’re publishing a bunch of pocket guides to try to make the process accessible.  There’s a focus on supply chain risk management, including services.

Side note – don’t disable compiler warnings!  Even the compiler guys are working with the sec guys.  If you disable compiler warnings you’re on the “willful disregard” side of due diligence.

You need to provide security engineering and risk-based analysis throughout the lifecycle (plan, design, build, deploy) – that generates more resilient software products/systems.

  • Plan – risk assessment
  • Design – security design review
  • Build – app security testing
  • Deploy – SW support, scanning, remediation

They’re trying to incorporate software assurance programs into higher education.

Like Matt, he mentioned the Rugged Software Manifesto.  Hearing this both from “OWASP guy” and “Homeland security guy” convinced me it was something that bore looking into.  I like the focus on “rugged” – it’s more than just being secure, and “security” can seem like an ephemeral concept to untrained developers.  “Rugged” nicely encompasses reliable, secure, resilient…  I like it.

You can do the software assurance self-assessment they provide on their Web site to get started.

It was interesting, at times it seemed like Government Program Bureaucratese but then he’d pull out stuff like the CWE top 25 and the Rugged Software Manifesto – they really seem to be trying to leverage “real” efforts and help use the pull of Homeland Security’s Cyber Security Division to spread them more widely.

Leave a comment

Filed under Conferences, Security

LASCON 2010: Why ha.ckers.org Doesn’t Get Hacked

Why ha.ckers.org Doesn’t Get Hacked

The first LASCON session I went to was Why ha.ckers.org Doesn’t Get Hacked by James Flom (who, with RSnake, runs ha.ckers.org).  By its nature, the site gets something like 500-1000 hack attempts a week, but they’ve kept it secure for six years now.

From the network perspective, they use dual firewalls running OpenBSD’s open source pf, which does Cisco-style traffic inspection.  Systems inside have no egress, and user traffic and admin traffic are segmented onto different firewall sets and switches.

On the systems, they use chroot jails mounted read-only.  Old school!  Jails are virtualization on the cheap, and combined with a read-only filesystem they give you a single out-of-band point of update, and you can do upgrades with minimal downtime.  They monitor them from the parent host.

Rsnake has done a whole separate presentation on how he’s secured his browser – the biggest attack vector is often “compromise the browser of an admin” and not direct attack on the asset.

They went with WordPress for their software – how do you secure that?  Obviously code security is a nightmare there.  So they set up a defense-in-depth scheme: check the source IP, cert, and user/pass auth at the firewall, then at the admin proxy check source IP, path, and htaccess user/pass, and finally do the app auth.

Other stuff they do:

  • Secure logging to OSSEC – pflogd, WAF logs, OS logs, Apache logs, parent logs; it all goes off-host so it’s reasonably tamper-proof
  • On-host WAF – custom, more of a “Web IDS” really, which feeds back “naughty people” to the firewall for blocking
  • For Apache – have your content owned by a different user, in their case there’s not even a user in the jail that can write to the files.
  • Use file ACLs, too.

Use case – they found an Apache flaw, reported it, and as is too often the case, Apache didn’t care.  So they modded their pf to detect the premise of the attack and block it (not just the specific attack).  (Heard of slowloris?)

Their ISP has been an issue – as they’ve moved their ISPs have shut them down out of cluelessness sometimes (Time Warner Business Class FTL).

They are moving to relayd for load balancing and SSL.  The PCI rule about “stay encrypted all the way to the box” is dumb, because it would prevent them from doing useful security inspection at that layer.

A good talk, though sadly a lot of the direct takeaways would mean “go to FreeBSD,” which I would rather not do.  But a lot of the concepts port to other OSes and to pure virtualization/cloud scenarios.  And note how joining network security, OS security, and appsec gets you way more leverage than having “separate layers” where each layer only worries about itself.

And may I just say that I love how Apache can be run “read only” – sadly, most software, even other open source software like Tomcat, can’t be.  It all wants to write into its own config and running directories, and that’s a horrible design practice and a security risk.  If you’re writing software, remember that if it’s compromised and it can write to its own exes/config/etc., you’re owned.  Make your software run on a read-only FS (with read/write in /tmp acceptable for stuff that needs it).  It’s the right thing to do.

Leave a comment

Filed under Conferences, Security