Saturday, June 21, 2008

Clarification for "BK on Safari, hunting Firefox…"

Is Safari 3.1.2 affected by the vulnerability mentioned in "BK on Safari, hunting Firefox"?  The “carpet bomb” behavior COULD have been used in conjunction with Firefox to steal user files.  That specific scenario has been patched.

 

Can an attacker use other, non-obvious ways to abuse the Safari (3.1.2)/Firefox interaction to steal files from the local file system?  Yes, I know of three separate methods to accomplish this (Firefox 3 lessens the risk).  Vendors have been informed and no details will be provided to the public.  Don’t ask for additional details; I won’t give them until all this is straightened out.

 

Whose fault is this?  That’s the whole point of the post.  We have interaction between different software from different vendors.  In isolation, the behaviors that are being abused here are not a high risk.  It’s only when you combine the behaviors that they constitute a risk.  Who should we blame?  I don’t know, and I don’t think anyone really knows… lots of people have their opinions though. :)


Thursday, June 19, 2008

BK on Safari, hunting Firefox...

Apple released a patch for their “Carpet Bomb” issue today.  I’m glad to see that Apple took steps to protect their users.  Kudos to the Apple Security team!   

 

There was a lot of discussion about how this behavior could be used in a “blended” attack with IE, but Safari’s behavior affected more than just IE. In fact, I’ve discovered a way to use Safari’s carpet bomb in conjunction with Firefox to steal user files from the local file system.  Even though Apple has patched the carpet bomb, I’m not going to go into details, as the underlying issue is not patched and the behavior may be replicated via other means (it’s the kinder, gentler BK).  I’m also happy to say that some of the improved security features in Firefox 3 help lower (but do not eliminate) the impact of the issue (Firefox 2 users could still be at risk of arbitrary file pwnage). Mozilla is working on the issue and they’ve got a responsive team, so I’m sure we’ll see a fix soon. 

 

  • UNRELATED NOTE TO MOZILLA:  Firefox 3 shouldn’t FORCE itself to be my default browser after I install it (YES, I unchecked the default browser checkbox during install)


 

Now, these types of vulnerabilities are a perfect example of how all the software and systems we use are part of a giant ecosystem.  Whether we like it or not, the various parts of the ecosystem are intertwined with each other, depending on each other.  When one piece of the ecosystem gets out of line, it can have a dramatic effect on the ecosystem as a whole.  A small vulnerability or even an “annoying” behavior in one piece of software could alter the behavior of a 2nd piece of software, which a 3rd piece of software is depending on for a security decision (the recent pwn2own browser -> java -> flash pwnage is a great example of this).  As the ecosystem grows via plugins, functionality, and new software, so does the attack surface.  Eventually, the interactions between systems and software become a gigantic mesh and the attack surface becomes almost infinite.

 

Now, a lot of people have criticized Apple for their inability to see the carpet bombing behavior as a security issue.  If Apple looked at their product (Safari) in isolation, maybe it wasn’t a high risk security issue to them and it was really more of an annoyance… it’s only when you look at the ecosystem as a whole that we start to see the security implications of this behavior.  Should we have expected Apple to threat model the risks of this behavior against their own products AND third party products as well?  Can we reasonably expect them (or anyone) to have the requisite knowledge to truly understand how certain behavior will affect the ecosystem? 

 

This brings us to a pressing question.  In the "real world", users install products from multiple vendors.  Whose responsibility is it to examine the interaction between all these products?

Saturday, June 14, 2008

3rd Annual Symposium on Information Assurance


I was recently given the honor of delivering a keynote talk for the 3rd Annual Symposium on Information Assurance, which was held in conjunction with the 11th Annual New York State Cyber Security Conference.  It was a great conference and I want to thank Sanjay Goel for inviting me!


 


The conference was VERY academic… which I love.  Academics present with an eye to the future, so I listened as PhD candidates talked about securing nano networks, sensor-based WiFi networks, and a slew of other topics… Academics also seem to have a bold and fearless approach to the topics they present, which I admire…


 


While I enjoyed most of the talks I attended, there was one that perked up the ears of the blackhat in me.  John Crain of ICANN gave a talk on “Securing the Internet Infrastructure: Myths and Truths”.  If you don’t know, ICANN basically owns the root DNS servers that the world relies on every day.  He gave a great explanation of how ICANN goes about creating a heterogeneous ecosystem of DNS servers.  These DNS servers use multiple versions and types of DNS software, multiple versions and types of operating systems, and even go so far as to use various pieces of hardware and processors.  The reasoning behind this logic is… if a vulnerability is discovered in a particular piece of software (or hardware), it would only affect a small part of the entire root DNS ecosystem, whose load could be transferred to another.  It’s an interesting approach indeed.  After the talk, someone asked me why enterprises/corporations don’t adopt a similar strategy.  I thought about it some and I don’t think this approach could work in an enterprise environment… here’s why (other than the obvious costs and ungodly administration requirements):
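The diversity idea above can be sketched in a few lines of code. This is a minimal, hypothetical model (the server names, software choices, and fleet layout are all made up for illustration, not ICANN’s actual deployment): a vulnerability in one DNS implementation only knocks out the servers running that implementation, and the survivors absorb the load.

```python
# Hypothetical sketch of a heterogeneous server fleet: each server runs
# a different (DNS software, OS) combination, so a single-software
# vulnerability has a limited blast radius.
fleet = [
    {"name": "a.root", "dns": "BIND", "os": "Linux"},
    {"name": "b.root", "dns": "NSD",  "os": "FreeBSD"},
    {"name": "c.root", "dns": "BIND", "os": "Solaris"},
    {"name": "d.root", "dns": "NSD",  "os": "Linux"},
]

def survivors(fleet, vulnerable_dns):
    """Return the servers unaffected by a bug in one DNS implementation."""
    return [s for s in fleet if s["dns"] != vulnerable_dns]

# A bug in "BIND" takes out a.root and c.root; the rest keep serving.
up = survivors(fleet, "BIND")
print([s["name"] for s in up])   # → ['b.root', 'd.root']

# A homogeneous fleet running only "BIND" loses everything to the same bug.
homogeneous = [dict(s, dns="BIND") for s in fleet]
print(len(survivors(homogeneous, "BIND")))  # → 0
```

The same sketch also hints at why this works for ICANN: as long as at least one “good” system survives, traffic can be shifted to it.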


 


ICANN’s interest is primarily in preventing hackers from modifying a 45k text file (yes, the root for the Internet is a ~45k text file).  Now, if a hacker happens to break into a root DNS server and modifies the file, ICANN can disable the hacked system, restore the file, and go about their business.  As long as ICANN has a “good” system up somewhere, they can push all their traffic to that system.  Businesses, on the other hand, aren’t primarily interested in preventing the modification of data (not yet at least); they are more interested in preventing the pilfering of data.  So if you own a network of a million different configurations, a vulnerability in any one of those configurations could allow an attacker to steal your data.  Once the hacker has stolen your data, what does it matter that the 999,999 other systems are unhacked?  
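The asymmetry above can be made concrete with some back-of-the-envelope arithmetic. Assume (purely for illustration; the numbers are invented) that each of n distinct configurations independently has probability p of carrying an exploitable vulnerability. For ICANN’s availability problem, the attacker must take down EVERY configuration at once; for the enterprise’s confidentiality problem, the attacker only needs ANY ONE configuration to fall, since each one holds the data.

```python
# Toy probability model: diversity helps availability but hurts
# confidentiality. p and n are illustrative assumptions, not real data.
p = 0.10   # assumed chance any one configuration is vulnerable
n = 10     # number of distinct configurations in the fleet

# Availability (ICANN's problem): service dies only if ALL configs fall.
p_total_outage = p ** n

# Confidentiality (the enterprise's problem): data is stolen if ANY ONE
# config falls, because every config holds a copy of the data.
p_data_stolen = 1 - (1 - p) ** n

print(f"all configs fall at once: {p_total_outage:.10f}")  # ~0.0000000001
print(f"at least one falls:       {p_data_stolen:.3f}")    # ~0.651
```

So the same diversity that makes a total outage astronomically unlikely actually multiplies the number of independent ways an attacker can get at the data.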



This brings up the heart of the argument: should we be worried about our systems being compromised, or should we be worried about our data being stolen?  These are actually two different problems, as I don’t necessarily have to compromise your system to steal your data…