[A couple of weeks ago on the GPWN mailing list open to alums of SANS Pen Test courses, there was a discussion about attacking fat clients, web apps, and mobile applications that use Java serialized objects to communicate with a back-end server. Miika Turkia posted a response to some questions there about an approach to altering the communications using the Burp Suite. I was so impressed with the response, I asked Miika to write a blog article containing tips for such penetration testing. The resulting article is below, containing a lot of ideas and tips for such pen tests that are much more efficient and powerful than the completely manual manipulation of hex that some pen testers and ethical hackers rely on. -Ed.]
By Miika Turkia
In this article, we will look at techniques for penetration testing applications that use serialized objects for communication. This is a common technique in thick clients and Java applets, and it is often seen in mobile applications as well. The same testing method can be applied against both clients and servers. Furthermore, the technique is extensible: it is not limited to serialized objects, but can be applied to testing proprietary protocols as well.
Attacking Java serialized communication with the Burp extender module was introduced at Black Hat Europe 2010 by Manish S. Saindane. His module allowed modifying the serialized object from an interactive IRB shell. This technique is much easier than the hex editor method commonly used before that time, but it is still quite slow and cumbersome.
We need a module that allows easy viewing and editing of the traffic and enables the use of the automated tools that PortSwigger's Burp Suite offers. This can be achieved by using the XStream library, which de-serializes the traffic to XML for viewing and modification and then re-serializes it afterwards. Of course, this method can be applied to any framework or custom code, but PortSwigger's excellent Burp Suite is well suited to web application testing and, because of its extender API, it works very well for testing serialized communication sent over HTTP/HTTPS connections.
Essentially, we grab the traffic before it is given to Burp's internal tools and modify it to suit our purposes as penetration testers. Our modifications could involve Base64 decoding, Java de-serialization, or translating some proprietary binary protocol to a human-readable format. Then, the message is passed to Burp Proxy, where it can be modified and sent to the other tools of the suite. Once we are done with the clear-text data and are ready to send it to the server or fat client, we re-encode it to the appropriate format. In this case, we re-serialize it to a Java serialized object and send it on its way. This approach also works well with Burp's Intruder and Scanner, as long as the XML structure is kept syntactically correct for our XStream-based re-serialization.
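The round-trip described above can be sketched in plain Java. This is not the actual extender code: the real module uses XStream's toXML/fromXML for the XML step, while this self-contained sketch substitutes the JDK's own XMLEncoder/XMLDecoder to illustrate the same idea, and the Order class is a made-up stand-in for an application's internal message object:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.*;

public class TamperDemo {
    // Hypothetical message class standing in for the application's internal objects.
    public static class Order implements Serializable {
        private String user;
        private int amount;
        public String getUser() { return user; }
        public void setUser(String u) { user = u; }
        public int getAmount() { return amount; }
        public void setAmount(int a) { amount = a; }
    }

    // Java serialized bytes -> object (what the extender does on intercept).
    static Object fromJava(byte[] raw) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }

    // Object -> Java serialized bytes (re-serialization before sending along).
    static byte[] toJava(Object obj) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        }
        return buf.toByteArray();
    }

    // Object -> editable XML (the real module uses XStream's toXML here).
    static String toXml(Object obj) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(buf)) {
            enc.writeObject(obj);
        }
        return buf.toString();
    }

    // Edited XML -> object (the real module uses XStream's fromXML here).
    static Object fromXml(String xml) {
        try (XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(xml.getBytes()))) {
            return dec.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Order order = new Order();
        order.setUser("alice");
        order.setAmount(10);

        byte[] wire = toJava(order);                        // what we intercept
        String xml = toXml(fromJava(wire));                 // human-readable view
        String tampered = xml.replace("alice", "mallory");  // the edit made in Burp
        byte[] resent = toJava(fromXml(tampered));          // re-serialize and send
        System.out.println("user=" + ((Order) fromJava(resent)).getUser());
    }
}
```

Running this prints the tampered value, demonstrating that the edit made in the XML view survives re-serialization into the binary stream, with no length fields or type tags to fix by hand.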
The following diagram shows the workflow from client to server and back. As shown, the testing can be done interactively on messages between the client and server, or by injecting new messages into the flow for fuzzing or manual testing. The bold and italic items below indicate where the testing activity itself occurs:
Fat client <--> Xstream de-serialization <--> human/automatic tampering <--> Xstream re-serialization <--> server
\----------- Intruder/Repeater/Scanner ----------/
Here are screenshots (from a simple test application) showing the view the penetration tester would see. First, there is the traditional method of examining and modifying the Java serialized data in hex. This often leads to incorrect length fields or other problems. As one can imagine, it is far from pen-tester friendly, usually leading to quite shallow test coverage.
Here is the interface after applying a de-serializing Burp extender module and configuring automated fuzzing in Burp's Intruder interface. Fuzzing is performed with text-based injections on the string field and numerical payloads on the integer field.
A requirement for the de-serialization to work properly is access to the application's JAR files. This is because we need to know the structure of the internal objects/classes that are sent over the network in order to de-serialize them. When using Java Web Start, obtaining these Java classes is easy, as they are transferred over the network the same way as all other data. Newly encountered Java classes can be loaded into our extender module on the fly, easing the testing process.
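One way to wire those application classes into the de-serialization step is an ObjectInputStream subclass that resolves classes through a class loader of our choosing, such as a URLClassLoader pointed at the captured JAR files. A minimal sketch (the JAR path in the comment is hypothetical; the demo falls back to the system loader and a java.util.ArrayList so it runs stand-alone):

```java
import java.io.*;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;

public class JarAwareDeserializer {
    // ObjectInputStream that resolves classes through a supplied loader,
    // e.g. a URLClassLoader pointed at the application's JAR files.
    static class LoaderObjectInputStream extends ObjectInputStream {
        private final ClassLoader loader;
        LoaderObjectInputStream(InputStream in, ClassLoader loader) throws IOException {
            super(in);
            this.loader = loader;
        }
        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            return Class.forName(desc.getName(), false, loader);
        }
    }

    public static void main(String[] args) throws Exception {
        // In a real test, point the loader at the downloaded application JARs, e.g.:
        // ClassLoader loader = URLClassLoader.newInstance(
        //         new URL[]{ new File("app.jar").toURI().toURL() });
        ClassLoader loader = JarAwareDeserializer.class.getClassLoader();

        // Simulate an intercepted serialized message using a stock JDK class.
        ArrayList<String> msg = new ArrayList<>();
        msg.add("hello");
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(msg);
        }

        try (LoaderObjectInputStream in = new LoaderObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()), loader)) {
            System.out.println("decoded: " + in.readObject());
        }
    }
}
```

Because the loader is just a constructor argument, newly obtained JARs can be added by building a fresh URLClassLoader at runtime, which is what makes the on-the-fly class loading mentioned above possible.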
MITMing the traffic
Some applications are easy enough, as they trust either the web browser's or Java's proxy configuration and either don't use SSL or don't verify the server certificate. In some cases, it has been possible to trick the application into using HTTP instead of HTTPS (and proper server certificate validation) by modifying the application parameters in the JNLP file. This way there is no server certificate to verify, and we are all set (I suspect this is common behaviour with the standard APIs, even when the developers try to use SSL properly).
When the application enforces the use of SSL and validates the server certificate appropriately, we can usually add our own CA to the Windows, Java, or smartphone certificate store and are good to go. Another obvious method is to ask the target system's personnel for a valid certificate signed by a trusted CA. If they want their implementation tested in depth, they are usually willing to support the testing this way.
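For Java clients specifically, adding our own CA usually means importing it into the JRE's cacerts trust store with keytool. A sketch, assuming the interception CA has been exported to burp-ca.pem; the exact cacerts path varies between JRE versions and installations, and "changeit" is only the default store password:

```shell
# Import a testing CA certificate into the Java trust store.
# On older JREs the path is typically $JAVA_HOME/jre/lib/security/cacerts.
keytool -importcert -trustcacerts -alias pentest-ca \
    -file burp-ca.pem \
    -keystore "$JAVA_HOME/lib/security/cacerts" \
    -storepass changeit
```

Remember to remove the alias again after the engagement, since a trusted interception CA left behind weakens the whole machine's SSL validation.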
A last resort is to tamper with the application and modify it to allow what we need. With Java Web Start applications, this is a bit too easy, as we can drop classes from the signed applet and Java is willing enough to fetch replacements from the Internet (forgetting to validate the substitute classes). Otherwise, hooking function calls or binary patching the application is possible (though so far the standard certificate-based approaches have been enough in our experience).
Now that we've handled fooling the client into trusting us, we still have to cover how to actually get the traffic to our proxy tool in cases where the fat client does not honor the proxy configuration. With mobile apps, we tend to set up our laptop as an access point and use iptables to redirect the traffic to Burp (in invisible proxy mode). With desktop computers, a laptop with two NICs, a similar iptables configuration, and a DHCP server has usually been sufficient.
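The iptables part of that setup can be as simple as the following sketch, assuming the client-facing interface is eth1 and Burp's invisible proxy listener is on port 8080 (interface names and ports will differ per setup, and "support invisible proxying" must be enabled on the Burp listener):

```shell
# Let the laptop forward packets between the client side and the Internet side.
sysctl -w net.ipv4.ip_forward=1

# Redirect HTTP and HTTPS arriving from the client network
# to Burp's invisible proxy listener on local port 8080.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-ports 8080
```

In invisible proxy mode, Burp works out the real destination from the Host header (or the SNI/certificate for SSL), so the client never needs to know a proxy is in the path.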
The toughest target we've faced so far has been a payment terminal hooked to a cash register with a cross-over cable, with static IPs configured on both ends, a proprietary communications protocol, and total silence towards the network apart from a self-initiated SSL connection. This situation required configuring my laptop as a bridge and doing some heavy iptables kung-fu to MITM the traffic properly, so that all traffic seemed legitimate on both ends. The protocol needed custom decoding for viewing, and recalculation of checksums and length fields during re-encoding after modifications. The testing also required a lot of manual labor, visually inspecting the payment terminal's behavior after each attack. But the overall approach described above was the same.
The juicy stuff
In some cases, even being able to perform the MITM attack has been a small victory. However, this is only a prerequisite for pen testing. With the infrastructure in place and the extender module more or less working, I have discovered quite a few interesting vulnerabilities, which you should also search for when conducting such tests, including:
- Very often, the client does not validate server certificates
- Client certificates are usually not used
- Some clients can be downgraded to unencrypted traffic by simply modifying target URLs in the JNLP configuration
- Broken authentication - the user ID is sent along with requests and can naturally be modified in transit
- Insecure Direct Object References (IDOR), e.g., accessing other people's data
- Missing access control allows unauthorized modification of data (just change the object reference in an update request)
- GUI restrictions are trusted - crashes and stack traces are all over when modifying "unmodifiable" fields
- The information sent back to the client can contain a lot more than the user is actually shown. Even the whole dataset may be transferred, with the fat client displaying only the "user accessible" part
- Path traversal when retrieving PDF files once even gave me a web server's SSL private key, along with a bunch of configuration files containing passwords
- SQL injection - seeing a raw SQL query in serialized communication is practically a guarantee of a vulnerability and should be vigorously exploited if allowed by the rules of engagement and scope
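To make the list above concrete, here is a hypothetical de-serialized message as it might appear in Burp after the XML conversion step (the class and field names are invented for illustration). Changing the customer ID exercises the IDOR and access control issues, while the file field is where a path traversal payload would go:

```xml
<com.example.GetDocumentRequest>
  <!-- change to another customer's ID to test for IDOR / missing access control -->
  <customerId>100042</customerId>
  <!-- replace with e.g. ../../../../etc/passwd to test for path traversal -->
  <file>invoices/2012-03.pdf</file>
</com.example.GetDocumentRequest>
```

Because the XML is structured, Intruder payload positions can be placed on individual element values, which is what makes systematic testing of each field practical.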
Here are some tips and tricks for getting the most out of the Burp extender module:
- Use the registerMenuItem method of the Burp Extender API to implement loading newly obtained Java classes on the fly
- Similarly, you can reload the extender module itself, at least if it is implemented in Python or Ruby
- Inject an HTTP header into the message when de-serializing, to indicate that it must be re-serialized before being sent along
- Use the action ACTION_DO_INTERCEPT_AND_REHOOK when processing a response that is to be viewed in the proxy and re-serialized only after that
Testing server implementations that use fat clients can be even more rewarding than traditional web application pen testing. A notable difference from traditional web applications is that trust in client-side restrictions and validation is usually taken one step further: the client is trusted to provide valid data and to hide the data the user should not be able to access.
Common vulnerabilities discovered while testing fat client applications that use serialized data communication are surprisingly well covered by the OWASP Top Ten project. A lot of the vulnerabilities fall into the business logic category, but server compromise is also what we aim for, usually with success.