HBGary Rootkits: Catch Me If You Can!

Thursday, March 24, 2011

Pascal Longpre


Analyzing documents leaked from the Anonymous attack on HBGary sheds light on interesting technical aspects of rootkit development.

Numerous rootkit technologies are discussed; some may never have seen the light of day, but all could well have been developed by knowledgeable attackers.

They are designed to evade or bypass mainstream detection software and in some cases, circumvent protections thought to be unbreakable by design. Here is a list of the different rootkit technologies discussed and how we think they can be mitigated.

Hide in Plain Sight Rootkit

This basic rootkit, detailed in a February 2010 email, installs a driver on the target computer without hiding its files or registry entries, the reasoning being that hiding might raise suspicion from rootkit detection tools and personal firewalls.

By using a driver file named like a normal Windows file and legitimate API calls, this rootkit can effectively conceal its presence on the system and be very hard for the untrained eye to locate.

Mitigation: Since all the rootkit files are visible, it is possible to run the list of installed files against different data sources to confirm they are legitimate. Some of these sources are:

  • Known Good Hash Databases: The NIST NSRL and Bit9 File Advisor online lookup service are good sources to help remove the noise created by known good entries and to help focus on unknown entries.
  • Environment Correlation: By looking for the prevalence of a file within a given environment, unique files like those used in targeted attacks and by polymorphic malware can be quickly identified, especially in a homogeneous corporate environment.
  • Code Signing Validation: Most major vendors "stamp" their release files with a code signing certificate so it is possible to validate the integrity of the files and their creator. Although this validation technique has its weaknesses, as shown by the Stuxnet worm that used a stolen certificate, it can be of great help when combined with other validation techniques.
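The known-good-hash triage described above can be sketched as follows. The `known_good` set here is a stand-in for an NSRL- or Bit9-style lookup; a real deployment would query those services rather than a local set.

```python
import hashlib


def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()


def triage(paths, known_good):
    """Return the files whose hashes are NOT in the known-good set.

    `known_good` stands in for an NSRL/Bit9-style database; everything
    it filters out is noise, and what remains deserves a closer look.
    """
    return [p for p in paths if sha256_of(p) not in known_good]
```

The same loop extends naturally to environment correlation: count each hash across all hosts and sort ascending, so one-off files float to the top.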

The "Catch Me if You Can" Rootkit

Detailed extensively in the press under the names "12 Monkeys" and "Magenta", this rootkit's main characteristic is that it constantly moves in memory in order to make analysis harder for an investigator.

Also, it doesn't communicate through standard command and control (C&C) channels. Instead, it uses blended HTTP traffic disguised as ad clicks in order to exfiltrate information and bypass network-based detection systems.

The rootkit gets its commands by scanning the system's memory, looking for byte patterns sent by the controller. These commands can be delivered in multiple ways that are not specifically described but can be assumed to be spam, web pages or network packets sent directly to the system.

Even when blocked by a firewall, they usually end up somewhere in the system's memory. It is not clear if development of this rootkit was completed or not.
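The leaked material doesn't give the actual byte patterns, so the marker below is purely hypothetical. A minimal Linux sketch of the underlying idea, scanning a process's readable memory for a magic pattern (demonstrated here against our own process via /proc):

```python
import re

# Hypothetical command marker; the real patterns are not public.
MAGIC = b"\xde\xad\xbe\xefCMD:"


def scan_self_for(pattern):
    """Scan this process's readable memory for `pattern` (Linux only)."""
    hits = []
    with open("/proc/self/maps") as maps, \
         open("/proc/self/mem", "rb", 0) as mem:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) r", line)
            if not m:
                continue  # skip unreadable regions
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            try:
                mem.seek(start)
                chunk = mem.read(end - start)
            except (OSError, ValueError, OverflowError):
                continue  # e.g. [vvar]/[vsyscall] regions we cannot read
            off = chunk.find(pattern)
            if off != -1:
                hits.append(start + off)
    return hits
```

The same scan cuts both ways: an analysis tool can sweep process memory for known controller markers just as the rootkit sweeps it for commands.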

Mitigation: Although moving the rootkit payload in memory might increase analysis time in some cases, automated tools have several ways to mitigate this.

First, a payload moving in memory should in itself be a good indicator of compromise and should prompt quarantine of the system.

Second, an analysis tool could automatically capture the suspicious payload as it is detected and store it so it can be analyzed at a later time.

Third, by monitoring the memory allocations made within the kernel and within process memory, it is possible to trace back the allocator of the memory block containing the moving payload. A kernel module allocating memory within a user-mode process is rather unusual and should be flagged as such.

In real life, this rootkit would also need to receive external payloads from the C&C. Simply moving in memory is not the ultimate goal; like most malware, this one would need more code running in order to capture keystrokes, take screenshots or steal documents, increasing the chances of detection.

Project B

This project packs two interesting components. The first is a deployment tool using the target machine's Firewire port, used to inject code into the computer's memory while it is running and locked.

Another component of this project is the development of a loading mechanism that uses the MBR (Master Boot Record) or other pre-load functions like the BIOS. Using such a mechanism allows the rootkit to be loaded at boot time, before the OS, and doesn't require a permanent file on disk. This technique is already used by mainstream rootkits like Mebroot or TDL4.

Mitigation: Creating an MBR signature and comparing it to a baseline MBR can help detect some of these infections. As for the Firewire port, disabling or removing it is probably the best way to protect the computer against this attack.
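MBR baselining can be sketched like this: hash the first sector at deployment time, then re-check it periodically. Reading the raw device requires administrative rights, and the device path below is an assumption that varies per system.

```python
import hashlib

# One sector: 440 bytes of bootstrap code, partition table, 0x55AA signature.
MBR_SIZE = 512


def mbr_fingerprint(device_path):
    """SHA-256 of the first sector of `device_path` (e.g. /dev/sda, needs root)."""
    with open(device_path, "rb") as dev:
        sector = dev.read(MBR_SIZE)
    return hashlib.sha256(sector).hexdigest()


def mbr_changed(device_path, baseline_hex):
    """True if the MBR no longer matches the baseline taken at deployment."""
    return mbr_fingerprint(device_path) != baseline_hex
```

Caveat from the comments below: a live check can be defeated by a bootkit like TDL4 that filters sector reads, so the comparison is most trustworthy when done from trusted offline boot media.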

The Air Gap Rootkit

This is certainly the most impressive rootkit description from HBGary. It is not clear if this was carried out, but the mere thought that this could even exist should keep anyone responsible for securing highly sensitive networks from getting a wink of sleep.

In order to protect their network from Internet attacks, some organizations simply group their most strategic systems in a separate network and disconnect it from their Internet connected network, creating an "air gap" between them.

Each employee uses two computers (one connected to each network) and they manually transfer data from one system to the other by either retyping text, scanning images or in some cases, transferring data through USB sticks.

Although this protection can seem like a good idea at first, it is worth noting that attackers have quickly adapted. More than 25% of malware is now created to propagate through USB. The Stuxnet worm is the best example of such malware.

The neat thing about the HBGary "air gap" rootkit is that it is designed to bridge the two networks without going through external media. Installed on two computers in close proximity, one connected to each network, it has one system encode data as high-frequency audio that is captured by the other computer's microphone. This would allow attackers to exfiltrate data from the secure network to the Internet.
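The described acoustic channel amounts to audio frequency-shift keying: one carrier frequency per bit value. A toy sketch with assumed near-ultrasonic carriers (real speakers and microphones would constrain the usable band and rate); the decoder uses a Goertzel filter to measure energy at each carrier.

```python
import math

SAMPLE_RATE = 44100
BIT_SAMPLES = 1024            # samples per bit => ~43 bits/sec
F0, F1 = 17000.0, 19000.0     # assumed carriers near the top of hearing


def encode(bits):
    """FSK-modulate a bit list into raw audio samples."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        for n in range(BIT_SAMPLES):
            samples.append(math.sin(2 * math.pi * f * n / SAMPLE_RATE))
    return samples


def goertzel(chunk, freq):
    """Signal power of `chunk` at a single frequency (Goertzel algorithm)."""
    k = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s1 = s2 = 0.0
    for x in chunk:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2


def decode(samples):
    """Recover bits by comparing energy at the two carriers per bit slot."""
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[i:i + BIT_SAMPLES]
        bits.append(1 if goertzel(chunk, F1) > goertzel(chunk, F0) else 0)
    return bits
```

Even at this toy bitrate, a patient attacker could trickle out credentials or key material over days, which is what makes the idea so unsettling.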

Mitigation: Again, the best way to capture this kind of activity is through live memory analysis and behavior monitoring in order to find integrity violations of trusted processes or the Windows kernel. No matter how advanced the ex-filtration technique is, in the end, you have malware code running in memory and creating anomalies or behaviors that can be identified.

Conclusion

By combining these different approaches with the zero-day exploits detailed in other emails, we can see that skilled attackers have the means to bypass most, if not all, protection technologies currently available, such as antivirus, personal firewalls and network IDS.

Malware like this also renders disk encryption, DLP and SIEM solutions mostly irrelevant, since it inherits the user's rights and credentials and blends in with the user's activity.

Organizations must look at alternative detection technologies focused on large scale live memory and behavior analysis and develop the knowledge required to detect and block these advanced targeted threats.

Cross-posted from siliciumsecurity.com

J. Oquendo Re: "Hide in plain sight" discourse...

Mitigation: Since all the rootkit files are visible, it is possible to run the list of installed files against different data sources to confirm they are legitimate.

I am torn on this statement, as well as the "Code Signing Validation" theory. We have to remember that Stuxnet had a valid signature from a compromised source. Nevertheless, it was valid.

Now, depending on anyone other than your own ingenuity and/or team is also sketchy. NIST, Bit9 and others are no different from any other company. With the already established compromises of Google, RSA and others, it makes little sense to rely on anyone other than yourself.

Take a look at what was just exposed concerning Comodo. Their network was compromised, fake certs were created, and who knows what else. Relying on others' data is useless, and I'll explain why I state this: from the attacker's perspective, nothing stops me from "upping the ante" by BGP hijacking an AS in order to redirect you to wherever I want to send you. I can netsed [1] checksums to make you feel all safe and warm.

I believe the proper fix for this would be an internal hash prior to deploying the machine in question. Tripwire works wonders when configured and deployed properly. Properly in the sense that data is sent off the host being checksummed, and that host is as rock solid secure as can be. For my own peace of mind, I created a checksumming script I can call on the fly to do this. It was a proof of concept that nevertheless works [2]. It takes three distinct checksums, so any "hash collisioning" arguments would be moot points.
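The triple-checksum idea can be sketched like this (a stand-in illustration, not the actual saki script):

```python
import hashlib


def triple_checksum(path, chunk=1 << 20):
    """Compute three distinct digests of a file in a single read pass.

    A forged file would need to collide MD5, SHA-1 and SHA-256
    simultaneously, which makes single-hash collision attacks moot.
    """
    digests = [hashlib.md5(), hashlib.sha1(), hashlib.sha256()]
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            for d in digests:
                d.update(block)
    return tuple(d.hexdigest() for d in digests)
```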

So... as for memory analysis, this is a very complex subject, and I can't think of a company that would be willing to hire staff to monitor what is going on in their machines' memory. As a product, I tend to err on the side of calling it nonsense if a company thinks they can simply watch memory anomalies and report them.

Think about the Voltron [3] concept for a moment here. Five animals coming together to form a super attack bot. In the electronic/machine/information/cyberwhatever field, there is nothing to stop an attacker from creating snippets of code that, when combined, are the actual application. I put that to the test a while back via a stupid proof of concept I called dsphunxion. dsphunxion is a "backdoor" keeper. The application is nothing more than a shell script with base64 encoding made to look like an SSL certificate signature. To the naked eye, even the most experienced security pros will overlook its simplicity.

When run, dsphunxion takes a look at its environment based on variables, time, etc., and says: "alright, let me make sure I live and prosper on this system without raising suspicion." It then re-copies itself to a randomly chosen file that already exists on that system. Nothing is really introduced to the system itself. I use the files on the system against the system itself. Total time to create... 2.5 hours or so, and I wasn't even trying.


[1] http://www.freshports.org/net/netsed/
[2] http://www.infiltrated.net/scripts/saki.html
[3] http://en.wikipedia.org/wiki/Voltron
Roger Lamote Code signing validation is part of a process to identify unknown files on your network. If you can easily tell that those files' certs are revoked, they are most suspicious.
Multiple hashing methods don't help if your system is compromised with malware that returns fake information when reading sectors, like TDL4.
Memory analysis is done in a couple of products. I think it's important to know what the file is doing, the actions it performs, so even if they steal a certificate, we can know that it injects its code into explorer.exe.
Pascal Longpre You are absolutely right, Roger. We've implemented memory analysis in our product ECAT to track these behaviors regardless of the certificate the file exposes. Remote memory allocations, code injection from the kernel to user mode, remote thread creation, etc. are all uncommon behaviors that must be accounted for, regardless of the reputation a file has.