Infosec Island Latest Articles

Security and the Internet of Things
Thu, 24 Jul 2014 00:27:00 -0500

Cyber-attacks continue to become more innovative and sophisticated. Unfortunately, while organizations are developing new security mechanisms, cybercriminals are cultivating new techniques to circumvent them. As cyber-attacks have grown in sophistication, so has our dependence on the Internet and technology.

The Internet of Things

The day when practically every electronic device will be connected to the Internet is not far away. According to Cisco, there are approximately 15 billion connected devices worldwide, and Dell forecasts that we may see upwards of 70 billion connected devices by 2020 -- meaning 10 devices per person, talking to each other and sending out messages.

The Internet of Things (IoT) phenomenon holds the potential to empower and advance nearly every individual and business. In today’s global society, we’re always on and always receiving data from a variety of sources. This is the heart of the IoT: everything is connected and speaking to everything else. Warming our cars on a cold morning, regulating the thermostats in our homes and determining what your spouse took from the refrigerator during a midnight snack will all be carried out from mobile devices.

Moving forward, IoT devices will help businesses track remote assets and integrate them into new and existing processes. They will also provide real-time information on asset status, location and functionality that will improve asset utilization and productivity and aid decision making. But the security threats of the IoT are broad and potentially devastating, and organizations must ensure that technology for both consumers and companies adheres to high standards of safety and security.

The IoT at Home…and at Work

With the growth of the IoT, we’re seeing the creation of tremendous opportunities for enterprises to develop new services and products that will offer increased convenience and satisfaction to their consumers. The rise of objects that connect themselves to the Internet is releasing an outpouring of new opportunities for data gathering, predictive analytics and IT automation.

The rapid uptake of Bring Your Own Device (BYOD) is increasing an already high demand for mobile applications for both work and home. To meet this demand, developers working under intense pressure, and on paper-thin profit margins, are sacrificing security and thorough testing in favor of speed of delivery and lowest cost. The result is poor-quality products that can be more easily hijacked by criminals or hacktivists.

The information that individuals store on mobile devices already makes them attractive targets for hackers, both “for fun” hackers and criminals. At the same time, the number of apps people download to their personal and work devices will continue to grow. But do those apps access more information than necessary, and do they perform as expected? In the worst case, apps can be infected with malware that steals the user’s information – tens of thousands of smartphones are thought to be infected with one particular type of malware alone. This will only worsen as hackers and malware providers switch their attention to the hyper-connected landscape of mobile devices.

With Potential Comes Risk

As I’ve said, the IoT has great potential for the consumer as well as for businesses. While the IoT is still in its infancy, we have a chance to build in new approaches to security if we start preparing now. Security teams should take the initiative to research security best practices to secure these emerging devices, and be prepared to update their security policies as even more interconnected devices make their way onto enterprise networks.

Enterprises with the appropriate expertise, leadership, policy and strategy in place will be agile enough to respond to the inevitable security lapses. Those who do not closely monitor the growth of the IoT may find themselves on the outside looking in.

About the Author: Steve Durbin is managing director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cyber security, BYOD, the cloud, and social media across both the corporate and personal environments. Previously, he was senior vice president at Gartner. 

Copyright 2010 Respective Author at Infosec Island
EBS Encryption: Enhancing the Amazon Web Services Offering with Key Management
Wed, 23 Jul 2014 16:22:31 -0500

Amazon Web Services is making great strides in securing its customers' stored data, their “data at rest.” We have seen two recent announcements:

  • Amazon announced S3 Server-Side Encryption with Customer-provided keys (which goes by the not-quite catchy acronym SSE-C). Previously, a user could tell S3 to encrypt data as soon as the data is stored, but Amazon managed the encryption keys and they were never exposed to the customer. With this new feature, users can specify those keys and Amazon will use them when “touching” the data, but will not keep the keys.
  • In another blog post, Amazon announced that Elastic Block Store (EBS) volumes can now be encrypted, but cryptographic keys are managed by Amazon.
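
To make the SSE-C mechanics concrete, here is a minimal sketch of how a client prepares the per-request headers. The header names are those of the S3 REST API; the surrounding request logic and the customer's key storage are out of scope, so treat this as an illustration rather than production code:

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build the SSE-C headers the S3 REST API expects on each request.

    S3 uses the supplied key to encrypt or decrypt the object server-side,
    checks it against the MD5 digest, then discards it -- the key is never
    persisted on Amazon's side.
    """
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) AES key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(key).decode("ascii"),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode("ascii"),
    }

# The customer generates and keeps the key; only the request carries it.
customer_key = os.urandom(32)
headers = sse_c_headers(customer_key)
```

Because the key travels with each request, losing the key means losing the data; SDKs such as boto3 accept the raw key and compute the MD5 header for you.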

Both of these announcements make it easier to encrypt data at rest, improving the security of cloud applications. But something is clearly missing from the second announcement, and the press was quick to point it out: in two words – key management.

Many, perhaps most, AWS customers use EBS volumes to store sensitive data: databases, image files, what have you. With Amazon's new solution, customers will be able to encrypt this data. But the encryption keys will be persisted somewhere on Amazon's infrastructure. This creates a couple of irresistible targets for hackers.

  • One point worthy of attention is a bit of a doomsday scenario. The AWS key storage is a “single point of secrets” holding keys for all customers, for the duration of their disk volumes' lifetime. If someone, a rogue AWS insider or a hacker, could get access to the key storage, the results would be catastrophic. They would be able to decrypt any of the encrypted EBS volumes, of any Amazon customer!
  • Another, less sweeping but perhaps more likely scenario, is if the attacker obtains credentials to a customer account, and is able to snapshot an EBS disk and attach it to a new EC2 instance. Despite the EBS disk being encrypted, once attached to an EC2 instance it can be copied out in the plain: the instance will be automatically provisioned by AWS with the decryption keys. This scenario can be surprising, since many customers believe that encryption should protect them from such a simple attack.

In contrast, when customer-side key management is supported, the customer can decide how to protect their data encryption keys based on customer-specific risk assessment. If needed, different keys can be protected differently. For example, some highly sensitive keys may be kept off-line when not in use.

Customers can decide whether to use hardware-based key management solutions (Hardware Security Modules, or HSMs) or whether they prefer pure software approaches that rely on cryptographic techniques to secure the keys. Some interesting new mathematical approaches, such as homomorphic key management and split-key encryption, are also becoming available. Customers can establish access control policies that fit the way they do business. Keys can be farmed out to specific users, user groups or indeed to customers of Amazon's customers.
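
The split-key idea can be illustrated with the simplest possible scheme, a toy two-share XOR split (not any vendor's actual product): the key is divided so that no single share reveals anything, and both shares are needed to reconstruct it.

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; each share alone is random noise."""
    share1 = os.urandom(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def join_key(share1: bytes, share2: bytes) -> bytes:
    """Recombine the shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

master = os.urandom(32)     # data-encryption key to protect
s1, s2 = split_key(master)  # e.g. one share with the customer, one in the cloud
assert join_key(s1, s2) == master
```

With this kind of scheme, a cloud provider holding one share cannot decrypt anything on its own, which is exactly the property missing when the provider holds the whole key.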

Amazon Web Services has clearly gotten the message that customers require more control of their encryption keys, and has added this capability to the S3 infrastructure. In fact, the S3 solution is extremely easy to use and can be integrated with key management solutions in a matter of minutes. So we can reasonably hope AWS will move in the same direction with EBS.

Full disk encryption is becoming more and more popular in cloud settings, and some of the smaller clouds, like Google Compute Engine, have supported it for a while. Amazon is a bit late to this game and should lead the way in enabling customer control of encryption keys. Some customers will never move sensitive data to the cloud. At the other extreme are cloud customers who would prefer to leave everything to the cloud provider, even at the cost of reduced security and loss of control. But surveys show that the majority of security-aware customers are somewhere in the middle: they would like the benefits of a well-managed cloud infrastructure, along with the flexibility of managing their own data security.

White House Website Includes Unique Non-Cookie Tracker, Conflicts With Privacy Policy
Wed, 23 Jul 2014 13:59:12 -0500

Yesterday, ProPublica reported on new research by a team at KU Leuven and Princeton on canvas fingerprinting. One of the most intrusive users of the technology is a company called AddThis, which is employing it to “shadow[] visitors to thousands of top websites.” Canvas fingerprinting allows sites to get even more identifying information than we had previously warned about with our Panopticlick fingerprinting experiment.

Canvas fingerprinting exploits the fact that different browsers have slightly different algorithms, parameters, and hardware for turning text into pictures on your screen (or more specifically, into an HTML5 canvas object that the tracker can read). According to the research by Gunes Acar et al., AddThis draws a hidden image containing the unusual phrase “Cwm fjordbank glyphs vext quiz” and observes the way the pixels turn out differently on different systems.
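
The reading step is what turns rendering quirks into an identifier: the script hashes the pixel buffer it reads back from the canvas. Here is a sketch of that hashing step, with invented byte strings standing in for real canvas readbacks:

```python
import hashlib

def fingerprint(pixel_bytes: bytes) -> str:
    """Condense a canvas readback into a compact identifier.

    The drawing instructions are identical everywhere, but font rendering,
    anti-aliasing and GPU differences perturb the pixels, so the hash
    becomes a stable per-machine ID.
    """
    return hashlib.sha256(pixel_bytes).hexdigest()[:16]

# Two machines rasterizing "Cwm fjordbank glyphs vext quiz" -- the byte
# strings below are hypothetical RGBA readbacks, not real canvas data.
machine_a = bytes([16, 16, 17, 255] * 4)
machine_b = bytes([16, 17, 17, 255] * 4)  # slightly different anti-aliasing

assert fingerprint(machine_a) != fingerprint(machine_b)  # distinct IDs
assert fingerprint(machine_a) == fingerprint(machine_a)  # stable per machine
```

Since the identifier is recomputed from the machine itself on every visit, clearing cookies does nothing to it, which is exactly the problem discussed below.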

While YouPorn quickly removed AddThis after the report was published, the White House website still contains AddThis code. Some White House pages visibly include the AddThis button, such as the White House Blog, along with a link to the AddThis privacy policy.

Other pages, like the White House’s own Privacy Policy, load JavaScript from AddThis but do not otherwise indicate that AddThis is present. To pick the most ironic example, if you go to the page for the White House policy on third-party cookies, it loads “addthis_widget.js.” This script, in turn, references “core143.js,” which has a “canvas” function and the tell-tale “Cwm fjordbank glyphs vext quiz” phrase.

The White House cookie policy notes that, “as of April 18, 2014, content or functionality from the following third parties may be present on some pages,” listing AddThis.  While it does not identify which pages, we have yet to find one without AddThis, whether open or hidden.

On the same page that is loading the AddThis scripts, the White House third-party cookie policy makes a promise: “We do not knowingly use third-party tools that place a multi-session cookie prior to the user interacting with the tool.” There is no indication that the White House knew about this function before yesterday's report.

Nevertheless, the canvas fingerprint goes against the White House policy. It may not be a traditional cookie, but it serves the same function as a multi-session cookie, allowing the tracking of unique computers across the web. While the AddThis privacy policy does not mention the canvas fingerprint by that name, it notes that AddThis sometimes places “web beacons” on pages, which would load prior to the user interacting with the AddThis button.

The main distinction is that the canvas fingerprint can’t be blocked by cookie management techniques or erased with your other cookies. This is inconsistent with the White House’s promise that “Visitors can control aspects of website measurement and customization technologies” used on its site. The website’s How To instructions are no help, because they are limited to traditional cookies and Flash cookies. AddThis’ opt-out is no more helpful, as it only prevents targeting, not tracking: “The opt-out cookie tells us not to use your information for delivering relevant online advertisements.”

The White House is far from alone. According to the researchers, over 5,000 sites include canvas fingerprinting code, the vast majority of it from AddThis.

What You Can Do to Protect Yourself From Canvas Fingerprinting

Fortunately, some solutions are available. You can block trackers like AddThis using an algorithmic tool such as EFF’s Privacy Badger, or a list-based one like Disconnect. Or, if you're a fairly knowledgeable user willing to do some extra work, you can use a manually controlled script blocker such as NoScript to run JavaScript only from domains you trust.

This was cross-posted from EFF's DeepLinks blog.
CryptoLocker Down, But NOT Out
Wed, 23 Jul 2014 10:32:54 -0500

So, the US government and law enforcement claim to have managed the disruption of CryptoLocker. Officials are touting it either as a total victory or, more realistically, as a slowdown of the criminals leveraging the malware and its botnets.

Even as the government was touting the takedown, threat intelligence companies around the world (including MSI) were already noticing that the attackers were mutating, adapting and re-building a new platform to continue their attacks. The attackers involved aren’t likely to stay down for long, especially given how lucrative the CryptoLocker malware has been. Many estimates exist for the number of infections and the amount of payments received, but most of them are, in a word, staggering. With that much money on the line, you can expect a return of the nastiness, and you can expect it rather quickly.

Takedowns are effective for short-term management of specific threats, and they make great PR, but in most cases they do little to actually turn the tide. The criminals, who often escape prosecution or real penalties, usually just re-focus and rebuild.

This is just another reminder that even older malware remains a profit center. Mutations, variants and enhancements can turn old problems, like Zeus, back into new problems. Expect the same with CryptoLocker and its ilk. This is not a problem that is likely to go away soon, nor one that a simple takedown can solve.

This was cross-posted from MSI's State of Security blog.

The Unisys Ponemon Study – Is It Actually Relevant to ICSs?
Tue, 22 Jul 2014 11:00:18 -0500

Unisys sponsored a report by the Ponemon Institute: “Critical Infrastructure: Security Preparedness and Maturity.” The cover of the report shows control systems in a process facility, implying that the report addresses control systems.

It is important to understand the validity of the report's observations and conclusions, as it is being widely quoted. The report states that 57% of the respondents felt that ICS/SCADA were more at risk, and 67% claimed that they had cyber compromises over the past year involving either confidential information or disruption to operations. Yet from Pie Chart 2, at most 20% of the respondents were directly responsible for control systems. Many of the questions asked do not make sense for ICSs, and it is also not clear to me how a number of the questions can have answers that total more than 100%. Nor is it clear how many of the SCADA/ICS networks were even being monitored. If there were disruptions to operations, the impacts should have been obvious, with potential physical damage.

To me, the real question is whether these are corporate network issues, not control system issues. Some of the questions strongly imply that control system networks have been connected to corporate networks. For example, why ask questions about e-mail servers? The way some of the questions were asked leads me to believe that IT organizations may be responsible for some of the control system compromises. Certainly the issue of “maturity” needs to be asked in a different way – how mature are these corporate organizations in what they are doing TO the ICSs?

This is the second Ponemon report dealing with critical infrastructure that did not have significant ICS input. Consequently, I have discussed with Larry Ponemon my concerns about the need for a report on ICSs that has significant ICS involvement and asks the appropriate questions for ICS cyber security.

This was cross-posted from the Unfettered blog.

Black Hat Conference Talk on How to Break Tor Cancelled
Tue, 22 Jul 2014 10:52:01 -0500

Organizers of the Black Hat security conference that's scheduled to take place next month in Las Vegas announced that a presentation detailing how the Tor network's users can be de-anonymized has been cancelled.

Michael McCord and Alexander Volynkin, both researchers at Carnegie Mellon University's CERT, were scheduled to hold a talk titled "You Don't Have to be the NSA to Break Tor: Deanonymizing Users on a Budget." The abstract of the presentation, which has been removed from the official Black Hat website, revealed that the researchers had found a way to break the anonymity network by "exploiting fundamental flaws in Tor design and implementation." The experts claimed to be able to identify the IP addresses of Tor users and even uncover the location of hidden services with an investment of less than $3,000.

"In our analysis, we've discovered that a persistent adversary with a handful of powerful servers and a couple gigabit links can de-anonymize hundreds of thousands Tor clients and thousands of hidden services within a couple of months," the researchers said in the abstract of their presentation.

However, according to the event's organizers, they had to remove the briefing from their schedule after legal counsel for the Software Engineering Institute (SEI) and Carnegie Mellon University informed them that "Mr. Volynkin will not be able to speak at the conference since the materials that he would be speaking about have not yet [been] approved by CMU/SEI for public release."

Roger Dingledine, one of the original developers of the Tor Project, clarified on Monday that the organization doesn't have anything to do with the decision to cancel the talk.

"We did not ask Black Hat or CERT to cancel the talk. We did (and still do) have questions for the presenter and for CERT about some aspects of the research, but we had no idea the talk would be pulled before the announcement was made," Dingledine said. "In response to our questions, we were informally shown some materials. We never received slides or any description of what would be presented in the talk itself beyond what was available on the Black Hat Webpage."

Dingledine also took the opportunity to encourage researchers who find vulnerabilities in Tor to disclose them responsibly.

"Researchers who have told us about bugs in the past have found us pretty helpful in fixing issues, and generally positive to work with," he explained. 

About the Author: Eduard Kovacs is a reporter for SecurityWeek

Keeping it Simple - Part 1
Mon, 21 Jul 2014 13:16:27 -0500

Apparently, I struck a nerve with small business people trying to comply with PCI. In an ideal world, most merchants would be filling out SAQ A, but we do not live in an ideal world. As a result, I have collected some ideas on how merchants can make their lives easier.

Do Not Store Cardholder Data

It sounds simple, but it amazes me how many small businesses are storing cardholder data (CHD).  In most cases, it is not like they wanted to store CHD, but the people in charge just did not ask vendors that one key question, “Does your solution store cardholder data?”  If a vendor answers “Yes”, then you should continue your search for a solution that does not store CHD.

Even when the question is asked of vendors, you may not get a clear answer.  That is not necessarily because the vendor is trying to hide something, but more likely because the salespeople have never been asked this question before.  As a result, do not be surprised if the initial answer is, “I’ll have to get back to you on that.”  If you never get an answer or the answer is not clear, then you should move on to a different vendor that does provide answers to such questions.

If your organization cannot find a solution that does not store CHD, then at least you are going into a solution with your eyes open.  However, in today’s payment processing application environment, most vendors are doing all that they can to avoid storing CHD.  If the vendors you are looking at for solutions are still storing CHD, then you may need to get creative to avoid storing CHD.

That said, even merchants that only use points of interaction (POI) such as card terminals can also end up with CHD being stored.  I have encountered a number of POIs that were delivered from the processor configured such that the POI was storing full PAN.  Apparently, some processors feel it is the responsibility of the merchant to configure the POI securely even though no such instructions were provided indicating that fact.  As a result, you should contact your processor and have them walk you through the configuration of the POI to ensure that it is not storing the PAN or any other sensitive information.

Then there are the smartphone and tablet solutions from Square, Intuit and a whole host of other mobile solution providers. While the PCI SSC has indicated that such solutions will never be considered PCI compliant, mobile POIs continue to proliferate among small businesses. The problem with most of these solutions arises when a card will not work through the swipe/dip and the CHD is manually keyed into the device. It is at that point that the smartphone/tablet keyboard logger software captures the CHD, and it will remain on the device until it is overwritten, which can be three to six months down the road. In the case of EMV, the device can capture the PIN if it is entered through the screen, thanks to the built-in keyboard logger. As a result, most EMV solutions use a signature and not a PIN. The reason Square, Intuit and the like get away with peddling these non-compliant POI solutions is that they also serve as the merchant’s acquiring bank and are accepting the risk of the merchant using a non-compliant POI.

The bottom line here is that merchants need to understand these risks and then make appropriate decisions about which risks they are willing to accept in regard to the explicit or implicit storage of CHD.

Mobile Payment Processing

The key thing to know about these solutions is that the PCI Security Standards Council has publicly stated that they will never be considered PCI compliant. Yes, you heard that right: they will never be PCI compliant. That is mostly because of the PCI PTS standard regarding the security of the point of interaction (POI) for PIN entry, and the fact that smartphones and tablets have built-in keyboard loggers that record everything entered into them. There are secure solutions, such as the Verifone PAYware line of products. However, these products only use the mobile device as a display. No cardholder data is allowed to be entered into the mobile device.

So why are these solutions even available if they are not PCI compliant? It is because a number of the card brands have invested in the companies producing them. As a result, the card brands have a vested interest in allowing them to exist. And since the companies offering the solutions are also acting as the acquiring bank for the merchant, they explicitly accept the risk that these solutions present. That is the beauty of the PCI standards: if a merchant’s acquiring bank approves of something, then the merchant is allowed to do it. However, very few merchants using these solutions understand the risk they present.

First is the risk presented by the swipe/dip device.  Some of these devices encrypt the data at the swipe/dip but not all.  As a result, you should ask the organization if their swipe/dip device encrypts the information.  If it does encrypt, then even if the smartphone/tablet comes in contact with the information, it cannot read it.  If it is not encrypted, I would move on to the next mobile payments solution provider.

The second risk is the smartphone/tablet keyboard logger. This feature is what allows your mobile device to guess what you want to type, know what songs you like, and provide a whole host of other conveniences. However, these keyboard loggers also remember anything typed into them, such as primary account numbers (PAN), driver’s license numbers and any other sensitive information they come into contact with. They retain this information as long as it is not overwritten in the device’s memory. Depending on how much memory a device has, this can be anywhere from weeks to months. One study a few years back found that information could be found on mobile devices for as long as six months, with an average of three months.

While encrypting the data at the swipe/dip will remove the risk that the keyboard logger has CHD, if you manually key the PAN into the device, then the keyboard logger will record it.  As a result, if you are having a high failure rate with swiping/dipping cards, you will have a lot of PANs contained in your device.

The bottom line is that if you ever lose your mobile device or trade it in, you risk exposing CHD if you do not properly wipe the device. It is not that these solutions should never be used, but the purveyors of these solutions should be more forthcoming about the risks so that merchants can make informed decisions that go beyond cheap interchange fees.

There are more things merchants can do to keep it simple and I will discuss those topics in a future post.

This was cross-posted from the PCI Guru blog.

The Five Stages of Vulnerability Management
Mon, 21 Jul 2014 09:12:37 -0500

By: Irfahn Khimji

The key to having a good information security program within your organization is having a good vulnerability management program. Most, if not all, regulatory policies and information security frameworks advise having a strong vulnerability management program as one of the first things an organization should do when building its information security program.

The Council on Cyber Security specifically lists it as number four in the Top 20 Critical Security Controls.

Over the years, I’ve seen a variety of vulnerability management programs and worked with many companies at various levels of maturity in their VM programs. This post will outline the five stages of maturity based on the Capability Maturity Model (CMM) and give you an idea of how to take your organization to the next level.


The CMM is a model that helps develop and refine a process in an incremental, definable way. More information on the model can be found here. The five stages of the CMM are:

  1. Initial
  2. Managed
  3. Defined
  4. Quantitatively Managed
  5. Optimizing

Stage 1: Initial

In the ‘Initial’ stage of a vulnerability management program, there are generally minimal processes and procedures, if any. Vulnerability scans are done by a third-party vendor as part of a penetration test or an external scan. These scans are typically done one to four times per year at the request of an auditor or to satisfy a regulatory requirement.

The vendor who does the audit will provide a report of the vulnerabilities within the organization. The organization will typically remediate any ‘Critical’ or ‘High’ risks to ensure that they remain compliant. The remaining information gets filed away once a passing grade has been given.

I recently wrote a post on how security is not just a check box anymore. If you are still in this stage, you are a prime target for an attacker. It would be wise to begin maturing your program if you haven’t started already.

Stage 2: Managed

In the ‘Managed’ stage of a vulnerability management program, vulnerability scanning is brought in-house. The organization defines a set of procedures for vulnerability scanning, purchases a vulnerability management solution, and begins to scan on a weekly or monthly basis. Unauthenticated vulnerability scans are run, and the security administrators begin to see vulnerabilities from an exterior perspective.

Most organizations I see in this stage do not have support from their upper management, leaving them with a limited budget. This results in purchasing a relatively cheap solution or using a free open source vulnerability scanner. While lower-end solutions do provide a basic scan, they are limited in the reliability of their data collection, business context and automation.

Using a lower-end solution could prove to be problematic in a couple of different ways. The first is in the accuracy and prioritization of your vulnerability reporting. If you begin to send reports to your system administrators with a bunch of false positives, you will immediately lose their trust. They, like everyone else these days, are very busy and want to make sure they are maximizing their time effectively. Having the trust of the system administrators is a crucial component of an effective vulnerability management program.

The second problem is prioritization: even once you have verified that the vulnerabilities are real, how do you decide which ones should be fixed first? Most solutions offer a High/Medium/Low rating or a 1-10 score. With their limited resources, system administrators can realistically only fix a few vulnerabilities at a time. How do they know which 10 is the most urgent 10, or which High is the highest? Without appropriate prioritization, this can be a daunting task.

Luckily, this section isn’t all doom and gloom! If you’re looking for a great way to start a reliable and actionable vulnerability management program, we at Tripwire offer a small version of our Enterprise level scanner for free. Check out Secure Scan if you haven’t already. It’s not a free trial, but a free license for up to 100 IPs!

Stage 3: Defined

In the ‘Defined’ stage of a vulnerability management program the processes and procedures are well-characterized and understood throughout the organization. The information security team has support from their executive management, as well as trust from the system administrators.

At this point, the information security team has proven that the vulnerability management solution they chose is reliable and safe for scanning on the organization’s network. Authenticated vulnerability scans are run on a daily or weekly basis with audience-specific reports being delivered to various levels in the organization. The system administrators receive specific vulnerability reports, while management receives vulnerability risk trending reports.

Vulnerability management state data is shared with the rest of the information security ecosystem to provide actionable intelligence for the information security team.

The majority of organizations I’ve seen are somewhere between the ‘Managed’ and the ‘Defined’ stage. As I noted above, a very common problem is gaining the trust of the system administrators. If the solution that was initially chosen did not meet the requirements of the organization, it can be very difficult to regain their trust.

Stage 4: Quantitatively Managed

In the ‘Quantitatively Managed’ stage of a vulnerability management program, the specific attributes of the program are quantifiable and metrics are provided to the management team. The following is a summary of the automation metrics recommended by the Council on Cyber Security:

  1. What is the percentage of the organization’s business systems that have not recently been scanned by the organization’s vulnerability management system?
  2. What is the average vulnerability score of each of the organization’s business systems?
  3. What is the total vulnerability score of each of the organization’s business systems?
  4. How long does it take, on average, to completely deploy operating system software updates to a business system?
  5. How long does it take, on average, to completely deploy application software updates to a business system?

These metrics can be viewed holistically as an organization or broken down by the various business units to see which business units are reducing their risk and which are lagging behind.
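As a rough sketch of how the first three metrics might be computed, here is a stdlib-only example. The field names and scan data are hypothetical; in practice the records would come from your vulnerability management system’s API or export.

```python
from statistics import mean

# Hypothetical per-system records; real data would come from your scanner's API.
systems = [
    {"name": "web01", "scanned_recently": True, "vuln_scores": [7.5, 5.0, 9.8]},
    {"name": "db01", "scanned_recently": True, "vuln_scores": [4.3]},
    {"name": "hr-legacy", "scanned_recently": False, "vuln_scores": []},
]

# Metric 1: percentage of business systems NOT recently scanned
unscanned_pct = 100 * sum(not s["scanned_recently"] for s in systems) / len(systems)

# Metrics 2 and 3: average and total vulnerability score per system
for s in systems:
    s["total_score"] = sum(s["vuln_scores"])
    s["avg_score"] = mean(s["vuln_scores"]) if s["vuln_scores"] else 0.0

print(f"unscanned: {unscanned_pct:.1f}%")
for s in systems:
    print(f"{s['name']}: total={s['total_score']:.1f} avg={s['avg_score']:.2f}")
```

The same dictionary-of-records shape makes it easy to group the results per business unit, as the paragraph above suggests.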


Lastly, in the ‘Optimizing’ stage, the metrics defined in the previous stage are targeted for improvement. Optimizing each of the metrics will ensure that the vulnerability management program continuously reduces the attack surface of the organization. The information security team should work together with the management team to set attainable targets for the vulnerability management program. Once those targets are met consistently, new and more aggressive targets can be set with the goal of continuous process improvement.


As one of the first four of the Top 20 Critical Security Controls, vulnerability management is one of the first things that should be implemented in a successful information security program. Ensuring the ongoing maturation of your vulnerability management program is key to reducing the attack surface of your organization. If we can each reduce the surface the attackers have to work with, we can make this world more secure, one network at a time!

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Cached Domain Credentials in Vista/7 (AKA Why Full Drive Encryption is Important) Thu, 17 Jul 2014 11:09:00 -0500 By: Ronnie Flathers 

Recently, I was conducting a security policy audit of a mid-size tech company and asked if they were using any form of disk encryption on their employees’ workstations. They were not; however, they pointed me to a policy document that required all “sensitive” files to be stored in an encrypted folder on the user’s desktop. They assumed this was adequate protection against the files being recovered should a laptop be lost or stolen.

Unfortunately, this is not the case. Without full disk encryption (like BitLocker), sensitive system files will always be available to an attacker, and credentials can be compromised. Since Windows file encryption is based on user credentials (either local or AD), once these creds are compromised, an attacker would have full access to all “encrypted” files on the system. I will outline an attack scenario below to stress the importance of full drive encryption.


If you are not familiar, Windows has a built in file encryption function called Encrypting File System (EFS) that has been around since Windows 2000. If you right click on a file or folder and go to Properties->Advanced you can check a box called “Encrypt contents to secure data”. When this box is checked, Windows will encrypt the folder and its contents using EFS, and the folder or file will appear green in Explorer to indicate that it is protected:

Encrypted Directory
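The same EFS flag can also be set from the command line with Windows’ built-in cipher.exe (`/e` encrypts, `/d` decrypts). As a minimal sketch — the wrapper functions and example path below are mine, not from the article:

```python
import subprocess

def efs_command(path: str, encrypt: bool = True) -> list[str]:
    """Build the cipher.exe invocation: /e encrypts a folder with EFS, /d decrypts it."""
    return ["cipher", "/e" if encrypt else "/d", path]

def set_efs(path: str, encrypt: bool = True) -> None:
    # Windows only; cipher.exe ships with the OS.
    subprocess.run(efs_command(path, encrypt), check=True)

# Example (run on Windows): set_efs(r"C:\Users\nharpsis\Desktop\secret")
```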


Now only that user will be able to open the file; even Administrators will be denied access. Here a Domain Admin (‘God’) is attempting to open the encrypted file that was created by a normal user (‘nharpsis’):




According to Microsoft’s TechNet article on EFS, “When files are encrypted, their data is protected even if an attacker has full access to the computer’s data storage.” Unfortunately, this is not quite true. The encrypted file above (“secret.txt”) will be decrypted automatically and viewable whenever ‘nharpsis’ logs in to the machine. Therefore to view the files, an attacker only needs to compromise the ‘nharpsis’ account.


In this attack scenario, we will assume that a laptop has been lost or stolen and is powered off. There are plenty of ways to mount an online attack against Windows or extract credentials and secret keys straight from memory. Tools like mimikatz or the Volatility Framework excel at these attacks.

For a purely offline attack, we will boot from a live Kali Linux image and mount the Windows hard drive. As you can see, even though we have mounted the Windows partition and have read/write access to it, we are unable to view files encrypted with EFS:

Permission Denied - Kali

Yes, you read that right: we are root, and we are still getting “Permission denied”.

Commercial forensic tools like EnCase have functionality to decrypt EFS, but even they require the username and password of the user who encrypted it. So the first step will be to recover Ned Harpsis’s credentials.

Dumping Credentials

There are numerous ways to recover or bypass local accounts on a Windows machine. SAMDUMP2 and ‘chntpw’ are included with Kali Linux and do a nice job of dumping NTLM hashes and resetting account passwords, respectively. However, in this instance, as with the company I was auditing, these machines are part of a domain and AD credentials are used to log in.

Windows caches domain credentials locally to facilitate logging in when the Domain Controller is unreachable. This is how you can log in to your company laptop when traveling or on a different network. If any domain user, including admins, has logged in to this machine, their username and a hash of their password will be stored in one of the registry hives.

Kali Linux includes the tool ‘cachedump’ which is intended to be used just for this purpose. Cachedump is part of a larger suite of awesome Python tools called ‘creddump’ that is available in a public svn repo:

Unfortunately, creddump has not been updated in several years, and you will quickly realize when you try to run it that it does not work on Windows 7:

Cachedump Fail

This is a known issue and is discussed on the official Google Code project.

As a user pointed out, the issue persisted over to the Volatility project and an issue was raised there as well. A helpful user released a patch file for the cachedump program to work with Windows 7 and Vista.

After applying the patches and fixes I found online, as well as some minor adjustments for my own sanity, I got creddump working on my local Kali machine.

For convenience’s sake, I have forked the original Google Code project and applied the patches and adjustments. You can find the updated and working version of creddump on the Neohapsis Github:


Now that I had a working version of the program, it was just a matter of getting it on to my booted Kali instance and running it against the mounted Windows partition:

Creddump in action

Bingo! We have recovered two hashed passwords: one for ‘nharpsis’, the user who encrypted the initial file, and ‘god’, a Domain Admin who had previously logged in to the system.

Cracking the Hashes

Unlike locally stored credentials, these are not NT hashes. Instead, they are in a format known as ‘Domain Cached Credentials 2’ or ‘mscash2’, which uses PBKDF2 to derive the hashes. Unfortunately, PBKDF2 is a computation-heavy function, which significantly slows down the cracking process.
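To get a feel for why PBKDF2 slows cracking down, compare one password guess against a fast hash with one guess against DCC2-style PBKDF2. The sketch below uses Python’s stdlib with 10,240 iterations of HMAC-SHA1 and a 16-byte output, which matches the commonly documented mscash2 parameters; the real DCC2 computation derives its input from the NT hash and the lowercased username, which is omitted here for brevity.

```python
import hashlib
import time

password = b"Welcome1!"
salt = b"nharpsis"  # DCC2 uses the lowercased username as the salt

# Time one guess against a fast, unsalted hash...
start = time.perf_counter()
hashlib.md5(password).digest()
fast = time.perf_counter() - start

# ...versus one guess against PBKDF2 with DCC2-style parameters
start = time.perf_counter()
dk = hashlib.pbkdf2_hmac("sha1", password, salt, 10240, dklen=16)
slow = time.perf_counter() - start

print(f"one PBKDF2 guess is roughly {slow / max(fast, 1e-9):.0f}x slower than one MD5")
```

Every candidate password costs the attacker those 10,240 HMAC rounds, which is exactly why a short wordlist beats brute force here.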

Both John and oclHashcat support the ‘mscash2’ format. When using John, I recommend sticking to a relatively short wordlist rather than attempting a pure bruteforce.

If you want to attempt to use a large wordlist with some transformative rules or run pure bruteforce, use a GPU cracker with oclHashcat and still be prepared to wait a while.

To prove that cracking works, I used a wordlist I knew contained the plaintext passwords. Here’s John cracking the domain hashes:

Cracked with John

Note the format is “mscash2”. The Domain Admin’s password is “g0d”, and nharpsis’s password is “Welcome1!”

I also extracted the hashes and ran them on our powerful GPU cracking box here at Neohapsis. For oclHashcat, each line must be in the format ‘hash:username’, and the hash mode for mscash2 is ‘-m 2100’:
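Whichever cracker you use, getting the input file format right matters. A small helper to turn recovered (username, hash) pairs into oclHashcat’s ‘hash:username’ lines — the hash strings below are placeholders, not real DCC2 digests:

```python
def to_hashcat_lines(creds: dict[str, str]) -> list[str]:
    """Format {username: dcc2_hash} entries as 'hash:username' lines for -m 2100."""
    return [f"{h}:{user}" for user, h in creds.items()]

# Placeholder hashes for illustration; cachedump outputs the real hex digests.
creds = {
    "nharpsis": "aabbccddeeff00112233445566778899",
    "god": "99887766554433221100ffeeddccbbaa",
}

for line in to_hashcat_lines(creds):
    print(line)
```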


Accessing the encrypted files

Now that we have the password for the user ‘nharpsis’, the simplest way to retrieve the encrypted file is just to boot the laptop back into Windows and log in as ‘nharpsis’. Once you are logged in, Windows kindly decrypts the files for you, and we can just open them up:



As you can see, if an attacker has physical access to the hard drive, EFS is only as strong as the user’s login password. Since this is a purely offline attack, an attacker has unlimited time to crack the password and then access the sensitive information.

So what can you do? Enforce full drive encryption. When BitLocker is enabled, everything on the drive is encrypted, including the location of the cached credentials. Yes, there are attacks against BitLocker encryption, but they are much more difficult than attacking a user’s password.

In the end, I outlined the above attack scenario to my client and recommended they amend their policy to include mandatory full drive encryption. Hopefully this straightforward scenario shows that solely relying on EFS to protect sensitive files from unauthorized access in the event of a lost or stolen device is an inadequate control.

This was cross-posted from the Neohapsis blog.

Snowden Continues to Expose Allied Cyber Tactics Thu, 17 Jul 2014 10:52:42 -0500 NSA whistleblower and Putin poster boy Edward Snowden apparently released yet another document, this one exposing UK cyber spying techniques allegedly used by the GCHQ.

The document, according to The Intercept, lists multiple tools that the UK intelligence agency used to spy on social media accounts, interrupt or modify communications, and even manipulate online polls.

Tools like:

  • UNDERPASS – Change outcome of online polls
  • SILVERLORD – Disruption of video-based websites hosting extremist content
  • ANGRY PIRATE – Permanently disables a target’s account on a computer
  • PREDATORS FACE – Targeted Denial Of Service against Web Servers
  • And several others.

The release again leaves me scratching my head.

Since ancient times, countries have spied on each other, even their allies. Only the most naive would assume this practice has magically stopped in the online age. I do love how shocked governments appeared in the media when they found out that the NSA was snooping on them. What a joke.

And in this case, several of the tools listed sound like they are geared more toward fighting or countering enemy communications online, possibly those of Islamic militants.

One would have to ask, does this release from Snowden make the people of the UK or the US safer from government snooping, or more likely would it tell enemy nations exactly what tools have been and will be used against them?

Again, with Snowden, one has to ask: is he a champion of internet privacy, or simply a traitor to the US and her allies, exposing tools and techniques used against foreign nations and in the war on terror?

With Snowden pushing for an extension of his stay in Russia, it would seem the latter is correct.

 This was cross-posted from the Cyber Arms blog.

Compliance and Security Seals from a Different Perspective Wed, 16 Jul 2014 12:04:12 -0500 Compliance attestations. Quality seals like “Hacker Safe!” All of these things bother most security people I know because to us, these provide very little insight into the security of anything in a tangible way. Or do they? I saw this reply to my blog post on compliance vs. security which made an interesting point. A point, I dare say, I had not really put front-of-mind but probably should have.

Ron Parker was of course correct… and he touched on a much bigger point. Much of the time, compliance and security badges, aka “security seals,” on websites aren’t there to make the website or product actually more secure; they’re there to assure the customer that the site or entity is worthy of their trust and business. This is contrary to conventional thinking in the security community.

Think about that for a second.

With that frame of reference, all the push to compliance and all the silly little “Hacker Safe!” security seals on websites make sense. Maybe they’re not secure, or maybe they are, but the point isn’t to demonstrate some level of absolute security. The point is to reassure you, the user, that you are doing business with someone who thought about your interests. Well…at least they pretended to. Whether it’s privacy, security, or both… the proprietors of this website or that store want to give you some way to feel safe doing business with them.

All this starts to bend the brain a bit around the idea of why we really do security things. We need to earn someone’s business through his or her trust. The risks we take on the road to earning that business are ours to worry about. Who do you suppose is more qualified to assess the ‘appropriate risk level’: you or your customers? With some notable exceptions, the answer won’t be your customers.

Realistically you don’t want your customers trying to decide for themselves what is or isn’t appropriate levels of security. Frankly, I wouldn’t be comfortable with this either. The reality behind this thinking is that the customer simply doesn’t know any better, typically, and would likely make the wrong decision given the chance. So it’s up to you to decide, and that’s fair. Of course, this makes the assumption that you as the proprietor have the customer’s interests in mind, and have some clue on how to do risk assessments and balance risk/reward. Lots to assume, I know. Also, you know what happens when you ass-u-me, right?

So let’s wind back to my point now. Compliance and security seals are a good thing. Before you pick up that rock to throw at me, think about this again. The problem isn’t that compliance and “security seals” exist, but that we’re misunderstanding their utility. The answer isn’t to throw these tools away and create something else, because that something else would likely be just as complicated (or useless) and needlessly waste resources on a problem that is already somewhat on its way to being solved. Instead, let’s make compliance and security seals more useful to the end customer, so you can focus on making that risk equation balance in your favor. I don’t quite know what that solution would look like yet, but I’m going to investigate it with some smart people. I think ultimately there needs to be some way to convey the level of security ‘effort’ by the proprietor, one that becomes binding, so the owner can be held liable for providing false information or stretching the truth.

With this perspective I think we could take these various compliance regulations and align them with expectations that customers have, while tying them to some security and risk goals. This makes more sense than what I see being adopted today. The goal isn’t to be compliant, well, I mean, it is … but it’s not to be compliant and call that security. It’s to be compliant as a result of being more secure. Remembering that the compliance thing and security seal is for your customers is liberating and lets you focus on the bigger picture of balancing risk/reward for your business.

This was cross-posted from the Follow the Wh1t3 Rabbit blog.





Security: Not Just a Checkbox Anymore Tue, 15 Jul 2014 10:30:00 -0500 By: Irfahn Khimji

There have been many publicized victims of breaches recently. There can often be a lot of conjecture as to what happened, how it happened, and why it happened.

Did they have security controls in place? Are they getting accurate information? Is the information they are getting actionable? Is anyone actually actioning this actionable information?

These are all questions that we, as security practitioners, should be asking ourselves on a daily basis. All of them are proof that security and compliance cannot just be checkbox items anymore.

For example, within my organization, my security team and I may have satisfied all of the audit requirements: reporting on vulnerabilities, reporting on changes in the environment, and logging all events of interest. However, what is happening with that information?

Are the findings just being filed away so that when the audit team rolls around they can give me my customary passing check mark, or are they actually being remediated?

What systems am I covering in my organization? Am I only covering the 10% of my systems that are within scope of my audit? What if an attacker leverages an out-of-scope system within my organization as a stepping stone towards my more critical assets? Do I even know what systems are on my network? Do I know what software is installed on those systems? Is that software patched and secured?

As a security practitioner in your organization, I encourage you to take a minute and answer these questions to yourself. Answering these questions is a great first step towards building a deeper understanding of the surface area of risk within your organization.

Let’s take a minute to look at this from the keyboard of an attacker. If I were to target your organization, I’d be looking for the low-hanging fruit:

  • What systems has this organization forgotten about?
  • What vulnerabilities are on these systems?

Chances are if they have been forgotten about, there are some vulnerabilities I can exploit with great ease!

Is this organization monitoring for changes on their network? If not, I can turn off logging and create my own back doors without anyone noticing!

What We Need to Do as Defenders:

As defenders of our organization, we need to ensure that we are establishing an appropriate secure technology culture within our organizations. As more and more breaches are being publicized, business owners are becoming more aware of the risk associated with poor security practices.

Now is a great time to leverage a framework such as the Top 20 Critical Security Controls to get the support of key executives.

For more information, check out this post on Demonstrating Enterprise Commitment to Best Practice and Using the Top 20 Critical Security Controls to get Your CFO’s Attention.

If we treat information security as more than just a checkbox, we can make this world more secure one network at a time!

This was cross-posted from Tripwire's The State of Security blog.

Hacks of Houston Astros, Butler University Put Network Security on Center Stage Mon, 14 Jul 2014 11:07:06 -0500 Even though the Houston Astros have been the worst team in Major League Baseball for the last three seasons, one of the team’s off-the-field accomplishments — its proprietary internal computer database — is now the envy of the rest of the league.

This system, known as Ground Control, allows the team’s front office executives to centralize and exchange information about player contracts, scouting reports and statistics — all through one web address.

Yet, even as news story after news story praised Ground Control and general manager Jeff Luhnow, largely the brains behind the system, Luhnow himself spoke about his “low-level but omnipresent worry” about Ground Control: that the sensitive information it contained could be exposed. Given Luhnow’s past work as a technology entrepreneur, his risk-averse approach should come as no surprise.

In March, Luhnow told the Houston Chronicle that the team had insulated itself from risk by only giving employees access to the specific information they needed to make decisions.

Despite all these precautions, an outside hacker infiltrated Ground Control last month, revealing private conversations that the Astros had with other Major League Baseball teams. In the wake of the incident, Luhnow has said the team is working to upgrade its remote access security infrastructure and he, for the time being, has gone back to using a pencil and paper to take notes, just to be safe.

In acknowledging the “double-edged sword of technology,” he said that other teams should also evaluate their own remote access security, because, in his words, “If it happened to us, could it have happened to other clubs?”

The Astros leak is interesting because it’s thrust into the spotlight an organization whose network security practices generally aren’t newsworthy — when was the last time you thought about how a baseball team secures its data?

Similarly, when was the last time you thought about how your college or university manages personal information about members of its community?

If you’re a student, alumnus, or staff or faculty member affiliated with Butler University, the thought has definitely crossed your mind in the last few weeks, following news of a remote hack that targeted the school. The attack is believed to have compromised the personal information — birth dates, Social Security numbers and bank account information — of up to 200,000 people in the Butler community.

Although a suspect has been arrested, the investigation is still ongoing. Meanwhile, Butler has already taken steps to patch up its remote access infrastructure.

Enterprise-Quality Network Security — Not Just For Enterprises

Together, the high-profile hacking of the Houston Astros and Butler University show why it’s important for every organization to think like an enterprise in constructing a network security plan. It’s not just enterprises or retailers like eBay and Target that can be victimized and subsequently lose the trust of their customers if a breach occurs.

As more information about both hacks is revealed, many news stories will focus on preventative measures — and rightfully so. What they should say is that it’s most important for a company to limit its network security vulnerabilities, and the best way to do that is through a comprehensive security framework that secures every possible access point into your company. Attackers are persistent and creative — if they’re unable to breach the first line of defense, they’ll just keep prodding until they find a point of entry. Companies need a “kitchen sink” approach, from firewalls and VPN solutions that shore up remote access to rigorous employee training.

You’ll notice we didn’t mention Luhnow’s temporary “pen and paper” solution. That’s because it’s important not to be scared away from technology in the aftermath of these types of incidents. They’ll continue to happen, but with the right network security approach, your business will be spared the embarrassment and front page headlines that follow a hack.

This was cross-posted from the VPN HAUS blog.

Is BYOD Security Really Concerned with Safety – or Is It About Control? Mon, 14 Jul 2014 09:54:00 -0500 If you are a regular follower of this blog, you’ve probably noticed that I haven’t been writing much in the past few months. I have simply been too busy, traveling and speaking at some really great security conferences.

The most recent and the most informative (for me at least) was the International NCSC One Conference 2014 at the World Forum in The Hague. This is a massive and well organized event run by the Netherlands National Cyber Security Centre, the Dutch equivalent to the US-CERT. Close to 950 people listened to my talk on “The Internet of Insecure Things.”

During NCSC One I heard some great talks on the state of encryption technology today, SCADA security consortiums, and foreign APT threats. But the highlight was the plenary speech by Jon Callas entitled “Security and Usability in the Age of Surveillance.” Jon’s talk focused on Bring Your Own Device (BYOD) security, but it raised some questions that are core to cyber security in the 21st century.

If you’re not familiar with the BYOD security debate and want to get some background, check out my blog on the topic: The iPhone is coming to the Plant Floor – Can we Secure it?. The short version is that the BYOD controversy revolves around the possible security issues that arise when employees use their personal mobile devices to access privileged company resources. A common example is using your iPhone to access your company’s email system – does this increase or decrease corporate security?

Does using personal devices on the plant floor increase or decrease corporate security?


What is that Security Policy REALLY Trying to Achieve?

The first question that Jon brought up was around understanding the real goals of any security policy or program. While security traditionalists talk about ensuring Confidentiality, Availability, and Integrity, Jon suggested that the real goals can be divided into two more general ones:

  • Maintaining Safety
  • Maintaining Control

Most of the time the reason given for a specific security policy is safety – for example, securing a SCADA system to ensure the safety of the processes, people, and products. This reason is hard to argue with. After all, who wants to be less safe?

In reality there are many security policies that have nothing to do with safety; instead, they are about maintaining IT control. Now this isn’t necessarily bad, but it is a lot harder to sell compared to the safety argument. So the safety excuse gets rolled out every time.

Enter the Evil Smart Phone

Jon then explained how this relates to the BYOD controversy. When mobile devices first came onto the market, the IT department loved the BlackBerry. Like the mainframe and the central server, the BlackBerry architecture centralized everything. Every email you sent and every note you made passed under the watchful eyes of the IT department. Any other mobile device was banned because it was “unsafe” for confidential company information.

Unfortunately for BlackBerry, the real customer wasn’t the IT department, but rather the end user. When the user was a lowly engineer or a salesperson, the iPhone, iPad, or Android could be safely ignored. But once company CEOs started buying iPhones and saw how effective they were, suddenly IT had to start accepting other mobile devices.

The flood gates burst open, and soon iPhones and Androids dominated the corporate world while BlackBerry withered to a shadow of its former glory.

Yet to this day we still hear lots of crying about how insecure personal mobile devices are and how the IT department has to “bring the problem under control.” There are endless pitches for BYOD security products and no shortage of corporate policies (many of questionable effectiveness) intended to “manage the problem.” The reason always given is the “safety and security” of corporate intellectual property.

Eric Byres presenting at the International NCSC One Conference 2014 in The Hague, Netherlands on June 3rd.


Tell Me Again Why my Company Laptop is More Secure than my Personal iPhone...

But is the iPhone or Android really the security risk the IT world claims? Or are they just annoyingly difficult to maintain centralized control over?

Sure, smart phones aren’t perfect, but how many truly effective rootkits have you seen for attacking iPhones? Now consider how many rootkits there are for taking over PCs. How many serious mobile device vulnerabilities have you needed to quickly patch in the last year? Maybe two? And how often do you have to install a critical Windows, Java, or Adobe patch on your PC? Every week? As Jon put it: “Antivirus software for the mobile device is not exactly a growth market.”

In fact, it may be that personal phones are actually more secure than all the other devices that are welcomed by traditional IT.

Smart phones are also more carefully guarded by their owners. Jon quoted studies showing that, on average, people noticed and reported a missing phone in less than 20 minutes compared to 24 hours for a missing wallet. If someone stole my laptop on a weekend, it could be two days before I noticed. And once an iPhone goes missing, the remote wipe features are very effective. I doubt my IT department could ever wipe the laptop they gave me if I happen to lose it.


Mobile Devices are NOT Perfect but...

To be clear, Jon is NOT saying that mobile devices are perfectly secure – far from it. But all the evidence suggests that they are more secure than any other common computing device currently in use. Thus the argument to tangle up iPhones and Androids in red tape is just an excuse. And industry might just be better off from a security point of view if we embraced — or even encouraged — the mobile device on the plant floor. It certainly is worth considering.


Picking the Right SCADA Security Battles

I often think that safety as an excuse for control is common in airport security. Many of the restrictions and processes required by both the TSA and the airlines with the “We’re doing this for your protection” justification appear to be a way to make the customer easier to control (or are an excuse to cut services).

For example, The Atlantic reported that a TSA employee confessed to reporter Jeffrey Goldberg that the purpose of enhanced pat-downs was to make opting out of full-body scans so unpleasant that everyone would quiescently choose to go through the scanner. This would make the inspection process quicker and cheaper for the TSA.

Causing people frustration never leads to better security; it just encourages rebellious behavior. This is doubly true for the industrial world. It is human nature that people (especially engineers!) only have so much patience for security policies that make their job harder to do.

Institute a few security controls that offer clear safety benefits and people will respect them. Throw too many controls in a person’s way and they will find a way to circumvent them so they can get their job done. Unfortunately people don’t necessarily pick the least effective controls to ignore – they might obey the ineffective measures and bypass the important ones.

Thus, as SCADA security professionals we need to pick our security battles carefully. After listening to Jon, I will be looking deeper into the real goals of any SCADA security policy or technology I am exposed to. Is it really helping make SCADA and ICS safer? Or is it just a way to make control easier? Is it addressing the real risks? Or is it just for show? Fail to ask these questions and we risk creating a backlash against the whole SCADA/ICS security message. And that will be a loss for the entire industry.

Cross Posted from the Tofino Security Blog

Why No Security Analytics Market? Thu, 10 Jul 2014 12:31:30 -0500 So, occasionally I get a call from somebody (vendor, end-user, investor, etc.) inquiring about “the size of the security analytics market.” They are usually shocked at our answer: since there is no such market, there is no size to report.

If you recall, we [myself included] don’t really believe there is such a market at this time and find any discussion of its size “premature” (at least). Let’s explore this in detail – and hopefully save some of my time for loftier pursuits.

In essence, if you are in the market for a car, you are very unlikely to buy a toilet bowl or a jet plane instead. Everybody knows what a car is, what it does, how it functions [well, at some level] and how much it costs. Sure, there is a difference between a Kia and a Maserati, but such variances are easily understood by the customers. While market definition in general is hard, industrial organization (IO) economics has made a lot of practical advances toward that goal (for example, some use “the smallest area within which it is possible to be a viable competitor”). Close to home in our infosec (“cyber security”?) realm, if you need DLP, you go and buy DLP. If you need a WAF, you go get that. Even with SIEM, there is relative clarity in terms of features, benefits and prices.

Do we see ANYTHING of this sort when “security analytics” is mentioned?

No, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no! :-)

There is no common feature set, no critical/core capabilities, no jointly understood need, no buyer-seller agreement on anything, no clear competitive dynamics ….

As we say in our paper, "defining 'security analytics' at this point simply involves looking up the words in the dictionary. There is no 'security analytics market' or dedicated and purchasable 'security analytics tools'; security analytics is a concept that an organization can practice, but can't buy. Many different tools — from network intrusion prevention system (NIPS) to DLP and SIEM — use various algorithms to analyze data, thus performing analytics. Thus, if security-relevant data is subjected to analytic algorithms, security analytics is being practiced." Along the same lines, one enterprise I spoke with defined it as the "ability to analyze a lot of security data over long periods of time, find threats and create models" [not too specific – but hitting a few interesting things such as long-term analysis, threat discovery, models, etc.].

In fact, I can give you a handy analytical tool to create your very own “security analytics” vendor – right here, right now! FREE!!

Here is how it works – pick one or more from each item 1.-4. below:

  1. Pick a problem to solve (sadly, some vendors have skipped this step altogether; others chose really hard, fuzzy problems like insider threat or “advanced” threat)
  2. Collect some data (some logs, network flows, session metadata, full packets, threat intelligence, process execution records, whatever – the more, the merrier!)
  3. Analyze it in some way (ideally not by using rules, but any algorithm will suffice – think various types of ML [supervised or unsupervised], clustering, deep anything, forensics something, text mining, etc.)
  4. Present the results in some way (ideally visualize, but – if you are adventurous – also act automatically, reconfigure, etc)
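To make the recipe concrete, here is a minimal sketch of steps 2–4 using only the Python standard library. All hosts and counts are invented for illustration; the "algorithm" is a crude z-score over failed-login counts, which is exactly the kind of thin analysis the recipe is poking fun at — not a product, and not anything a real vendor necessarily ships.

```python
# Steps 2-4 of the "build your own security analytics vendor" recipe,
# stdlib only. All data below is hypothetical.
import statistics

# Step 2: "collect some data" -- made-up per-host failed-login counts.
failed_logins = {
    "web-01": 3, "web-02": 5, "db-01": 4,
    "hr-laptop": 87,  # the planted outlier
    "mail-01": 6, "build-01": 2,
}

# Step 3: "analyze it in some way" -- flag hosts far from the fleet mean
# (a crude unsupervised anomaly score; no rules were harmed).
counts = list(failed_logins.values())
mean = statistics.mean(counts)
stdev = statistics.stdev(counts)

def anomaly_score(value):
    """Z-score: how many standard deviations from the fleet average."""
    return (value - mean) / stdev

# Step 4: "present the results in some way" -- a sorted text report.
flagged = sorted(
    ((host, anomaly_score(n)) for host, n in failed_logins.items()),
    key=lambda pair: pair[1], reverse=True,
)
for host, score in flagged:
    marker = "  <-- investigate" if score > 2 else ""
    print(f"{host:10s} z={score:+.2f}{marker}")
```

Note that this toy skips step 1 entirely — it never says what problem it solves — which, per the recipe, is apparently optional.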

That’s it! You are now a player in a burgeoning [in your mind] “security analytics market”…

BTW, if you want to hear me ramble about it even more, check out this podcast [MP3].

This was cross-posted from the Gartner blog.

Copyright 2010 Respective Author at Infosec Island]]>
Cyber Espionage Campaign Hits Energy Companies Thu, 10 Jul 2014 11:47:38 -0500 Over the past couple of weeks, cybersecurity vendors have announced the uncovering of a successful cyber espionage campaign carried out by the Dragonfly hacking group. In the most recent string of attacks, Dragonfly (also referred to by the name Energetic Bear) has targeted multiple US and European energy companies, successfully looting valuable process information in what appears to be the next step in the cyber warfare campaign against critical infrastructure organizations since Stuxnet in 2010. Cybersecurity vendors have scrutinized the campaign and presented an analysis of the malware employed by Dragonfly to steal information from the infected computers.

Yesterday, a short paper I co-authored with Security Matters was released. This short paper revisits the main points of this investigation, including additional details on the specific components of the campaign that exploit industrial control systems. The paper also illustrates why the implementation of a defense-in-depth (DiD) strategy is key to successfully countering cyberthreats like Dragonfly. One of the key aspects of improved DiD involves improving situational awareness within industrial architectures. SilentDefense ICS is one key element in the overall process of gaining insight into your ICS architectures, allowing early detection and rapid mitigation of cyber threats.

A complete copy of the paper is available by clicking here.

I am currently actively engaged in research on the campaign and the malware employed. In the coming weeks, I will also be releasing another paper that will discuss in detail the overall campaign, how the various pieces of the attack are being deployed, and how they are being used against companies relating to industrial automation and control. Stay tuned and follow my Twitter feed for additional release details.

This was cross-posted from the SCADAhacker blog.

Copyright 2010 Respective Author at Infosec Island]]>