Suits and Spooks London: Register Now for Early Bird Rate Wed, 04 Mar 2015 13:40:47 -0600 48 Hours Left for Early Bird Pricing for Suits and Spooks London!

May 6-7, 2015 | techUK

Suits and Spooks London 2015 will be our first 2-day international event, and is jointly produced with techUK, an association of over 850 companies that's funded by the British government.


Speakers will include Mrs. Marina Litvinenko (invited pending confirmation), the widow of Alexander Litvinenko, a Russian FSB officer who was granted asylum in London and was assassinated in 2006.

Other speakers include Kim Zetter who will speak about her book about Stuxnet "Countdown to Zero Day", Jeffrey Carr, who will reveal information on Cyclone, an FSB case study on technology acquisition, and New York Police Department Deputy Commissioner Zach Tumin will speak about the NYPD's use of social media to catch criminals.

The agenda (still in development) and a list of speakers are available online, and registration for the two-day event, which includes continental breakfast, lunch and tea breaks, is available at our low early bird rate until Friday, March 6!

Register Now and Save

Copyright 2010 Respective Author at Infosec Island
Angler Exploit Kit Uses Domain Shadowing to Evade Detection Wed, 04 Mar 2015 12:42:10 -0600 The notorious Angler exploit kit has started leveraging a new technique to ensure that its malicious activities are not interrupted when the domains it uses are blacklisted, researchers at Cisco revealed on Tuesday.

The Angler exploit kit has made numerous headlines over the past few months after cybercriminals integrated Adobe Flash Player zero-days and Internet Explorer exploits. Experts believe Angler is currently one of the most sophisticated and widely used exploit kits.

The new technique spotted by Cisco, dubbed “domain shadowing,” involves compromised domain registration accounts. The attackers hijack these accounts, usually through phishing, and they use them to create subdomains.

Researchers have identified hundreds of compromised domain registration accounts that give cybercriminals access to several thousand domains. On these domains, the attackers have created roughly 10,000 unique subdomains, which they have been using to redirect victims to the exploit kit landing pages, and to host the actual landing pages and exploits.

In the campaign observed by Cisco, which has been running since late December, the cybercrooks quickly rotate both the subdomains and their IP addresses. This makes it more difficult to blacklist the subdomains and IP addresses, and it gives researchers only a short timeframe to analyze the exploits.
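The rotation pattern itself suggests a detection heuristic: a legitimate subdomain tends to live for months, while shadowed subdomains appear and vanish within days. A minimal sketch of that idea, using invented passive-DNS-style records and illustrative thresholds (nothing here comes from Cisco's data):

```python
from collections import defaultdict

# Hypothetical passive-DNS records: (subdomain, parent_domain, first_seen_day, last_seen_day).
RECORDS = [
    ("ad.7gf1.example-hijacked.com", "example-hijacked.com", 1, 1),
    ("x9.k2p0.example-hijacked.com", "example-hijacked.com", 1, 2),
    ("q1.mm3a.example-hijacked.com", "example-hijacked.com", 2, 2),
    ("www.legit-site.org", "legit-site.org", 1, 400),
]

def shadowing_candidates(records, min_subdomains=3, max_lifetime_days=7):
    """Flag parent domains with many short-lived subdomains,
    the pattern Cisco describes for Angler's domain shadowing."""
    by_parent = defaultdict(list)
    for sub, parent, first, last in records:
        by_parent[parent].append(last - first)
    flagged = []
    for parent, lifetimes in by_parent.items():
        short_lived = [d for d in lifetimes if d <= max_lifetime_days]
        if len(short_lived) >= min_subdomains:
            flagged.append(parent)
    return flagged

print(shadowing_candidates(RECORDS))  # ['example-hijacked.com']
```

Running this at scale would of course require a real passive-DNS feed and registrar logs, but the asymmetry holds: rapid rotation evades static blacklists while simultaneously producing an unusual churn signature.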

Hijacking domain registration accounts can be highly lucrative. On one hand, the attackers create a large number of disposable subdomains that they can use in their operations. In this case, Cisco has determined that only a third of the compromised domains have been utilized so far, which means the cybercriminals still have plenty to work with in the future.

Read the rest of this story on 

Is Compliance Bad for Security? Wed, 04 Mar 2015 12:40:19 -0600 By: Edd Hardy

Companies like mine, and consultants like me, have long been instructed and expected to pass on the mantra that the solution to security is compliance with standards and that being in compliance means you are secure.

Having worked in the industry for more than a decade, I know that this is demonstrably not true. My hypothesis is that compliance and security need to be seen as two separate entities. That said… they are linked. Being secure can help with achieving compliance; in fact, compliance can be a by-product of security but security is not automatically a by-product of compliance. You can be compliant without being secure.

Compliance is designed to get organisations to an agreed standard. The idea is to drag everyone up to a minimum level of security, but equally to enable organisations to demonstrate to customers and stakeholders alike that they meet a basic set of security standards. However, as we have seen in recent attacks, hacked companies have told the world they are compliant.

We presume these companies had external advice from reputable and certified consultancies that marked them as compliant. Yet the companies in question have often been hacked in ways that the very compliance standards they were reportedly adhering to were designed to prevent.

Effectively, what is the value in a standard for assuring clients and stakeholders of a company’s security if some people apparently lie?

The consultancy, certification and advisory industry makes a profit by selling compliance. It has to tread a fine line between helping organisations to improve their security measures and placating them. Why does this tension exist? Because it’s not uncommon for organisations to fire consultants they find to be too hard on them.

Consultants are put under pressure to let things slide. If paying your bills is dependent on letting things slide for an organisation, some consultants will do just that. Whilst there are controls to manage this risk, independent audits, reviews etc., none of them are perfect.

I am by no means saying that all consultants are prepared to sign off anyone who pays. However, I absolutely come across organisations that have been marked as compliant, yet during an audit are found to be so far from compliance that it takes my breath away. These organisations have either lied to the consultants (which any decent consultant should uncover through checks), or the consultant has lied. The problem is ongoing.

To get compliant costs money, to remain compliant costs money, and money introduces conflicting motives. Standards bodies also, in some cases, effectively operate as a cartel – for example, under the PCI standard, you need to use ASV (Approved Scanning Vendor) scanning, which restricts you to a limited list of approved vendors, all of whom have paid money to be on the list and proven their scans meet the standard. Those vendors effectively get a large market that can only go to a limited number of suppliers – like shooting fish in a barrel.

Compliance as an annual process is an interesting idea—a year is a reasonable period to rectify issues. However, even with an external, independent (reputable) auditor, an organisation’s actual compliance status cannot be guaranteed at any time apart from the day the organisation successfully passed its last audit.

If you consider the number of changes a large organisation makes on a weekly basis, the fact that it was audited six months ago cannot possibly indicate that today it is definitely still compliant with the standard it was audited against. This issue can be mitigated through conscientious examination and tracking against the standard. Nonetheless, the reality of compliance being a static measure of a dynamic situation highlights the folly of taking a compliance certificate as an indication of current compliance. Despite the pervasiveness of accepting compliance certificates as an accurate indication of an organisation’s compliance status, the practice carries real risk.

The standards of consultants are variable. It is not uncommon to find graduates with a few years’ experience being used as auditors and signing organisations off. As audit is a profitable business for consultancies, recruiting graduates and training them to operate to the standard (which is relatively basic) means we have cheap consultants being sold expensively and operating at the most basic level.

They might well get you through the standard, but the customer often knows more, and the consultant’s objective is to get you signed off and certified and move on to the next job. The more in-depth consultants will want to make absolutely sure you meet the standard and will continue to do so. They will want to understand at a very technical level what you do and how you do it, while the really good consultants will want to make damn sure you exceed the standard.

Most compliance standards also allow for the scope of the compliance to be limited. For example, PCI has the CDE (Cardholder Data Environment) and ISO 27001 has the scope of compliance. Effectively, you are able to delimit the area of compliance. Again, this becomes about demonstrating compliance to third parties, not about securing the full environment.

PCI is an excellent example of limiting the scope of security measures to a narrowly defined area. PCI is only concerned with securing card data; it doesn’t consider anything else (this makes sense when you consider the card industry wrote the standard). The problem is that organisations are using PCI compliance as an indication that they are secure, but it doesn’t mean that; it can’t. PCI compliance only indicates that the card data handled and stored by an organisation is being secured. The rest of the customer data held by an organisation is not accounted for. This represents a disconnect from reality. Whilst under PCI card data should be stored in a more secure CDE, if an organisation’s less important networks can be hacked, then hackers can in time work their way into the CDE.

Standards do not reward over-compliance. Most standards are simply pass or fail – you’re either compliant or you’re not. This means that organisations will aim for the lowest possible standard, i.e. scrape through. Standards also have to be achievable: if they are too difficult, people will not comply, or will lie. So standards are the lowest common denominator. Effectively, you end up with a weak standard. There is an argument to be had that a weak, achievable standard is better than a more aggressive standard that organisations will give up on. However, my opinion is that this waters down standards too far. Perhaps it’s time for grades in compliance.

Compliance is often seen as an attempt to offer a return on investment in security. Effectively, you are getting something for the money you spend on security – a certificate or logo to go on the website. It is understandable that the board wants something in return for the money spent, but what it is getting is compliance with a standard, not necessarily an improvement in security. It can be argued that spending the money on pure security would produce better security, but it’s very hard to demonstrate this benefit to stakeholders. Investors, other organisations and regulators all want tangible value for their investment.

Am I saying compliance is bad? No, absolutely not. What I am saying is that compliance for compliance’s sake does not automatically improve security in a meaningful way. Security pursued to improve security, however, does improve security, and it can have compliance as a framework. It is also absolutely true that compliance can be a framework to hang security off and help people understand it, but on its own, compliance won’t achieve much.

As a security consultant who has worked across most industries, I have seen compliance done for the sake of compliance, as well as compliance done to make things better as part of security. When it’s done on its own, it is an indication that security is not on the radar, and that’s a real worry, even if an organisation becomes compliant.

This was cross-posted from Tripwire's State of Security blog. 

Malware Can Hide in a LOT of Places Wed, 04 Mar 2015 12:21:34 -0600 This article about research showing how malware could be hidden in Blu-Ray disks should serve as a reminder to us all that a lot of those “smart” and “Internet-enabled” devices we are buying can also be a risk to our information. In the past, malware has used digital picture frames, vendor disks and CDs, USB keys, smart “dongles” and a wide variety of other things that can plug into a computer or network as a transmission medium.

As the so-called Internet of Things (IoT) continues to grow in both substance and hype, more and more of these devices will be prevalent across homes and businesses everywhere. On a recent neighbor visit, I enumerated (with permission) more than 30 different computers, phones, tablets, smart TVs and other miscellaneous devices on their home network. This family of 5 has smart radios, smart TVs and even a Wifi-connected set of toys that their kids play with. That’s a LOT of places for malware to hide…

I hope all of us can take a few minutes and just give that some thought. I am sure few of us really have a plan that includes such objects. Most families are lucky if they have a firewall and AV on all of their systems, let alone a plan for “smart devices” and other network gook.

How will you handle this? What plans are you making? Ping us on Twitter (@lbhuston or @microsolved) and let us know your thoughts.

This was cross-posted from the MSI State of Security blog. 

Weaknesses in Air Traffic Control Systems are a Serious Issue for FAA Wed, 04 Mar 2015 09:31:00 -0600 A GAO report to FAA reveals that the systems adopted in the Aviation industry are still affected by weaknesses that could be exploited by hackers.

A report published by the Government Accountability Office (GAO) in January urges the Federal Aviation Administration (FAA) to adopt a formal process to “Address Weaknesses in Air Traffic Control Systems.” The FAA has taken steps to protect its air traffic control systems from threats, including cyber threats, but according to the GAO, the systems adopted in the aviation industry are still affected by weaknesses that could be exploited by hackers.

The weaknesses addressed in the report include prevention, detection and mitigation of unauthorized access to computer resources used in the industry. The weaknesses mentioned in the document are related to controls for protecting system boundaries, user identification and authentication, protection of sensitive data, access controls, auditing and monitoring activity.

The report doesn’t address specific vulnerabilities, but it provides a series of indications about shortcomings in the FAA’s approach to cyber security.

The Government Accountability Office report highlights that the security of the operational national airspace system (NAS) environment depends on the level of security implemented for each individual component; for this reason, it is essential to adopt a formal process to identify and eliminate weaknesses, reducing the risk of cyber attacks.

The GAO criticized the approach to cyber security of the FAA, which has ignored NIST and Federal Information Security Management Act (FISMA) guidelines.

National Institute of Standards and Technology guidance urges agencies to establish and implement a security governance structure, an executive-level risk management function, and a risk management strategy. In compliance with the NIST guidance, the FAA has established a Cyber Security Steering Committee that implements the risk management function, but it lacks a governance structure.

It is very serious that the FAA hasn’t clearly established roles and responsibilities for information security in the NAS.

“FAA has established a Cyber Security Steering Committee to provide an agency-wide risk management function. However, it has not fully established the governance structure and practices to ensure that its information security decisions are aligned with its mission,” states the report.

The failure of the FAA’s approach to cyber security could expose air traffic control operations and end users to serious risks.

According to the GAO, the FAA didn’t consistently control access to NAS systems and resources. It is crucial to implement such controls urgently to avoid serious consequences, including data leakage, system intrusions and data breaches. Specifically, the GAO wants to see enhanced authentication, stronger authorization controls on access to resources, cryptography implementations, and audit and monitoring procedures put in place.

The GAO presses the FAA to enhance its processes for identifying and authenticating users of NAS resources.

“Without adequate access controls, unauthorized users, including intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or for personal gain. In addition, authorized users could intentionally or unintentionally modify or delete data or execute changes that are outside of their authority,” the report continues.

According to the report, the FAA did not always ensure that sensitive data were protected in storage and transmission through the encryption mechanisms called for by NIST guidance.

The GAO report also notes the absence of effective audit and monitoring processes. Auditing and monitoring are essential for analyzing auditable events and detecting anomalous activity.

“Automated mechanisms may be used to integrate audit monitoring, analysis, and reporting into an overall process for investigation of and response to suspicious activities.” reads the report.

In summary, the FAA did not implement an effective information security program, exposing resources, users and the overall NAS environment to cyber threats.

GAO announced the release of a separate report with limited distribution; the document will include 168 recommendations to address 60 findings. These recommendations consist of actions to implement and correct specific information security weaknesses related to access controls and configuration management.


This was cross-posted from the Security Affairs blog.

PlugX Malware Adopts New Tactic in India Attack Campaign Tue, 03 Mar 2015 13:43:39 -0600 The minds behind PlugX have added a new twist to the malware to make it stealthier.

According to Sophos, the malware is now hiding the malicious payload in the Windows registry instead of writing the file to disk. The change underscores the continued development of the malware, which has been linked to a number of advanced persistent threat (APT) campaigns. In recent months, a PlugX variant has also been spotted with a peer-to-peer communication channel.

"Malware hiding components in registry is not a revolutionary idea; we have seen that before," Sophos researcher Gabor Szappanos noted in a new paper on the malware. "Most notably the recent Poweliks Trojan...stored the active script component in the registry. Even some of the APT malware families, like Poison or Frethog, occasionally used the registry as storage for the main payload. There were precursors even within the criminal groups distributing PlugX: they used this method back in 2013 in a couple of cases for storing the Omdork (a.k.a. Sybin) payload. So it was only a question of when the same would happen to the main PlugX backdoor."

The first sample using the tactic was distributed at the end of January. Based on the version dates, the development of these new variants happened earlier that month, he told SecurityWeek.
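Defensively, in-registry payload storage leaves a detectable footprint: registry values are rarely tens of kilobytes of raw binary. A minimal sketch of such a heuristic over a mocked registry snapshot (the key names, values and size threshold below are invented for illustration, not taken from the Sophos analysis):

```python
# Mocked registry snapshot: {key_path: {value_name: raw_bytes}}.
SNAPSHOT = {
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run": {
        "Updater": b"C:\\Program Files\\Updater\\up.exe",
    },
    r"HKCU\Software\BINARY": {
        "Data": b"\x4d\x5a" + b"\x00" * 50000,  # suspiciously large binary blob
    },
}

def suspicious_values(snapshot, size_threshold=4096):
    """Flag registry values holding unusually large binary blobs --
    the kind of in-registry payload storage seen in Poweliks and
    now in PlugX."""
    hits = []
    for key, values in snapshot.items():
        for name, data in values.items():
            if isinstance(data, bytes) and len(data) > size_threshold:
                hits.append((key, name, len(data)))
    return hits

for key, name, size in suspicious_values(SNAPSHOT):
    print(f"{key}\\{name}: {size} bytes")
```

On a live Windows host the snapshot would come from walking the hive with an API such as `winreg`; the point here is only that payloads "hidden" in the registry are still observable data, just in a less commonly inspected place.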

Read the rest of this story on



DARPA’S Memex Project Shines Light on the Dark Web Tue, 03 Mar 2015 12:28:57 -0600 To better combat the increasing use of the Dark Web for illegal purposes, DARPA, the U.S. military’s Defense Advanced Research Projects Agency, is building a search engine known as Memex for law enforcement use.

Google and Yahoo only index five percent of the Internet, the “Surface Web.” The remaining “Deep Web” is unstructured data from sensors and other devices, temporary pages, or content hidden behind password protection, all of which makes it hard for conventional search engines to index.

A smaller portion of the missing 95 percent is the “Dark Web,” sites only accessible through specialized browsers and networks such as The Onion Router (Tor), and it is increasingly being used for sex, drugs, and other illegal activities.

According to Scientific American, Memex currently includes eight open-source, browser-based search, analysis and data-visualization programs as well as back-end server software that perform complex computations and data analysis.

“We’re envisioning a new paradigm for search that would tailor indexed content, search results and interface tools to individual users and specific subject areas, and not the other way around,” said Chris White, DARPA program manager, in a press release.

“By inventing better methods for interacting with and sharing information, we want to improve search for everybody and individualize access to information. Ease of use for non-programmers is essential.”

The resulting Dark Web crawler has been mapping the Tor-accessible and peer-to-peer-only sections of the larger Internet. Its size has surprised many.

The Dark Web had been assumed to be small, only about a thousand pages, yet Memex has already found between 30,000 and 40,000 pages, with around 70,000 pages predicted in total.

“Just finding these pages and seeing what’s on them is a new aspect of search technology,” White told Scientific American.

The goal is to one day connect Memex to regular browser-based software such as Firefox or Chrome that law enforcement agencies and the general public would typically use. This next step would allow law enforcement to access the software from any Internet-connected device, including mobile devices.

As noted in a recent 60 Minutes report, Memex is currently being beta tested by law enforcement to identify sex rings.

Why focus on sex crimes? According to the United Nations Office on Drugs and Crime, there are roughly 2.5 million human trafficking victims worldwide. Therefore, tracking and prosecuting the purveyors is a top law enforcement priority.

Additionally profits from such activities have been used to fund actions against our national security, White told 60 Minutes.

A typical sex ring investigation begins with scant few pieces of information, such as a single e-mail address. In a demonstration, White plugged an example address into Google and received a page of links from the Surface Web, the part of the Internet that Google crawls.

By clicking through each of the search results, an investigator might find an additional piece of information, say, a phone number associated with the single e-mail address.

The idea behind Memex comes from a 1945 article by Vannevar Bush, director of the U.S. Office of Scientific Research and Development (OSRD) during World War II, according to The Atlantic Monthly.

The original Memex was proposed as an analog computer designed to supplement human memory and automatically cross-reference all of the user’s books, records and other information.

The modern Memex analyzes and graphically represents all known sites (including those within the Dark Web) related to the initial email search query, saving investigators valuable time and effort.
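At toy scale, that cross-referencing step looks something like the sketch below, which links e-mail addresses to phone numbers that co-occur on the same pages. All addresses and numbers are invented, and Memex's actual pipeline is of course far more sophisticated than a pair of regular expressions:

```python
import re
from collections import defaultdict

# Mocked ad pages; the contact details are invented for illustration.
PAGES = [
    "Contact ads4u@example.com or call 555-0101",
    "Call 555-0101, new location! ads4u@example.com",
    "Booking: 555-0199, email ads4u@example.com",
]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}-\d{4}\b")

def link_entities(pages):
    """Cross-reference e-mail addresses with phone numbers that
    co-occur on the same pages, building the kind of entity graph
    an investigator would otherwise assemble by hand."""
    graph = defaultdict(set)
    for page in pages:
        phones = PHONE_RE.findall(page)
        for email in EMAIL_RE.findall(page):
            graph[email].update(phones)
    return graph

graph = link_entities(PAGES)
print(sorted(graph["ads4u@example.com"]))  # ['555-0101', '555-0199']
```

One e-mail address seeded the query; the graph now ties it to two phone numbers, each of which can seed the next round of searches. Doing this across tens of thousands of crawled pages is where the time savings come from.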

Scientific American cited one detective in Modesto, CA, who used a companion piece of software from Carnegie Mellon University called Traffic Jam to follow up on a tip about one particular victim from Nebraska.

That investigator was able to identify a sex trafficker traveling with prostitutes across the Midwest and West.

Researchers at Carnegie Mellon are also studying ways to apply computer vision to Memex searches. This will allow law enforcement to identify images with similar elements—such as furniture from the same hotel room—even if the images themselves are not strictly identical.

This was cross-posted from the Dark Matters blog. 

Killed by AI Much? A Rise of Non-deterministic Security! Tue, 03 Mar 2015 11:41:08 -0600 Remember [some] NIDS of the 1990s? Specifically, those that were unable to show the packets that matched the rule triggering the alert! Remember how they were deeply hated by the intrusion detection literati? Security technology that is not transparent and auditable is … what’s the polite term for this? … BAD SHIT!

My research into security analytics and Gartner’s recent forays into so-called “smart machines” research converge in this post. Hilarity ensues!

Today we are – for realz! – on the cusp of seeing some security tools that are based on non-deterministic logic (such as select types of machine learning) and thus are unable to ever explain their decisions to alert or block. Mind you, they cannot explain them not because their designers are sloppy, naïve or unethical, but because the tools are built on methods and algorithms that are inherently unexplainable [well, OK, a note for the data scientist set reading this: the overall logic may be explainable, but each individual decision is not].

For example, if you build a supervised learning system that looks at known benign network traffic and known attack traffic (as training data) and then extracts the dimensions it thinks are relevant for making a call on which is which, the resulting system will NEVER be able to fully explain why a given decision was made. [and, no, I don’t believe such a system would be practical for a host of other reasons, if you have to ask] Sure, it can show you the connection it flagged as “likely bad”, but it cannot explain WHY it flagged it, apart from some vague point like “it was 73% similar to some other bad traffic seen in the past.” Same with binaries: even if you amass the world’s largest collection of known good and known bad binaries, build a classifier, extract features, train it, etc – the resulting system may not explain why it flagged some future binary as bad [BTW, these examples do not match any particular vendors that I know of, and any matches are purely coincidental!]
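To make the point concrete, here is a deliberately tiny nearest-neighbor “classifier” of the kind gestured at above (the feature vectors are invented): it can report that traffic is, say, 92% similar to known-bad traffic, but it has no mechanism for saying why that similarity should matter.

```python
import math

# Toy feature vectors (e.g. bytes/sec, connection duration, port entropy);
# the numbers are invented, not drawn from any vendor's model.
KNOWN = [
    ((0.9, 0.1, 0.8), "bad"),
    ((0.8, 0.2, 0.9), "bad"),
    ((0.1, 0.9, 0.2), "benign"),
    ((0.2, 0.8, 0.1), "benign"),
]

def classify(sample):
    """Return (label, similarity) for the closest known example.
    The score says HOW similar the traffic is to past traffic;
    it never says WHY that similarity matters."""
    def similarity(a, b):
        return 1.0 / (1.0 + math.dist(a, b))
    label, score = max(
        ((lbl, similarity(sample, vec)) for vec, lbl in KNOWN),
        key=lambda t: t[1],
    )
    return label, round(score, 2)

print(classify((0.85, 0.15, 0.85)))  # ('bad', 0.92)
```

The similarity score is a distance statistic, not a rationale – which is exactly the explainability gap being described here.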

My dear security industry peers, are we OK with that? Frankly, I am totally fine with ML-based recommendation engines (what to buy on Amazon? what to watch on Netflix?) – occasionally they are funny, sometimes incorrect, but they are undoubtedly useful. Can the same be said about a non-deterministic security system? Do we want a security guard who shoots people based on random criteria, such as those he “dislikes based on past experiences”, rather than using a white list (let them pass) and a black list (shoot them!)?

One security data scientist reminded me recently that the “fast / correct / explainable – pick any two” wisdom applies to statistical models pretty well, and those very models are now creeping into the domain of security. Note that past heuristics and anomaly detection approaches, however complex, are substantially different from this coming wave of non-linear machine logic. You could still do those old anomaly detection computations “on paper” (however hard the math) and come to the same conclusion as the system – but not with today’s ensemble learning (ha-ha, my candidate model just beat up your champion model!) where the exact decision logic is machine-determined on each occasion, for example.

By the way, my esteemed readers know that all of my work focuses on reality, not marketing pipe dreams and silly media proclamations (remember the idiot who said “Cyber security analytics isn’t particularly challenging from a technical perspective”?). I assure you that this concern is about to become a real concern!

When asked about this issue, designers of security tools that substantially rely on non-deterministic logic offer the following bit of advice: build trust over time by simply using the system. In essence, don’t push the system to any blocking [or “waking people up at 3AM”] mode until you trust it to be correct enough to whatever standard you hold dear. Do you think this is sufficient, in all honesty? Sure, some people will say “yes” – after all, most users of AV tools do not manually inspect all the anti-malware signatures, choosing to trust the vendor. But it is one thing to trust the ultimately-accountable vendor threat research team, and quite another to trust what is essentially a narrow AI.

This was cross-posted from the Gartner blog. 

The Malicious Insider Tue, 03 Mar 2015 08:28:59 -0600 By: Irfahn Khimji 

Financial gain or fraud was the primary driver of the 11,698 instances of insider privilege abuse – defined as any unapproved or malicious use of organization resources – in last year’s Verizon Data Breach Investigations Report.

 Source: 2014 VDBIR 

A malicious insider can be detected in a number of ways, and there are both non-technical and technical indicators of risk. Some non-technical indicators include:

  • The not-so-model employee – This individual has consistently been the first in and last out of the office lately. There’s a lot of work to do, so this individual is pulling some long hours. If there isn’t a project due soon, this could be an indicator that the individual is working on a little extra-curricular malicious work.
  • The Ironman streak – An individual who hasn’t taken a vacation in a long time may not have had the opportunity to share his or her work with others. If they are keeping their work to themselves without collaborating or having others review, there is a chance that their project is of that extra-curricular malicious nature.

Both of these indicators lead back to the same root. The individual is trying to hide their malicious intentions from others, and exploiting the trust gained within the organization for their own malicious intent.

Technical indicators are a little easier to identify. Rules can be set up in your SIEM, DLP, etc. to detect these indicators. Some of the technical indicators include:

  • Increased number of logins varying in remote and local
  • Logging into the network at odd times
  • Change in websites visited from work to personal
  • Increased export of reports and downloads from internal systems
  • Unauthorized cloud storage sites being accessed regularly
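A couple of these indicators reduce to simple log queries. Here is a minimal sketch of an “odd-hours login” rule over a mocked authentication log – the users, timestamps and business-hours window are all invented for illustration:

```python
from datetime import datetime

# Mocked authentication log entries.
LOG = [
    {"user": "alice", "time": "2015-03-02 09:14", "source": "local"},
    {"user": "alice", "time": "2015-03-02 17:45", "source": "local"},
    {"user": "bob",   "time": "2015-03-03 02:31", "source": "remote"},
    {"user": "bob",   "time": "2015-03-04 03:02", "source": "remote"},
]

def odd_hour_logins(log, start=6, end=22):
    """A SIEM-style rule: flag logins outside normal business hours
    (here 06:00-22:00), one of the technical indicators above."""
    flagged = []
    for entry in log:
        hour = datetime.strptime(entry["time"], "%Y-%m-%d %H:%M").hour
        if not (start <= hour < end):
            flagged.append((entry["user"], entry["time"], entry["source"]))
    return flagged

print(odd_hour_logins(LOG))
# [('bob', '2015-03-03 02:31', 'remote'), ('bob', '2015-03-04 03:02', 'remote')]
```

A real deployment would baseline each user’s normal hours rather than hard-code a window, and would correlate this rule with the others (download volume, unauthorized cloud storage) before raising an alert.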

What pushes an insider over the edge? The answer, unfortunately, isn’t a simple one. There could be a variety of factors that contribute to the Recruitment or Tipping Point of an individual, as described in the Insider Threat Kill Chain. This is the point where the insider turns from good to bad.

  • An outsider promise – An outsider, such as a competitor, could provide a financial incentive for an insider to turn rogue. As a reward for siphoning out information, the competitor could promise employment within their company.
  • A life changing event – If the individual has a tendency to keep to themselves, others may not know of a life changing event. This may not directly cause an individual to turn rogue, however, combined with an outsider promise, they may think they have nothing to lose.
  • Two weeks notice – An employee who is about to leave the company may want to take something with them on the way out. This could be customer information, trade secrets, or anything that may be useful in a future role. This is an unethical act and often (if it isn’t, it should be) against company policy.
  • Stagnancy – An employee who was recently disciplined or perhaps passed over for a promotion or raise may be looking for revenge. There is potential for this type of an individual to reach their tipping point and become a malicious insider.

Have you heard the sayings “An employee doesn’t quit a job, they quit their boss” or “A person who feels appreciated will always do more than what is expected”?

There is a lot of truth to these when it comes to reducing the risk of an employee turning into a malicious insider. If an individual feels appreciated and enjoys what they do, they will ensure they do a good job. If an individual enjoys and cares about working for their company, they will ensure they protect the secrets of that company. Not only does an individual’s success become the success of the company, but the success of the company becomes the individual’s success.

From a non-technical perspective there are several precautions that can be taken to detect and prevent insider threats:

  • Consider threats from insiders in risk assessments
  • Ensure background checks are conducted on all new hires
  • Clearly document and enforce policies and controls
  • Conduct security awareness training for all employees
  • Monitor and respond to suspicious behavior
  • Anticipate and manage negative workplace issues
  • Establish clear lines of communication and procedures between human resources, legal and the IT teams

From a technical perspective some of the controls that can be considered are:

  • Implement strict password and account policies
  • Enforce separation of duties, least privilege access and data classification
  • Track the use of privileged accounts
  • Implement system change controls and an approval process
  • Deactivate access following termination or modify access when a role is changed
  • Log, monitor and audit employee network activities

Malicious insiders are definitely a threat that can affect any organization.
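Several of the technical controls above (tracking privileged accounts; logging and monitoring employee activity) can be prototyped with very little code. The sketch below is purely illustrative: the log records and field names are hypothetical, and real monitoring would draw on a SIEM or audit logs rather than an in-memory list.

```python
from datetime import datetime

# Hypothetical audit-log records: (username, timestamp, is_privileged)
events = [
    ("alice", datetime(2015, 3, 2, 14, 30), True),
    ("bob",   datetime(2015, 3, 2, 2, 15),  True),   # 2:15 AM privileged login
    ("carol", datetime(2015, 3, 2, 23, 50), False),
]

def flag_suspicious(events, start_hour=7, end_hour=20):
    """Flag privileged-account activity outside normal business hours."""
    return [
        (user, ts)
        for user, ts, privileged in events
        if privileged and not (start_hour <= ts.hour < end_hour)
    ]

print(flag_suspicious(events))  # only bob's 2:15 AM privileged login is flagged
```

Off-hours privileged activity is only one signal, of course; in practice it would be correlated with the behavioral indicators discussed earlier before anyone raises an alarm.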

If you are in the Vancouver area and are planning on attending BSides Vancouver, Tripwire Security Analyst Ken Westin and I will be speaking about insider threats, including case studies and the ways of detection. Hope to see you there!

This was cross-posted from Tripwire's The State of Security blog. 

Copyright 2010 Respective Author at Infosec Island]]>
PCI DSS 3.0 Updates and Ramifications for Network and Application Security Mon, 02 Mar 2015 12:44:54 -0600 By: Neeraj Khandelwal 

PCI DSS 3.0 is here. Since January 1, 2015, organizations under its purview are required to comply with the updated standard. Many of the changes stem from recent high-profile breaches at organizations that were nominally compliant. In this post we will concern ourselves mainly with the impact 3.0 has on application and network security.

Certain requirements in sections 6 and 11 have an extended deadline of June 30, 2015. The amount of effort needed to meet the updated standard could be substantial and should not be underestimated.

For context, PCI DSS compliance is becoming increasingly important as losses due to global card fraud continue to escalate.

Philosophically, the updates and new requirements stem from the following observations:


  • Vulnerabilities in third party software, as well as in service providers or contractors pose significant risk to the cardholder environment
  • Perimeter breaches are common and securing within the perimeter is as important as the perimeter itself
  • Pentests need to meet industry-standard methodology
  • Attackers are succeeding due to a lack of proper network segmentation and access controls

Below is a section-wise discussion on the relevant changes.

6.3 Secure development guidelines are applicable to internal as well as bespoke software

Bespoke software refers to custom software built in-house or by a third party. You might adhere to a secure development process for your own code, but if you source some of it from outside, and the supplier is not building security into their SDLC, then that code becomes the weakest link. While this requirement makes sense, the challenging part is fixing issues in outsourced code, especially when the contract has ended or the expertise is lost.

6.5 Updated developer training around common coding vulnerabilities, and to understand how sensitive data is handled in memory

The motivation for doing this is easily traced back to all the POS memory scraping attacks which have hogged the news of late (e.g. Target). While this is a great guideline for software being developed, it could be challenging to review and fix the software programs in production.

6.5.10 Broken authentication and session management

Examples include flagging cookies as secure, not exposing session IDs in URLs, and ensuring session IDs are not predictable and time out. The Barracuda Web Application Firewall secures session IDs and cookies by default (using signing or encryption) and also protects against session riding (CSRF) attacks.

Session IDs in URLs can be hard to fix within the code, but are easily secured using URL Encryption with the Barracuda Web Application Firewall.
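To make the 6.5.10 requirements concrete, here is a generic sketch (not the Barracuda implementation, just an illustration of the underlying idea): issue session IDs from a CSPRNG so they are unpredictable, and sign cookie values so tampering is detectable.

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # server-side signing key, kept out of the client

def new_session_id():
    """Unpredictable session ID: 128 bits from the OS CSPRNG."""
    return os.urandom(16).hex()

def sign(value):
    """Append an HMAC so any modification of the cookie is detectable."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def verify(cookie):
    """Return the cookie value if the HMAC checks out, else None."""
    value, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

sid = new_session_id()
cookie = sign(sid)
print(verify(cookie) == sid)      # True  - untampered cookie accepted
print(verify("x" + cookie[1:]))   # None  - altered session ID rejected
```

Timeouts, the Secure/HttpOnly cookie flags, and keeping session IDs out of URLs would be layered on top of this in a real deployment.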

While section 6.6 remains largely unchanged, it adds a minor clause to allow for WAFs in monitoring mode. As Gartner’s Anton Chuvakin points out, this is not good, and we agree. This is probably borne out of a desire to avoid false positives, but this effectively prioritizes convenience over security. As anyone who has dealt with a production WAF would know, a WAF sees a lot of attacks. Manual processing of such alerts can introduce delays that can be easily exploited by advanced attackers. The industry best practice is to deploy the WAF in active protection mode and we recommend this.

11.3 Penetration testing should be based on industry-accepted approaches (for example, NIST SP800-115)

Section 11 deals with scanning and penetration testing to evaluate the efficacy of security controls. According to the Verizon PCI Compliance Report 2014, section 11 remains the least-met requirement: just 13.2% of organizations were compliant with it last year. One reason was that the definition and scope of pentesting had been left open to interpretation, which shoddy pentesting firms exploited by passing off scanner output as the final penetration test report just to tick a check box.

Not so in 3.0. 11.3 provides several updates that are a best practice until June 30, 2015, after which they become a requirement. The first of these ensures that pentesting conforms to an industry standard and is not a check-box formality.

11.3: Pentests should include the CDE perimeter and testing from both inside and outside the network. Application layer pentests must include requirements in 6.5 (e.g. OWASP Top 10).

From an application security perspective, the ramifications are significant. Earlier, you could get away with pentesting only your public-facing applications. Now, the scope of the testing explicitly calls out testing within the perimeter.

While this may sound onerous (especially to SMEs), it does make sense. Perimeter breaches are common. Be it an attacker, insider or a contractor, once they are inside, internal applications that have access to cardholder data are prime targets. Their attack surface needs to be as secure as public-facing applications, if not more.

11.3.1/11.3.2 Internal and External pentests need to be done at least annually, and after any significant infrastructure or application upgrade or modification

So, for example, if this had been applicable in 2014, you could have been redoing pentests after each of the Heartbleed, Shellshock, Winshock and POODLE vulnerabilities, which involved substantial patching activity, apart from other significant changes to your own applications.

11.3.4: Includes testing to validate any segmentation and scope-reduction Controls

In the past, attacks on retailers have succeeded because networks weren't segmented properly; for example, attackers were able to move from an HVAC network to the POS network and exfiltrate data out to the Internet.

Strong segmentation, strict host access controls and application-aware network controls are logical guidance. If you have not explored the Barracuda NG Firewall features, now is the time to do so.

SMEs under the Gun

Large organizations have dedicated information and application security teams and to some extent they have always been pentesting within the perimeter, often adhering to an industry-specific pentesting methodology such as Penetration Testing Execution Standard. Network segmentation and host access control is also a common architecture found in enterprises. So they will get to 3.0 compliance faster, despite being larger.

However, the brunt of this will be on the SMEs, with their limited budgets and information security staff. They will need help in identifying the CDE, assessing the scope, possibly hiring new QSAs and updating their security solutions and posture across the board – including within the perimeter. Re-architecting their network environment to adhere to the new guidelines also entails significant investments.

The worst thing an SME can do is underestimate the scope and challenge of these new directives. They make sense from a security perspective, but will require significant investments of time and resources to put in place.

SMEs would also do well to be wary of traditional large security providers that do not focus on their niche needs and show inconsistent interest in their markets. Instead, they should seek out vendors that provide differentiated, turnkey security solutions architected from the ground up with the midmarket in mind: solutions that are cost-effective, easy to use and administer without an army of professional support personnel, and that provide a quick time to value.

This was cross-posted from the Barracuda Networks blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Is Visual Hacking Undermining Your Enterprise Security? Mon, 02 Mar 2015 11:01:26 -0600 While companies are spending millions to secure critical networks, little is being done to prevent the open display of sensitive and proprietary information.

A new study conducted by the Ponemon Institute reveals how easy it is to undermine enterprise security with low-tech visual hacking, with nearly nine out of every ten attempts (88 percent) found to be successful.

The study employed a security expert specializing in penetration testing, who visited the offices of eight enterprises while posing as a temporary worker. He attempted to visually hack confidential information simply by strolling through the office, looking for sensitive data in plain sight on desks and computer screens.

The operative was able to take possession of business documents flagged as confidential, and also used a mobile device to photograph confidential information displayed on computer screens, all in the course of carrying out his prescribed duties.

The study shows that sensitive data could be compromised by hackers posing as service personnel, vendors, cleaning and maintenance staff – anyone who has access to areas where confidential information is present.

“In today’s world of spear phishing, it is important for data security professionals not to ignore low-tech threats, such as visual hacking. A hacker often only needs one piece of valuable information to unlock a large-scale data breach,” said Larry Ponemon.

“This study exposes both how simple it is for a hacker to obtain sensitive data using only visual means, as well as employee carelessness with company information and lack of awareness to data security threats.”

Key findings in the study show that:

  • Visual hacking happens quickly: Companies can be visually hacked in a matter of minutes, with 45 percent occurring in less than 15 minutes and 63 percent of visual hacks occurring in less than a half hour.
  • Visual hacking generally goes unnoticed: In 70 percent of incidences, a visual hacker was not stopped by employees – even when using a cell phone to take a picture of data displayed on a screen. In situations when a visual hacker was stopped by an employee, the hacker was still able to obtain an average of 2.8 pieces of company information (compared to 4.3 when not stopped).
  • Multiple pieces of sensitive information were visually hacked. During the study, an average of five pieces of information were visually hacked per trial, including employee contact lists (63 percent), customer information (42 percent), corporate financials (37 percent), employee access and login credentials (37 percent) and information about employees (37 percent).
  • Unprotected devices pose the greatest opportunity for sensitive information to be visually hacked. 53 percent of information deemed sensitive (access or login credentials, confidential or classified documents, financial, accounting or budget information, or attorney-client privileged documents) was gleaned by the visual hacker from computer screens, more than from vacant desks (29 percent), printer bins (9 percent), copiers (6 percent) and fax machines (3 percent) combined.
  • Open floor plans pose a greater threat to visual privacy. In experimental trials completed in companies with an open-office layout, an average of 4.4 information types were visually hacked, while those conducted in a traditional office layout saw 3.0 information types visually hacked.
  • Unregulated functional areas were the most likely to experience a visual hack. On average, customer service roles consistently saw the highest number of visual hacks at 6.0, with communications at 5.6 and sales force management 5.2. Regulated functional areas like accounting & finance saw lower averages at 1.9, and legal at 1.0 experienced the least.

Conversely, the study also showed that companies can diminish the threat of visual hacking greatly by putting controls in place, such as implementing clean desk policies, document shredding, and through awareness training.

The research also showed that in 50 percent of trials at companies requiring privacy filters on computer screens, three or fewer information types were visually hacked, while 43 percent of companies that did not use privacy filters saw four or more information types visually hacked.

“Visual privacy is a security issue that is often invisible to senior management, which is why it often goes unaddressed,” says Mari Frank of the Visual Privacy Advisory Council.

“This study helps to emphasize the importance of implementing a visual privacy policy, educating employees and contractors about how to be responsible with sensitive data they are handling, as well as equipping high-risk employees with the proper tools, such as privacy filters, to protect information as it is displayed.”

More information on the study is available here.

This was cross-posted from the Dark Matters blog. 

Copyright 2010 Respective Author at Infosec Island]]>
What is a Level 3 Merchant? Mon, 02 Mar 2015 10:48:48 -0600 This consistently keeps coming up as an issue because of the confusing definitions on the Visa, MasterCard and Discover Web sites.

  • Visa: “Merchants processing 20,000 to 1 million Visa e-commerce transactions annually”
  • MasterCard: “Any merchant with more than 20,000 combined MasterCard and Maestro e-commerce transactions annually but less than or equal to one million total combined MasterCard and Maestro e-commerce transactions annually”
  • Discover: “All merchants processing between 20,000 and 1 million card-not-present only transactions annually on the Discover network”
In my opinion, the reason for the confusion is that these definitions mention only eCommerce or card-not-present (CNP) payment transactions and no other payment channels. As a result, people think that other payment channels do not count for Level 3 merchants, or that Level 3 merchants only do business through eCommerce or CNP payment transactions.

I have even encountered merchants arguing that they are exempt from PCI compliance because, while their organization does more than 20,000 eCommerce or CNP payment transactions, they also process payments through other channels and, in total, have fewer than 1 million payment transactions. Some people will argue any point to avoid PCI compliance.

So if this is not true, exactly what is a Level 3 merchant?

Based on training and from discussions with the card brands over the years, Level 3 merchants have 20,000 or more eCommerce or CNP payment transactions, but cannot exceed 999,999 payment transactions from all payment channels combined.

As examples:

  • A pure eCommerce merchant with no other payment channels can conduct up to 999,999 payment transactions through their Web site and be considered a Level 3 merchant.
  • A merchant with 20,000 or more eCommerce or CNP payment transactions that also has one or more other payment channels (brick and mortar, mail order, telephone order, etc.) cannot exceed 999,999 payment transactions across all of those channels and still be considered a Level 3 merchant.
If an organization exceeds a total of 999,999 payment transactions from all their payment channels they are, by definition, classified as a Level 2 merchant. If the merchant has fewer than 20,000 eCommerce or CNP payment transactions, then they would be classified as a Level 4 merchant.
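The classification logic above reduces to a few lines. This is a simplified sketch: it collapses Levels 1 and 2 together, since Level 1 criteria (such as transaction volumes over 6 million, or a prior breach) are outside the scope of this post.

```python
def merchant_level(ecommerce_cnp_txns, total_txns):
    """Classify a merchant per the (simplified) definitions above.

    ecommerce_cnp_txns: annual eCommerce/CNP transactions
    total_txns: annual transactions across ALL payment channels
    """
    if total_txns >= 1_000_000:
        return 2   # exceeds 999,999 across all channels combined
    if ecommerce_cnp_txns >= 20_000:
        return 3   # 20,000+ eCommerce/CNP, under 1M total
    return 4       # fewer than 20,000 eCommerce/CNP transactions

assert merchant_level(999_999, 999_999) == 3    # pure eCommerce merchant
assert merchant_level(25_000, 1_200_000) == 2   # total exceeds 999,999
assert merchant_level(5_000, 500_000) == 4      # under 20,000 CNP
```

The three assertions mirror the worked examples in the text.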

Hopefully we all now understand the definition of a Level 3 merchant.

This was cross-posted from the PCI Guru blog. 

Copyright 2010 Respective Author at Infosec Island]]>
SIEM/DLP Add-on Brain? Mon, 02 Mar 2015 10:37:00 -0600 Initially I wanted to call this post “SIEM has no brains”, but then questioned such harshness towards the technology I’ve been continuously loving for 13 years :-) In any case, my long-time readers may recall this post called “Pathetic Analytics Epiphany!” (from 5 years ago) [and this one from 8] where I whine incessantly about the lack of actual analytic capability in our log analysis tools such as SIEM and log management (“search and rules? that’s it?! WTH!”). Well, I didn’t just whine, I tried to do something about it.

Now in 2015, the situation has started to change … but sadly not much in SIEM tools themselves. The good news is that we now have a decent number of vendors offering, essentially, an add-on brain for your SIEM. Some can also add a brain to your DLP, since it turned out that DLP is pretty brainless as well…

These new vendors (whether they are classified by us as UBA or not) essentially focus on 2 problems:

  • Reduce/refine/prioritize alerts coming from other tools (SIEM, DLP) by applying algorithms to the SIEM output, i.e. the alert flow (e.g. this alert type is rare, this is not common for this source, this has never triggered at night, etc.)
  • Detect better and/or detect new, difficult threats by applying algorithms to the SIEM input and/or fusing the SIEM data with other data
On the detection side, they may be effective with things like:

  • One example – many people’s favorite! – is finding compromised user accounts. You can read about it here to compare “the SIEM way” and “the analytic way” (the UBA way, in this case)
  • Another is malicious domain detection (DGA-cooked domains in particular) – sure, you can rely on threat intelligence feeds of “bad domains,” or you can ML the sucker like these folks do here (sure, “the known bad way” is easier, but guess how helpful it would be for finding freshly created bad domains?)
  • Exfiltration detection via interesting channels (DNS for exfil, anybody?) has also been on the target list for these “add-on brains”; after all, humans tend to burn out after looking at outbound firewall logs non-stop for a week or two :–)
[note that the above 3 items are devilishly hard to do with just SIEM alone – and impossible with a SIEM NOT crewed by a skilled and motivated SIEM team!]
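For a flavor of what "ML the sucker" means for DGA domains, here is a deliberately crude toy heuristic: algorithmically generated labels tend to have near-uniform character distributions, so their Shannon entropy is high. Real products use far richer features (n-gram frequencies, label length, registration age, etc.); the threshold below is an arbitrary illustration, not a tuned value.

```python
import math
from collections import Counter

def entropy(label):
    """Shannon entropy of a domain label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    """Crude DGA heuristic: flag labels with near-uniform character mix."""
    label = domain.split(".")[0]
    return entropy(label) > threshold

print(looks_generated("google.com"))            # False - repetitive, low entropy
print(looks_generated("xjq3kd9fmz2lwpa7.com"))  # True  - 16 distinct characters
```

This is exactly the kind of per-domain scoring that is painful to express as a SIEM correlation rule but trivial for an add-on analytics layer.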

So, why isn’t this in all SIEM tools?! [A short answer is: beats me …]

Longer answer: There are political and economic reasons, but there are also performance engineering reasons. Remember the old days of SIEM when a SQL query hitting the database to run a report “Top User Logins by Count For Last 7 Days” took 3 hours? [“Stop complaining, it's Oracle, man!” was the typical response.] Now try something more 2015-style, such as “match a set of 3,000 threat indicators [such as IPs or domains] vs historical log data over the last 60 days” (we call it “TI retro-matching” and it is useful for a long list of reasons) – BOOM! down goes your SIEM. And this was trivial matching – not analytics, really. A lot of SIEM deployments are spec’d out to run close to maximum performance (more useful discussion here); you can think of this as a sort of SIEM coffin corner: go faster – airframe breaks; go slower – become a big drag on the budget and crash. If you want to use your SIEM for any unsupervised learning, as I did back in the day, you will need to dramatically increase hardware – or wait a looooooooooooooooooooong time for results.
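To be clear, TI retro-matching is conceptually trivial; the pain is purely scale. A toy version (all indicator and log values below are hypothetical) is just a set-membership scan:

```python
# Hypothetical known-bad indicators (IPs, domains) from a TI feed
indicators = {"evil.example.net", "203.0.113.9", "bad.example.org"}

# Hypothetical historical connection logs
historical_logs = [
    {"ts": "2015-01-12T03:14:00", "src": "10.0.0.5", "dst": "203.0.113.9"},
    {"ts": "2015-02-02T11:02:00", "src": "10.0.0.8", "dst": "198.51.100.1"},
    {"ts": "2015-02-20T22:40:00", "src": "10.0.0.5", "dst": "evil.example.net"},
]

def retro_match(logs, indicators):
    """Return historical events whose destination matches a known indicator."""
    return [event for event in logs if event["dst"] in indicators]

hits = retro_match(historical_logs, indicators)
print(len(hits))  # 2 past contacts with known-bad infrastructure
```

The set makes each lookup O(1); the reason the same query flattens a SIEM is not the algorithm but scanning 60 days of raw events on hardware already running at its limit.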

Finally, and to be fair, some SIEM vendors (very few!) are starting to think about this, but – seriously, guys – you could have totally owned it 5 years ago! BTW, I’ve been asking many of those emerging analytics-focused vendors (whether UBA or not) “why won’t the SIEM vendors crush you?” and have received many exciting answers …

This was cross-posted from the Gartner blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Prohibiting RC4 Cipher Suites Thu, 26 Feb 2015 15:16:44 -0600 By: Tyler Reguly

If you’ve been following the drafts of this RFC, then nothing here will surprise you. The first draft was published on July 21, 2014, and, a short seven months later, RFC 7465 has been published. It’s a great idea for an RFC that I’d like to see used more frequently, but more on that in a moment.

If you’re unfamiliar with the term RFC, it stands for Request for Comments, and the RFC collection (published by the IETF) describes specifications and protocols related to networking and the Internet. HTTP and SMTP, for example, both have RFCs that describe how they work.

Think of RFCs as a blueprint for implementation that anyone can follow. I’ve long had a fascination with RFCs and as long as I’ve had an iPad, I’ve carried a full set of RFCs for casual reading; I’d love to rebrand RFC to ‘Really Freaking Cool.’

Now that we’ve covered the background, let’s focus on RFC 7465, entitled ‘Prohibiting RC4 Cipher Suites.’ The document’s abstract very clearly spells out its intention:

This document requires that Transport Layer Security (TLS) clients and servers never negotiate the use of RC4 cipher suites when they establish connections.
The emphasis is mine, but it’s important to note the use of the word ‘never’. In the official body of the RFC, this language becomes ‘MUST NOT.’

The short version of the document is that to be RFC 7465 compliant, neither clients nor servers may negotiate RC4. I’ve long believed that RC4 was dead based on past research, and multiple vendors have already declared it dead. Seeing a standards body willing to issue a document that reflects this opinion is awesome.

So, if you haven’t yet, why not sit down and disable RC4 today?
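In many TLS stacks this is a one-line cipher-string change. As an illustration, here it is in Python's ssl module (equivalent settings exist for web servers, e.g. the OpenSSL cipher string in Apache's or nginx's configuration):

```python
import ssl

# Build a TLS context and explicitly exclude RC4 suites,
# in line with RFC 7465's MUST NOT.
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:!RC4")

enabled = [cipher["name"] for cipher in ctx.get_ciphers()]
print(any("RC4" in name for name in enabled))  # False - no RC4 suites remain
```

The `!RC4` element removes every RC4 suite from the negotiable set, so a compliant peer can never select one.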

This was cross-posted from Tripwire's The State of Security blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Control System Cyber Security and the Insurance Industry Thu, 26 Feb 2015 11:09:59 -0600 I have felt that the insurance companies can be a major player in driving the need to adequately secure control systems.  Consequently, when I was asked the following questions by Advisen ( for their Cyber Risk Network newsletter, I felt that could be a valuable venue to get the message out to people who may not be familiar with control system cyber security. Advisen reaches more than 150,000 commercial insurance and risk professionals at 8,000 organizations worldwide. I will be on a panel at the Advisen Risk Conference in San Francisco March 3rd.

Response to Advisen:

Industrial control systems are the computing systems that monitor and control physical processes in electric substations, power plants of all types, refineries, pipelines, water and waste water systems, chemical plants, manufacturing facilities, transportation, building control systems, and even medical systems.

Question: What do you see as the greatest cyber risks industrial companies face today?

In my opinion, the most important risk that most companies currently face is the lack of adequate understanding of, and commitment to address, control system cyber security by senior management. Control system cyber security is about cyber securing physical processes to “keep the lights on and water flowing,” not identity theft or industrial espionage. Without senior management commitment, it will be very difficult to adequately secure control systems. Moreover, securing control systems is different than securing business IT systems. A major threat to the reliability and safety of control systems is IT organizations using inappropriate technologies, policies, and testing to “secure” control systems. Another issue that impacts the cyber security of control systems is the compliance mindset. The North American electric and U.S. nuclear industries are focused on compliance (checking the box) rather than adequately securing the electric systems and nuclear plants against many known cyber threats.

Question: What are the emerging risk issues to industrial control systems?

In my opinion, there are several levels of risk. The first is unintentional cyber incidents. Unintentional cyber incidents have caused very significant impacts including destruction of large equipment, environmental discharges, and even deaths. Because unintentional cyber incidents aren’t malicious targeted attacks, the impacts are generally localized to the specific facility. With the movement to the “Internet of Things” and the installation of cyber-sensitive technologies, there may be more and more unintentional control system cyber incidents that are not localized.

Malicious, though untargeted cyber attacks include “viruses and worms” that can affect control systems when control systems are connected to corporate networks, the Internet, or third party networks. This is where the concept of the “Internet of Things” can be such a cyber threat enabler.

In my opinion, the most frightening risks are nation-states such as Iran or North Korea deciding to cyber attack our infrastructures - and they have the capability to do that.

Question: Is the insurance industry doing enough to adequately address control system cyber risks?

In my opinion, the answer is no. I have found that securing control systems often is not well understood by many insurance companies. There are two aspects of securing control systems that can affect insurance companies. If understood, insuring secure control systems can be a new revenue stream (the positive). On the other hand, insuring companies with inadequately secured control systems can lead to major insurance company liabilities on the order of hundreds of millions of dollars (the negative). Accepting control system cyber compliance rather than actual security will not lessen the potential liabilities to the insurance industry.

Question: What keeps you awake at night?

What keeps me awake is the general lack of understanding about control system cyber security by decision makers and the consequent inappropriate decisions made that can affect the cyber security and reliability of control systems. Much of our critical industrial infrastructures are effectively open to hackers. The damage can be devastating to our country and economy.

Question: In your opinion, what is the single most important control system cyber risk development in the past 12 months?

In my opinion, the single most important development is hackers and nation-states realizing that our critical infrastructures can be cyber targets, along with the accompanying lack of appropriate attention by senior management to these threats.

This was cross-posted from the Unfettered blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Open Haus: Wi-Fi and Seamless Roaming for Mobile Workers Thu, 26 Feb 2015 10:18:11 -0600 When you hear the term “mobile worker,” what image comes to mind? Is it the employee who is constantly taking his laptop into different corners of the office, working from their desk, conference rooms and couches? Or is it the “road warrior” executive who works from airports, trains, cafés, hotels and anywhere else she can find a Wi-Fi or 3G/4G connection?

Whatever you picture, the fact is that mobility is now a key expectation of many employees. Those who work from laptops, tablets and other mobile devices need to be certain that the technology they depend on is able to follow them from place to place, without any service interruption.

As an example, remote workers often use a VPN to securely connect to their corporate network, no matter their location. But what happens if their network connection changes? Imagine an employee who works on her laptop while commuting by train but constantly loses her Wi-Fi connection as she travels. Ordinarily, every time the network connection switched between Wi-Fi and 4G, she would need to log back into her VPN. She would quickly get frustrated and be far less productive.

To avoid this scenario and others that impede mobile working, NCP engineering developed two key additions to its Remote Access VPN solution – Wi-Fi roaming and seamless roaming. With these features, the VPN tunnel connection is constantly maintained without disrupting the user’s computing session, even if their network connection changes.

Here’s how these two features enhance NCP engineering’s Remote Access VPN solution:

Wi-Fi Roaming

Say a remote worker moves within the range of several wireless access points using the same SSID. Without Wi-Fi roaming, the user would have to set up a new data connection and log into the gateway, again and again, to maintain the VPN connection.

But with NCP’s VPN Clients managing the network connection, the system roams access points within a company network as the user changes locations and IP addresses, and automatically chooses the strongest access point available. The applications that communicate via a tunnel do not even “notice” the access point roaming process, allowing for continuous, uninterrupted, secure remote access.

Seamless Roaming

Seamless roaming is the logical advancement from Wi-Fi roaming, in that it facilitates transitions when a user moves between different networks, not just within the same Wi-Fi area. With seamless roaming, the VPN client automatically selects the optimal connection medium, and then as devices move between Ethernet LAN, Wireless LAN (Wi-Fi) or cellular connections (3G and 4G), the user does not have to do any additional work to maintain the VPN connection.

This feature enables the user’s device to remain “always on,” without any disruption to the applications of the mobile telework station. It also enables the client to automatically change the communication medium during a session and to dynamically redirect the VPN tunnel, without the user noticing. This will be a very important feature in connected cars as they become more prominent.

Technology to Support Mobility

As workplaces become increasingly flexible and dispersed, technology to minimize interruptions to productivity must become more agile than ever before. With Wi-Fi and seamless roaming integrated into a company’s remote access solution, workers won’t have to choose between mobility and productivity.

Open Haus is a monthly series that explores the key features of NCP’s Remote Access VPN.

This was cross-posted from the VPN Haus blog. 

Copyright 2010 Respective Author at Infosec Island]]>