New Zero-day in Microsoft OLE Being Exploited in Targeted Attacks Wed, 22 Oct 2014 13:28:25 -0500 Security experts at Google and McAfee have discovered a new zero-day vulnerability in Microsoft OLE that is being exploited in targeted attacks.

Early this week, Microsoft issued security advisory 3010060 to warn its customers of a new zero-day vulnerability that affects all supported versions of the Windows OS except Windows Server 2003.

The OLE Packager is the component affected by the zero-day, which was discovered by researchers at McAfee and Google. Curiously, the same component was patched just this month in MS14-060, but Microsoft, in response to this latest flaw, has released a Fix It package for PowerPoint and encouraged the use of EMET 5.0.

The most concerning thing related to the Microsoft zero-day flaw is that it is already being exploited by threat actors in targeted attacks.

“The vulnerability could allow remote code execution if a user opens a specially crafted Microsoft Office file that contains an OLE object. An attacker who successfully exploited the vulnerability could gain the same user rights as the current user,” the advisory explained. “At this time, we are aware of limited, targeted attacks that attempt to exploit the vulnerability through Microsoft PowerPoint,” it continued, confirming reports that bad actors are already exploiting the zero-day in limited cases.

OLE (Object Linking and Embedding) is a proprietary technology developed by Microsoft that allows embedding and linking to documents and other objects. As explained by the experts at Microsoft, the vulnerability in Microsoft OLE, tracked as CVE-2014-6352, could allow remote code execution. This is possible if a user opens a specially crafted Microsoft Office file that contains an OLE object.

The file could be sent via email to the victims in a classic spear-phishing attack or the attacker could serve it through a compromised website in a classic watering hole attack.

The security advisory reports the following mitigation factors:

  • In observed attacks, User Account Control (UAC) displays a consent prompt or an elevation prompt, depending on the privileges of the current user, before a file containing the exploit is executed. UAC is enabled by default on Windows Vista and newer releases of Microsoft Windows.
  • An attacker who successfully exploited this vulnerability could gain the same user rights as the current user. Customers whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
  • In a web-based attack scenario, an attacker could host a website that contains a webpage that contains a specially crafted Office file that is used to attempt to exploit this vulnerability. In all cases, however, an attacker would have no way to force users to visit these websites. Instead, an attacker would have to convince users to visit the website, typically by getting them to click a link in an email message or Instant Messenger message that takes users to the attacker’s website.
  • Files from the Internet and from other potentially unsafe locations can contain viruses, worms, or other kinds of malware that can harm your computer. To help protect your computer, files from these potentially unsafe locations are opened in Protected View. By using Protected View, you can read a file and see its contents while reducing the risks. Protected View is enabled by default.

The principal problem is that although exploitation of the flaw triggers a warning, users often ignore it. The issue appears especially serious in corporate environments, where executives and remote users are often granted administrative rights on their systems.

This was cross-posted from the Security Affairs blog.

Copyright 2010 Respective Author at Infosec Island
6 Actions Businesses Should Take During Cyber Security Awareness Month Wed, 22 Oct 2014 11:24:04 -0500 October is National Cyber Security Awareness Month. It would seem the breaches announced virtually every day of this  month so far were orchestrated to highlight the need for organizations to beef up their information security efforts and improve their controls.

Sadly, instead, cyber incidents seem to have become de rigueur these days. Consumers are getting fed up, and government agencies are proposing more laws. The tide is turning, and soon organizations will be held accountable for more effectively protecting their systems and information, or they will likely face much steeper fines and penalties than ever before. So, now’s the time to take action! Here are six actions you can take this month to start improving your organization’s information security program and associated efforts.

1.    Review your authentication methods.

When was the last time you updated the way your legacy and older systems and applications authenticate user accounts? Do you still use just a password that isn’t required to be strong? Do you engineer new systems and applications using these same weak methods? Now is the time to improve your authentication methods.

TO DO: Implement two-step authentication wherever possible and require strong passwords.
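Two-step authentication usually layers a time-based one-time password (TOTP) on top of the regular password. Just to illustrate the mechanics (this is a sketch of the standard RFC 6238 algorithm, not a production implementation), the second factor can be generated in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" at T=59 yields "287082"
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```

Because the code changes every 30 seconds and depends on a shared secret held on the user’s device, a stolen password alone is no longer enough to log in.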

2. Apply security updates to all your systems and applications.

Are you up-to-date with all your security patches and systems updates? Cyber crooks look for systems that have old vulnerabilities. Plus, those vulnerabilities can allow bad things to happen as a result of mistakes and interactions with other applications and systems. You are a digital sitting duck if you don’t stay on top of security updates. Case in point: Have you updated your OpenSSL to remove the Heartbleed vulnerability? Do it now!

TO DO: Update your systems to the most recent version and apply all appropriate security patches available.
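For the Heartbleed example specifically, the affected OpenSSL releases were 1.0.1 through 1.0.1f, fixed in 1.0.1g. A simplified version-string check like the following can illustrate the idea of auditing an inventory against a known-bad range (a real audit should rely on the package manager, since distros often backport fixes without changing the version number):

```python
VULNERABLE_SUFFIXES = {"", "a", "b", "c", "d", "e", "f"}  # 1.0.1 through 1.0.1f

def heartbleed_vulnerable(version: str) -> bool:
    """Naive check: was this upstream OpenSSL release in the Heartbleed range?
    First-pass filter only; backported fixes are not detected this way."""
    version = version.strip()
    if not version.startswith("1.0.1"):
        return False                     # only the 1.0.1 line shipped the bug
    suffix = version[len("1.0.1"):]
    return suffix in VULNERABLE_SUFFIXES

for v in ["1.0.0t", "1.0.1f", "1.0.1g", "1.0.2"]:
    print(v, "VULNERABLE" if heartbleed_vulnerable(v) else "ok")
```

The same pattern, a list of installed versions checked against published advisories, is the core of any patch-currency review.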

3.     Give your personnel training and awareness communications.

People are not born with an innate sense of how to secure information. Your information security and privacy policies, and the related work activity information employees need, are not transmitted to them through osmosis. Too bad the majority of business leaders seem not to realize this, given the appalling lack of good information security and privacy training and awareness communications within organizations. You must provide effective training, as well as ongoing awareness communications, so employees know how to incorporate effective information protection practices into their daily job activities. Just consider this: one recent study found that 57% of privacy breaches are caused by insiders, most of whom simply made mistakes or did things without knowing they would put information at risk. These breaches could have been prevented with good education.

TO DO: Give good and effective information security and privacy training to ALL your employees, and send them ongoing reminders and other types of awareness communications.

4. Do a security and privacy audit.

Do you just assume that all your privacy and security controls are enough and working just fine? Do you assume that all security and privacy risks have been appropriately mitigated? If your answer is yes, is this because you have confidence following a recent (as in within the past few months) risk assessment? If you’re making these assumptions based upon old risk assessments, or through blind trust in the absence of risk assessments, then you are putting your organization at great risk of becoming the next cybersecurity breach in the headlines. You need to do risk assessments regularly: the more time that passes after a risk assessment, the more the business has changed, and the more new risks may have been created.

TO DO: Do regular information security and privacy risk assessments and mitigate the discovered risks appropriately.

5.    Make your security and privacy practices transparent.

Do you have a privacy notice and information security policy posted on your website? Does it accurately reflect the current practices of your organization? Do your employees know what they say? Do their work activities support what the statements and policies say?

TO DO: Create a clear, accurate and easy-to-understand website privacy practices statement and information security policy. Keep them updated to reflect changes in your organization’s practices.

6.    Find out what your contracted third parties are doing.

When you entrust contracted third parties with access to your data and other information, and to the associated systems and physical locations, you retain a level of responsibility for the actions of those third parties. Do they have an effective information security and privacy program in place? You need to vet your third parties and maintain a level of oversight over their security and privacy practices. If they have a privacy breach or security incident, you will ultimately be held responsible in some manner.

TO DO: Ask your third parties to provide the results of a recent risk assessment; if they don’t have a recent report available, start with a high-level assessment to get something going and obtain quick results. Then establish ongoing oversight of your third parties’ information security and privacy practices.

Bottom line for organizations of all sizes…

These six actions are just the start of improving, or building, your information security and privacy program into one that is effective, comprehensive and up-to-date. And certainly every organization, of every size, in every location, in every industry, needs to have an effective, comprehensive information security and privacy program in place. Doing the six actions listed above will help you to see where you need to make improvements. Every month should really be Cyber Security Awareness Month for all organizations.

This was cross-posted from the Privacy Professor blog.

Mana Tutorial: The Intelligent Rogue Wi-Fi Router Tue, 21 Oct 2014 11:59:41 -0500 “Mana,” by Dominic White (singe) and Ian de Villiers at SensePost, is an amazing full-featured evil access point that does, well, just about everything. Just install and run it and you will in essence receive Wi-Fi credentials, or “Mana,” from heaven!

Here is a link to the creator’s Defcon 22 presentation:

Not sure where to start with this one. Like other rogue Wi-Fi AP programs Mana creates a rogue AP device, but Mana does so much more.

It listens for computers and mobile devices beaconing for their preferred Wi-Fi networks, and then it can impersonate those networks.

Once someone connects to the rogue device, Mana automatically runs SSLstrip to downgrade secure communications to plain HTTP requests, can bypass or redirect HSTS, allows you to perform man-in-the-middle (MitM) attacks, cracks Wi-Fi passwords, grabs cookies and lets you impersonate sessions with Firelamb.

But that is not all; it can also impersonate a captive portal and simulate internet access in places where there is no access.

Mana is very effective and, well, pretty scary!

Before we get started: for best success, use Kali Linux v1.0.8.

And as always, this article is for educational purposes only, never try to intercept someone else’s wireless communications. Doing so is illegal in most places and you could end up in jail.

Mana Tutorial

1. Download and unzip Mana from
2. Run the install script.

Mana will then install libraries and other dependencies to work properly.

Once completed the install places the Mana program in the /usr/share/mana-toolkit directory, config files in /etc/mana-toolkit, and log files and captured creds in /var/lib/mana-toolkit.

3. Open the main config file /etc/mana-toolkit/hostapd-karma.conf

Here you can set several of the options, including the default router SSID, which is “Internet” by default. Something like “Public Wi-Fi” may be more interesting. The other main setting here is “karma_loud”, which controls whether Mana impersonates every AP it detects or not.
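For reference, the relevant part of hostapd-karma.conf looks roughly like the following. (The exact key names can differ between mana-toolkit versions, so treat this excerpt as illustrative rather than authoritative.)

```
interface=wlan0     # the USB Wi-Fi adapter Mana should use
ssid=Internet       # default SSID broadcast by the rogue AP
channel=6
karma_loud=1        # 1 = impersonate every probed-for AP, 0 = be more selective
```

Set the SSID and karma_loud to taste before running any of the attack scripts.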

Lastly, all we need to do is run one of Mana’s program scripts, located in /usr/share/mana-toolkit/run-mana. The scripts are:


Mana Scripts

For this tutorial let’s just run Mana’s main “full” attack script.

4. Attach your USB Wi-Fi card (TL-WN722N works great).
5. Type “iwconfig” to be sure Kali sees it.


6. Type “./” to start Mana.

Mana then starts the evil AP, SSLstrip and all the other needed tools and begins listening for traffic:

Mana running

Once someone connects, Mana will display and store any creds and cookies detected as the victim surfs the web.

7. When done, press “Enter” to stop Mana.

To check what you have captured, run Firelamb to view captured cookie sessions:

Mana firelamb

This asks which session you want to try from the captured cookie sessions. It then tries to open the session in Firefox. If the user is still logged in you could take over their session.

You can also review the log files manually in /var/lib/mana-toolkit.

Mana works equally well against laptops and mobile devices. And the inherent trust of “preferred Wi-Fi networks” that most systems use makes this tool very effective at intercepting and impersonating wireless routers.

To defend against this type of attack, turn off your Wi-Fi when not in use. Be very careful using free or public Wi-Fi networks. And it is best to perform any secure transactions over a wired LAN instead of Wi-Fi!

If you enjoyed this tutorial and want to learn more about computer security testing, check out my book, “Basic Security Testing with Kali Linux.”

This was cross-posted from the Cyber Arms blog.

Why Two-Factor Authentication is Too Important to Ignore Tue, 21 Oct 2014 11:46:56 -0500 In August, it happened again: a headline-grabbing warning that 1.2 billion passwords had been stolen by a Russian cyber gang, dubbed CyberVor, caused quite a stir, although questions were raised about the legitimacy of the CyberVor report and the scant details surrounding it. In the past, these types of events did not even make it into specialized magazines and news services, much less major news outlets; and if they did, superlatives were required to capture anyone’s attention. However, just because password theft may not always garner a big news report doesn’t mean it isn’t happening all the time.

On the contrary, and especially during the past year, quite a few companies have admitted to being victimized by data breaches and losing control of large amounts of data. Big retail chains Home Depot and Target experienced security breaches that culled information from more than 100 million cards combined, while 233 million eBay users were put at risk of identity theft after an online security breach. 

Going forward, we have to be prepared for the possibility that private information provided to a third party, like a merchant or a public agency, will be stolen. What does this mean for the security of user passwords? “Set it and forget it” password security simply does not exist anymore. Passwords today can only be regarded as a temporary security measure that should be limited in both time of use and number of accounts.

Nevertheless, experience shows that users recycle the same password for many or all of their accounts. For many, it’s just not feasible to memorize dozens of unique passwords that are sufficiently strong.

Users can avoid this problem and improve their data security by implementing a secure password safe, such as 1Password or KeePass, on their end devices and by using a really strong password to secure it. The safe contains the passwords of all accounts and automatically applies them during the login procedure.

Two-factor authentication is equally safe. In addition to a password, the user is required to present a second component for verification. With this method, the user combines knowledge (a password) with ownership (a mobile phone or token).

Two-factor authentication has long been a standard for safety-critical applications. For example, it has been possible for years to secure VPN remote access using a second authentication factor. In the past, the “something you have” component of two-factor authentication consisted of a small token displaying a number necessary for login. The user had to enter this one-time password (OTP) in addition to the password. Now, other solutions are available that do not require the use of tokens. Select VPN solutions with Secure Enterprise Management (SEM) capabilities, for example, allow for use of OTP with mobile phones or smartphones.

With the exception of online banking providers, websites have rarely offered two-factor authentication. However, due to the increasing frequency of data theft, more sites are offering it. For example, Microsoft (OneDrive,, etc.) and Facebook now offer two-factor authentication, and Dropbox can also be secured with a second login factor. This added layer of security helps reduce the risk of data theft even if a user could not resist picking his pet’s name for a password, or if he decided to pick the most popular password worldwide: “123456.”

This was cross-posted from the VPN HAUS blog.

Hacker Myths Debunked Mon, 20 Oct 2014 12:12:23 -0500 By: David Bisson

We’ve been hearing a lot about hackers recently, mostly in connection to serious data breaches. We think of hackers compromising the nude photographs of popular female celebrities, including Jennifer Lawrence and Kate Upton. We think of them stealing 56 million Home Depot customers’ credit card information. Or using Backoff malware to infiltrate Kmart or Dairy Queen.

All of these incidents teach us to think of hackers as nefarious individuals. They will stop at nothing to degrade our privacy, steal our identities, and ruin our experiences in cyberspace. Their craft is dishonorable, and so they deserve to be hated—and feared.

But is this stereotype accurate? Are all hackers like this?

In honor of National Cyber Security Awareness Month, which aims to improve user awareness about cyber threats online, below we problematize some of the most common hacker stereotypes we’ve come to learn and love. We do this in an effort to appreciate hacking for the complicated, variable and highly individualized practice that it is.

   Myth #1: Hackers Are Maladjusted Young People Who Live In Their Mothers’ Basements

We all know this one quite well. Some of the most dangerous hackers—the myth goes—wear black T-shirts, have long hair and are under 30 years of age. They spend all of their time on the computer – a passion which they use to isolate themselves from the rest of society. They are weird and maladjusted, which helps to explain why they want to do what they do.

Sure, there might be hackers who fit this stereotype, but countless others do not. Take the idea that hackers spend endless hours at the computer—this is a common misperception of computer scientists that, despite its wide appeal, doesn’t hold any water. In fact, many hackers have balanced relationships with their computers, while others even have “day jobs” and just hack on the side.

Hackers can have healthy relationships with their peers and families and have proven records of academic excellence in school. Some may be young, but others are not, having spent decades accumulating their technical expertise. Many are well-adjusted to society, which in one light could make some hackers more dangerous.

John Walker, CTO of the Cytelligence Cyber Forensics OSINT Platform and a Blogger for Tripwire, explains: “There are [some] in our midst equally dangerous and very well accomplished over a number of years in which they have learned their trade, honed their skills, and could just be that guy sitting next to you in your office – so think again, don’t make too many preconceived judgements, and remember to consider the ‘Unusual Suspect Factor.’”

Myth #2: Hacking Is A “Boys Only” Club

Hacking may be a predominantly male activity, but that doesn’t mean that there aren’t female hackers out there. For instance, a loose-knit group of women known as Haecksen, a hacker club that uses for its name the German word for “witch,” helped organize the Chaos Computer Club (CCC) Congress in 2010.

Other female hackers have spoken at DefCon or write viruses that destroy information instead of stealing it. We might hear the most about male hackers, but women are just as active in hacking communities.

Myth #3: All Hackers Are Masters of Their Craft

The way we paint hackers today elevates them to a level of unmatched technical prowess. Using this platform of expertise, they compromise any system they want with ease, regardless of whatever security protocols may be in place. Subsequently, as information security professionals, we are forced to play defense against these computer masters.

Mark Stanislav, Security Project Manager at Duo Security, explains this is not always the case: “Manipulation of systems is often as predictable as watching the sunrise from the east every morning. After enough practice and/or education, a hacker of a specific context can likely say, ‘Oh, I’d totally try to do XYZ to hack that’ given a scenario.”

Additionally, not all hackers are necessarily skilled computer programmers. Sometimes all hackers need to know is where to look with respect to a particular system configuration or maybe they let a tool do that for them, despite having minimal understanding of how the tool works. Ultimately, we all know that it doesn’t take a computer expert to break into a network.

Myth #4: All Hacking Is Bad

The notion that all hackers intend to cause harm is one of the biggest hacking myths today. Lamar Bailey, Director of Security R&D at Tripwire, says:

“Hacking systems to gain access to data or features that are denied to the current user is the most popular definition that most people think of when it comes to hackers, but it goes much deeper. Hacking hardware to add new features has become a very popular way to extend the life and increase the security of all devices in our homes.”

Ultimately, hacking has less to do with compromising data than with developing creative solutions to technical problems. Ken Westin of Tripwire rightly notes this fact: “Hacking is about understanding the underlying nature of technology—knowing specifically how things work from a high level all the way down to its most granular components. When you fully understand how things work, there is power in being able to manipulate it, shape it and utilize it in ways it may not have been intended to.”

In this sense, hacking, like many other things, comes down to intentions. Ethical hacking can improve the security of various products, whereas malicious hacking seeks to undermine data integrity. It’s how people hack which shapes the nature of a particular incident.

Hacking In All Its Colors

We hear a lot about hackers these days, but mainly those who are after people’s personal and financial information. The majority of hackers out there aren’t social miscreants who are technical masters bent on shutting down the Internet. They may be less knowledgeable, or they may be in the hacking business for the sake of computer security. The sooner we realize hacking’s variability, the sooner we can champion the whitehats who are helping to protect us, and the sooner we can broaden our focus to target those who threaten our security online.

This was cross-posted from Tripwire's The State of Security blog.

The ASV Process Is Broken – Part 1 Mon, 20 Oct 2014 10:34:25 -0500 The topic of ASV scanning came up as usual at the 2014 PCI Community Meeting.  The questions all seemed to revolve around how to obtain a passing scan.  What the Council representatives suggested is that multiple scans can be put together to create a passing scan.  Unfortunately, what the Council keeps suggesting as the solution is impossible to implement and here is why.

In a typical environment, an ASV customer logs onto their account with the ASV and schedules their ASV scans of their PCI in-scope assets.  The customer may also add or subtract the number of IP addresses that are scanned as the scope of their external environment may change.  Depending on a number of factors, there may be one scan or multiple scans.  The vulnerability scans are executed on the schedule and the results are returned to the customer.

If there are false positive results, or results with which the customer does not agree, they can apply to the ASV to have those results removed.  If there are actual vulnerabilities, the customer can contact the ASV with how they have mitigated them, and the ASV can either accept those mitigations and give the customer a passing scan or allow the results to stand.

So where are the problems?

Whether the Council acted on evidence that cheating was occurring or on anecdote is unknown.  But because of the potential for cheating by customers, the Council mandated a number of years ago that ASVs lock down their scanning solutions so that customers cannot modify anything regarding testing other than the IP addresses involved.  The ASV Program Guide v2.0, on page 11, states:

“However, only an authorized ASV employee is permitted to configure any settings (for example, modify or disable any vulnerability checks, assign severity levels, alter scan parameters, etc), or modify the output of the scan.  Additionally, the ASV scan solution must not provide the ability for anyone other than an authorized ASV employee to alter or edit any reports, or reinterpret any results.”

So right off the bat, the Council’s recommendation of “putting together multiple reports” is not as easily accomplished based on their earlier directives.  That is because it will require the ASV’s customer to get the ASV to agree to put together multiple reports so that they can achieve a passing scan.  That implies that the ASV’s solution will even accommodate that request, but then the ASV needs to be agreeable to even do that task.  Based on the Council’s concerns regarding manipulation of scanning results and the threat of the Council putting ASVs in remediation, I do not believe the ASVs will be agreeable to combining reports as that would clearly be manipulating results to achieve a passing scan.

But it gets worse.  As a lot of people have experienced, they can scan one day and get a passing scan and then scan a day or even hours later and get a failing scan.  The reason this happens is that the vulnerability scanning vendors are adding vulnerabilities to their signature sets as soon as they can, sometimes even before vendors have a patch.  As a result, it is very easy to encounter different results from scan to scan including failing due to a vulnerability that does not yet have a solution or the vendor only just provided a patch.

But if that is not enough, it gets even worse.  Statistically, the odds of obtaining a passing scan are slim, and they get worse still if you are only doing quarterly scanning.  A review of the National Vulnerability Database (NVD) shows that 94% of vulnerabilities from 2002 to 2014 have a Common Vulnerability Scoring System (CVSS) score of 4.0 or greater.  That means it is almost impossible to obtain a passing vulnerability scan, particularly if you are only scanning quarterly, when vulnerabilities are announced almost daily and vendors such as Microsoft release patches monthly.  Those of you scanning monthly can attest that even on a 30-day schedule, a passing scan is nearly impossible to get.
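The pass/fail mechanic behind that statistic is simple: under the ASV rules, any finding with a CVSS base score of 4.0 or higher fails the scan. A quick sketch of that rule in Python (the findings below are made up for illustration):

```python
FAIL_THRESHOLD = 4.0  # ASV rule: any finding scored CVSS 4.0 or higher fails the scan

def scan_passes(findings):
    """Return (passed, failing) for a list of (name, cvss_score) findings."""
    failing = [(name, score) for name, score in findings if score >= FAIL_THRESHOLD]
    return len(failing) == 0, failing

# Hypothetical quarterly scan results
findings = [
    ("TLS weak cipher suites", 4.3),
    ("HTTP TRACE method enabled", 5.0),
    ("Server banner disclosure", 2.6),
]
passed, failing = scan_passes(findings)
print("PASS" if passed else "FAIL: %d finding(s) at or above CVSS 4.0" % len(failing))
```

With 94% of all published vulnerabilities scoring at or above that threshold, a single unpatched medium-severity finding in a quarterly window is enough to fail the whole scan, which is exactly the problem described above.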

For an organization that has only one Web site, this situation is likely not a problem.  But when organizations have multiple Web sites which a lot of organizations large and small have, you are really struggling in some cases to get passing scans.

But let us add insult to injury.  A lot of organizations have their eCommerce environments running on multiple platforms such as Oracle eCommerce or IBM Websphere.  In those examples, this situation becomes a nightmare.

Platforms such as those from Oracle and IBM may run on Windows or Linux, but Oracle and IBM do not allow the customer to patch those underlying OSes as they choose.  These vendors ship quarterly, semi-annually or on some other schedule, a full update that patches not only their eCommerce frameworks, but also the underlying OS.  The vendors test the full compatibility of their updates to ensure that the update will not break their frameworks.  In today’s 24x7x365 world, these vendors can run into serious issues if eCommerce sites begin to not function due to an update.  However, that also means there is the possibility that critical patches may be left out of an update due to compatibility and stability reasons.  As a result, it is not surprising that in some updates, vulnerabilities may still be present both those that are new and those that have been around for a while.

But if Oracle and IBM are not patching on 30-day schedules, there is a high likelihood that the scans will not pass.  This means that the customer must go to their ASV with compensating control worksheets (CCWs) to mitigate these vulnerabilities and obtain passing scans.

The bottom line is that the deck is stacked against an organization obtaining a passing scan.  While the Council and the card brands do not recognize this, the rest of the world sure has come to that determination.

In Part 2, I will discuss the whole ASV approach and how I believe the drive to be the cheapest has turned the ASV process into a mess.

This was cross-posted from the PCI Guru blog.

Last Chance to Register for 2014 ICS Cyber Security Conference Fri, 17 Oct 2014 13:21:00 -0500 ATLANTA--(BUSINESS WIRE)--On Monday, October 20, 2014, attendees from around the world will gather in Atlanta, Georgia for the 2014 Industrial Control Systems (ICS) Cyber Security Conference.

“It has become paramount that critical infrastructures balance the needs of ICS reliability and safety with cyber security”

Held from October 20 – 23, 2014 at the Georgia Tech Hotel and Conference Center, hundreds of professionals will benefit from the robust exchange of technical information, actual incidents, insights, and best practices to help protect critical infrastructures from cyber attacks.

As the longest-running cyber security-focused conference for the industrial control systems sector, the event will cater to the energy, utility, chemical, transportation, manufacturing, and other industrial and critical infrastructure organizations.

Produced by SecurityWeek, the conference will address the myriad cyber threats facing operators of ICS around the world, and will address topics covering ICS, including protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

The Conference is unique and has historically focused on control system end-users from various industries and what cyber vulnerabilities mean to control system reliability and safe operation. It also has a history of having discussions of actual ICS cyber incidents.

Online registration is still available, and full conference passes include educational workshops on Monday.

“The IT community and ICS community are now bridging the security gap,” said Joe Weiss, founder of the ICS Cyber Security Conference, which was recently acquired by SecurityWeek. “It has become paramount that critical infrastructures balance the needs of ICS reliability and safety with cyber security,” Weiss added.

Sponsors of the conference include Honeywell, Ultra Electronics 3eTI, BAE Systems, Lockheed Martin, Siemens, Waterfall Security, Symantec, Check Point Software Technologies, PFP Cybersecurity and Kaspersky Lab.

For more information and online registration visit:

About the ICS Cyber Security conference:

The ICS Cyber Security Conference is the conference where ICS users, ICS vendors, system security providers and government representatives meet to discuss the latest cyber-incidents, analyze their causes and cooperate on solutions. Since its first edition in 2002, the conference has attracted a continually rising interest as both the stakes of critical infrastructure protection and the distinctiveness of securing ICSs become increasingly apparent.

About SecurityWeek:
The mission of SecurityWeek is to help information security professionals do their jobs better by providing timely and insightful news, information, viewpoints, analysis and experiences from experts in the trenches of information security. With content written by industry professionals and a seasoned news team, SecurityWeek produces and distributes insightful and useful content and data to information security professionals around the globe. Visit for more information.

Copyright 2010 Respective Author at Infosec Island]]>
The Chinese Truly are Attacking our Critical Infrastructure Fri, 17 Oct 2014 09:10:16 -0500 There have been many reports of the Chinese and others attacking our critical infrastructure. Last year, Kyle Wilhoit from Trend Micro developed a control system honeypot representing a small water utility in rural Missouri and then identified the attackers, some of whom were from China. Bob Radvanovsky from Infracritical took a similar approach, and the results are astounding. He acquired some RuggedCom switches from eBay and set up a network emulating a well pumping station. Within two hours of connecting the systems, he was being attacked, primarily from China. This is even more interesting when you realize that when the attacks started, the honeypot had not yet appeared on Shodan. This shows the level of monitoring going on in China. Bob will be discussing these results October 21st at the ICS Cyber Security Conference in Atlanta.


Copyright 2010 Respective Author at Infosec Island]]>
Acting on MSSP Alerts Thu, 16 Oct 2014 10:24:00 -0500 Have you seen the funnel lately?



In any case, while you are contemplating the funnel, also ponder this:

what do you get from your MSSP: ALERTS or INCIDENTS? [if you are getting LOGS from them, please reconsider paying any money for their services]

What’s the difference? Security incidents call for an immediate incident response (by definition), while alerts need to be reviewed via an alert triage process in order to decide whether they indicate an incident, a minor “trouble” to be resolved immediately, a false alarm or a cause to change the alerting rules in order to not see it ever again. Here is an example triage process:
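That decision flow can be sketched in code. A toy Python version follows; the four outcomes come from the text above, but the signals and thresholds are invented assumptions, not a prescribed workflow:

```python
from enum import Enum

class Disposition(Enum):
    INCIDENT = "declare incident, activate IR"
    MINOR = "resolve as minor trouble"
    FALSE_ALARM = "close as false alarm"
    TUNE_RULE = "change the alerting rule"

def triage(alert: dict) -> Disposition:
    """Toy triage of a single MSSP alert using whatever context it carries."""
    # A rule that keeps firing on known-benign activity should be tuned out.
    if alert.get("known_benign_matches", 0) > 10:
        return Disposition.TUNE_RULE
    # Nothing in other data sources corroborates the alert: likely a false alarm.
    if not alert.get("corroborated", False):
        return Disposition.FALSE_ALARM
    # Corroborated activity against a critical asset: declare an incident.
    if alert.get("target_criticality") == "high":
        return Disposition.INCIDENT
    # Everything else is a minor "trouble" to be resolved immediately.
    return Disposition.MINOR

print(triage({"corroborated": True, "target_criticality": "high"}))
```

The point of the sketch is only that every alert must land in exactly one of the four buckets, and that reaching a bucket requires data beyond the alert itself.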



Now, personally, I have an issue with a situation where an MSSP is tasked with declaring an incident for you. As you can learn from our incident response research, declaring a security incident is a big decision made by several stakeholders (see examples of incident definitions here). If your MSSP partner has a consistent history of sending you alerts that always lead to incident declaration (!) and IR process activation – this is marvelous. However, I am willing to bet that such a “perfect” track record is achieved at the heavy cost of false negatives, i.e. not being informed about many potential problems.

So, it is most likely that you get ALERTS. Now, a bonus question: whose job is it to triage the alerts to decide on the appropriate action?



Think harder – whose job is it to triage the alerts that your MSSP sends you?

After you have figured out that it is indeed the job of the MSSP customer, how do you think they should go about it? Some of the data needed to triage the alert may be in the alert itself (such as a destination IP address), while some may be in other systems or data repositories. Some of said systems may be available to your MSSP for access (example: your Active Directory) and some are very unlikely to be (example: your HR automation platform). So, a good MSSP will actually triage the alerts coming from their technology platform to the best of their ability – they do have the analysts and some of the data, after all. So, think of MSSP alerts as “half-triaged” alerts that require further triage.

For example:

  • NIPS alerts + firewall log data showing all sessions between the IP pair + logs from an attack target + business role of a target (all of these may be available to the MSSP) = high-fidelity alert that arrives from the MSSP; it can probably be acted upon without much analysis
  • NIPS alerts + firewall log data (these are often available to the MSSP) = “half-triaged” alerts that often need additional work by the customer
  • NIPS alerts (occasionally these are the only data available) = study this Wikipedia entry on GIGO.

A revelation: MSSPs are in the business of … eh… business. So, MSSP analysts are expected to deliver on the promise of cost-effectiveness. Therefore, the quality of their triage will depend on the effectiveness of their technology platform, available data (that customers provide to them!), skills of a particular analyst and – yes! – expected effort/time to be spent on each alert (BTW, fast may mean effective, but it may also mean sloppy. Slow may mean the same…)

Another revelation: MSSP success with alert triage will heavily depend on the data available to their tools and analysts. As a funny aside, will you go into this business: I will send you my NIDS alerts only (and provide no other data about my IT, business, etc.) and then offer to pay you $1,000,000 if you only send me the alerts that I really care about and that justify an immediate action in my environment. Huh? No takers?

So, how would an MSSP customer triage those alerts? They need (surprise!):

  • People, i.e. security analysts who are willing and capable of triaging alerts
  • Data, i.e. logs, flows, system context data, business context, etc. that can be used to understand the impact of the alerts.

The result may look like this:
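In lieu of a diagram, here is a minimal sketch of that combination of analyst judgment and local context data. The asset records and field names are invented for illustration:

```python
# Hypothetical local context that the MSSP typically does not have access to.
ASSET_DB = {
    "10.0.0.5": {"role": "HR database", "owner": "hr-team", "criticality": "high"},
    "10.0.0.9": {"role": "test VM", "owner": "lab", "criticality": "low"},
}

def enrich(alert):
    """Attach asset/business context to a half-triaged MSSP alert."""
    ctx = ASSET_DB.get(alert["dest_ip"], {"role": "unknown", "criticality": "unknown"})
    return {**alert, **ctx}

enriched = enrich({"signature": "SQL injection attempt", "dest_ip": "10.0.0.5"})
print(enriched["criticality"])  # prints "high" - the analyst now knows what is at stake
```

The same NIDS signature against the test VM and against the HR database produces very different dispositions, which is exactly the context an MSSP without your data cannot supply.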


Mind you that some of the systems that house the data useful for alert triage (and IR!) are the same systems you can use for monitoring – but you outsourced the monitoring to the MSSP. Eh… that can be a bit of a problem :-) That is why many MSSP clients prefer to keep their own local log storage inside a cheap log management tool – not great for monitoring, but handy for IR.

Shockingly, I have personally heard about cases where MSSP clients were ignoring 100% of their MSSP alerts, had them sent to an unattended mailbox or hired an intern to delete them on a weekly basis (yup!). This may mean that their MSSP was no good, or that they didn’t give them enough data to do their job well… As a result, your choice is:

  • you can give more data to an MSSP and [if they are any good!] you will get better alerts that require less work on your behalf, or
  • you can give them the bare minimum and then complain about poor relevance of alerts (in this case, you will get no sympathy from me, BTW)

And finally, an extra credit question: if your MSSP offers incident response services that cost extra, will you call them when you have an incident that said MSSP failed to detect?! Ponder this one…

This was cross-posted from the Gartner blog.

Copyright 2010 Respective Author at Infosec Island]]>
When Remote Access Becomes Your Enemy Thu, 16 Oct 2014 10:15:13 -0500 As convenient as it would be for businesses to have all their IT service providers working on-site, just down the hall, that’s not always possible. That’s why secure remote access is a component frequently found in the digital toolboxes of service providers that offer maintenance, troubleshooting and support from locations other than where the product or system is being used.

This arrangement makes sense: It saves enterprises time and money.

Yet, that doesn’t mean remote access is always foolproof. Although it’s long been possible to securely implement remote access, sloppy work and carelessness have increasingly created critical vulnerabilities.

In April 2013, for example, it became possible to damage Vaillant Group ecoPower 1.0 heating systems by exploiting a highly critical security hole in the remote maintenance module. The vendor advised customers to simply pull the network plug and wait for the visit of a service technician.

About one year later, AVM, the maker of the Fritz!Box router, also suffered a security vulnerability. For a time, it was possible to gain remote access to routers and, via the phone port functionality, to make phone calls that were sometimes extremely expensive. Only remote access users were affected.

Then, in August 2014, Synology, a network attached storage (NAS) supplier, was affected. In this case, it was possible to gain control over the entire NAS server data through a remote access point.

Finally, at this year’s Black Hat conference in August, two security researchers revealed that up to 2 billion smartphones could be easily attacked through security gaps in software.

It’s clear that these attacks and vulnerabilities are all part of a trend – and they speak to the importance of businesses eliminating remote access security gaps.

Who is Responsible for Securing Remote Access?

There’s no doubt that remote access is an important network feature. IT support speed and troubleshooting capability would be greatly hampered without remote access. It is also needed for mobile workers to establish connections to their corporate networks via a VPN.

VPNs by design are secure, and when users implement, maintain and utilize them properly, the technology works perfectly. However, security lapses may occur in cases where a user is unaware that secure remote access has been provided (i.e. it is more or less a hidden feature) or simply takes no interest in it.

In the Fritz!Box case, the critical issue of increasing digitization in private environments could be seen very clearly. Despite the problem being reported by numerous media outlets and the vendor quickly releasing a firmware update, tens of thousands of routers were still affected, many of them weeks later.

Unfortunately for IT administrators responsible for network security, not every Internet user reads computer magazines and stays up-to-date with information from various news services. Not every router owner has the tech savvy or feels comfortable updating device firmware. They may do the bare minimum – understand the purpose of a VPN and comply with the necessary security policies – but what if they don’t? Or what if they aren’t even aware of security measures?

The value of VPN solutions is that they provide a layer of security protection for when users unknowingly create security vulnerabilities. This means IT administrators are responsible for improving the security of remote access by using up-to-date, approved technology and implementing automated update procedures that fix reported bugs quickly and without user intervention.

This was cross-posted from the VPN HAUS blog.


Copyright 2010 Respective Author at Infosec Island]]>
Security Companies Hit Hikit Backdoor Used by APT Group Thu, 16 Oct 2014 01:30:00 -0500 [SecurityWeek] - A coordinated effort by security companies has struck a blow against malware tools used by a cyber-espionage group known as Hidden Lynx.

Hidden Lynx is believed to be based in China and has been tied to attacks against U.S. defense contractors and other organizations around the world. In a collaboration dubbed 'Operation SMN', researchers from a number of companies joined forces to target the Hikit backdoor and other malware used by the group.

The effort was coordinated by security firm Novetta as part of Microsoft's new Coordinated Malware Eradication program, and also involved Symantec, Cisco Systems, FireEye, F-Secure, iSight Partners, ThreatConnect, Tenable, Microsoft, ThreatTrack Security and Volexity. A report with technical details about the effort is set to be released Oct. 28.

"We felt it was important to take action proactively in coordination with our coalition security industry partners," said Novetta CEO Peter B. LaMontagne, in a statement. "The cumulative effect of such coordinated approaches could prove quite disruptive to the adversaries in question and mitigate some of the threat activity that plagues the joint customer base of this coalition."

Read the full report on SecurityWeek. 

Copyright 2010 Respective Author at Infosec Island]]>
Spying Flashlight Apps Reveal User Inattentiveness to Cyber Security Wed, 15 Oct 2014 12:27:00 -0500 By: David Bisson

Our smartphones have become a tool that most of us admit we could not live without. After only a few taps on our screen, we can monitor our inbox, our bank account, our social media networks and now, even our homes.

What we often don’t realize, however, is the amount of personal information our phones actually store and how easily accessible we make this data, not only for ourselves, but for others, too. A recent Android study proves many of us are likely not careful enough.

A group of researchers at Snoopwall—a vendor whose technology detects and blocks spyware and malware on a variety of platforms—found that the most widely used flashlight apps are furtively stealing personal information stored on users’ mobile devices.

According to the company’s Threat Assessment Report, the top 10 searched flashlight apps in the Google Play Store all perform functions that surpass the basic needs of what flashlight apps should be executing.


Screenshot of “flashlight app” keyword search in Google Play, displaying all malicious flashlight apps detected thus far.

These seemingly harmless apps, which have accumulated half a billion downloads, have put the privacy and security of users at risk simply by requesting overzealous permissions that users unknowingly agree to, including permission to:

  • Modify or delete the contents of your USB storage
  • Change system display settings
  • Access your precise location (GPS and network-based)
  • Write Home settings and shortcuts
  • View all network connections
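The mismatch is easy to reason about mechanically: compare what an app requests against what its stated purpose plausibly needs. A sketch in Python; the baseline below is an assumption about what a flashlight needs, and the permission strings only approximate Android's:

```python
# Assumed baseline: a simple flashlight plausibly needs camera/LED access only.
FLASHLIGHT_BASELINE = {"android.permission.CAMERA", "android.permission.FLASHLIGHT"}

def suspicious_permissions(requested):
    """Return the requested permissions that exceed the flashlight baseline."""
    return sorted(set(requested) - FLASHLIGHT_BASELINE)

# Hypothetical manifest of an over-reaching flashlight app.
requested = [
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.WRITE_EXTERNAL_STORAGE",
]
for perm in suspicious_permissions(requested):
    print("over-broad:", perm)
```

This is the mental check users should perform at install time: every permission outside the baseline deserves an explanation before tapping "accept".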

For Ken Westin, a security researcher at Tripwire, this is all too familiar: “There is little vetting of applications before they are deployed. When you install an Android app, it shows you what it has permissions to access, but most people ignore it and just click next to get the app installed. There are a lot of free apps that have permissions on devices they shouldn’t, even ‘security’ applications.”

Some users might have felt safe downloading the apps because they installed them using Google Play and not a third-party site but as Tripwire CTO Dwayne Melancon explains, that doesn’t make an app any more secure.

“Android is pretty ‘Wild Wild West’ because the apps are not well curated,” said Melancon. “People often misunderstand the warning not to download apps from unknown or trusted sources. They’ll say, ‘I got it off the Play store—I trust that source’ without realizing the unknown and untrusted author of the app is the actual source.”

For the short term, users are encouraged to uninstall any of the malicious flashlight apps listed here. If your app is able to modify your phone’s storage and/or write settings, it is recommended that you reset your phone. A factory reset and/or complete wipe might be necessary.

Going forward, users are recommended to follow a number of best practices that optimize both their privacy and security on their mobile devices, such as:

  1. Disabling GPS, except when traveling or in the event of an emergency
  2. Disabling Near Field Communications (or iBeacon for Apple devices) permanently
  3. Disabling Bluetooth, except when making a hands-free call while driving
  4. Covering the microphone and/or webcam with tape when neither is in use

Most importantly, however, users need to begin looking at the permissions their apps request of them more closely. We should all be using common sense to ask whether a particular app needs access to the information it wants. If it doesn’t, we’re better off doing some research online and looking for safer alternatives, like this privacy flashlight developed by Snoopwall.

Common sense goes a long way in protecting ourselves online and on our phones, and it’s up to us to accept that responsibility.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Introducing the Shoulders of InfoSec Project Wed, 15 Oct 2014 12:19:49 -0500 "If I have seen further it is by standing on the shoulders of giants"

Most famously attributed to Sir Isaac Newton, this quote reflects the sentiment of a new project.  In InfoSec we all stand on the shoulders of giants.

It was just supposed to be a talk at DerbyCon, but as I dug into the topic I realized it needed to be more than just one talk.

Another relevant quote is George Santayana’s oft-misquoted:

“Those who cannot remember the past are condemned to repeat it.”

In information security we have a very bad habit of ignoring the past; many times it isn’t even a failure to remember, it is a failure to ever have known who and what came before.

Thus, the Shoulders of InfoSec Project.  It is an attempt to compile a lot of information about early figures in InfoSec (and hopefully it will move beyond just the early figures).  There are some great resources out there already, notably the University of Minnesota's Charles Babbage Institute which includes a great set of oral histories of security luminaries.  The goal is not to compete with, but to complement and highlight other relevant projects.

A note about the name: the project’s name is “Shoulders…”, not “Giants…”, because you do not need to be a giant to offer a shoulder to help others see further.  Many people who will never be called giants have still helped the rest of us see further.

There are two components to the project at this time: a low-volume blog and the wiki.  The project wiki is a work in progress; it includes an ever-expanding list of names, each with a dedicated page of links to relevant information, and will hopefully gain more color and context as the project develops.  The wiki also includes a references and resources page with links to several related sites and projects.

The presentation I delivered at DerbyCon is up on Adrian Crenshaw’s Irongeek site if you would like to see some of the ideas and people featured in this project.

Suggestions and contributions are welcome, see the wiki for information about contribution to the project.

This was cross-posted from the Uncommon Sense Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
SandWorm Hacking Team Exploited 0-day Against NATO and Other Government Entities Tue, 14 Oct 2014 17:05:26 -0500 The iSIGHT Partners firm uncovered a Russian hacking team, dubbed SandWorm, that was running a cyber espionage campaign against NATO and other government entities.

According to a new report issued by the cyber security firm iSIGHT Partners, a group of Russian hackers has been exploiting a previously unknown flaw in Microsoft’s Windows operating system to spy on NATO, the Ukrainian government, a U.S. university researcher and many other entities. The researchers at iSIGHT dubbed the hacking group SandWorm because of references discovered in its code to the science-fiction novel “Dune.”

The experts at iSIGHT Partners worked in close collaboration with Microsoft during the investigation. The company announced the discovery of a zero-day vulnerability affecting all supported versions of Microsoft Windows and Windows Server 2008 and 2012. The vulnerability has been assigned the code CVE-2014-4114 and, according to iSIGHT, has been exploited on a large scale in a cyber espionage operation by a Russian hacking team. The nature of the targets and the tactics, techniques, and procedures (TTPs) adopted lead the experts to believe that this is the work of state-sponsored hackers.

“This is consistent with espionage activity,” said iSight Senior Director Stephen Ward. “All indicators from a targeting and lures perspective would indicate espionage with Russian national interests.”

Microsoft is already working on a security update for CVE-2014-4114 that will be available in the next round of patch updates on October 14th.

According to the report issued by iSIGHT, the APT has been active since at least 2009. Its targets in the recent campaign also included a Polish energy firm, a Western European government agency and also a French telecommunications firm.

Image: iSIGHT Partners SandWorm timeline, 13 October 2014

The experts began the investigation in late 2013, when the NATO alliance was targeted by the SandWorm hacking team with exploits other than the zero-day, but they discovered the critical zero-day in August, when the group targeted the Ukrainian government in the lead-up to the NATO summit in Wales.

“In late August, while tracking the Sandworm Team, iSIGHT discovered a spear-phishing campaign targeting the Ukrainian government and at least one United States organization. Notably, these spear-phishing attacks coincided with the NATO summit on Ukraine held in Wales.” states the report published by iSIGHT.

Security experts speculated that the intensification of the cyber dispute between Russia and Ukraine could have increased the likelihood of discovering operations that had flown under the radar for so long.


Below are chronological details provided by the researchers on the SandWorm activity:

  • The NATO alliance was targeted as early as December 2013 with exploits other than the zero-day
  • GlobSec attendees were targeted in May of 2014 with exploits other than the zero-day
  • June 2014
    • Broad targeting against a specific Western European government
    • Targeting of a Polish energy firm using CVE-2013-3906
    • Targeting of a French telecommunications firm using a BlackEnergy variant configured with a Base64-encoded reference to the firm

The SandWorm hacking team sent spear-phishing emails with malicious attachments to compromise the victims’ machines. The lures mentioned a global security forum in Russia and a purported list of Russian terrorists.

Another element suggesting that Russia is responsible for the cyber espionage campaign is code discovered on the C&C server, located in Germany, which had not been properly secured and contained Russian-language computer files that had been uploaded by the hackers.

“They could have closed it off, and they didn’t,” he said of the server. “It was poor operational security.”

The investigators noticed that SandWorm apparently re-engineered malware previously used by other APT groups, probably to disguise its campaigns.

This was edited and cross-posted from the Security Affairs blog.

Copyright 2010 Respective Author at Infosec Island]]>
Security Lessons from Complex, Dynamic Environments Tue, 14 Oct 2014 10:12:41 -0500 Security is hard.

Check that- security is relatively hard in static environments, but when you take on a dynamic company environment security becomes unpossible. I'm injecting a bit of humor here because you're going to need a chuckle before you read this.

Some of us in the security industry live in what's referred to as a static environment. Low rate of change (low entropy) means that you can implement a security control or measure and leave it there, knowing that it'll be just as effective today as tomorrow or the day after. Of course, this takes into account the rate at which effectiveness of security tools degrades, and understanding whether things were effective in the first place. It also means that you don't have to worry about things like a new system showing up on the network very often or a new route to the Internet. And when these do happen, you can be relatively sure something is wrong.

Early on in my career I worked for a technical recruiting firm. Computers were just a tool and companies having websites was a novelty. The ancient Novell NetWare 3.11 systems had not seen a reboot in literally half a decade but nothing was broken so everything just kept running and slowly accumulating inches of dust in the back room. When I worked there we modernized to NT 3.51 (don't laugh, I'm dating myself here) and built an IIS-based web page for external consumption. That place was a low entropy environment. We changed out server equipment never, and workstations every 5 years. If all of a sudden something new showed up in the 30 node network, I'd immediately suspect something was amiss. At the time, nothing that exciting ever happened.

Fast forward a few years and I'm working for a financial start-up. It's the early 2000's and this company is the polar opposite of a static company. We have at least 1 new server coming online a day, and typically 5-10 new IP addresses showing up that no one can identify. We get by because we have one thing going for us. That one thing is the on-ramp to the Internet. We have a single T1 which connects us to the rest of the world. We drop in a firewall and an IDS (I think we used an early Snort version, maybe, plus a SonicWall firewall). When that changed and our employees started to go mobile (and thus VPN in), things got a little hairy.

Fast forward another few years and I'm working at one of the world's largest companies, on arguably one of the most complex networks mankind has ever seen. Forget trying to understand or know everything - we're struggling to keep track of the few things we DO know. Heck, we spent 4 weeks NMap'ing (and accidentally causing a minor crisis, oops) our own IP subnets to find all the NT4 systems when support finally, and seriously for real this time, ran out.

Now let's look at security in the context of this article (and the reported breach). Let me highlight a few key quotes for you:

"The event was complicated by the fact that the company had undergone corporate acquisitions, which introduced more network connections, and consequently a wider attack surface. The firm had more than 100 entry and exit points to the Internet."

You may chuckle at that, but I bet you have pretty close to this at your organization. Sure, maybe the ingress/egress points you control are few, and well protected, but it's the ones you don't know about which will hurt you. Therein lies the big problem - the disconnect between business risk and information security ("cyber") risk. If information security isn't a part of the fabric of your business, and a part of the core of the business decision-making process you're going to continue to fail big, or suffer by a thousand papercuts.

While not necessarily as sexy as that APT Defender Super Deluxe Edition v2.0 box your vendor is trying to sell you, network and system configuration management, change management and asset management are things you absolutely must get right, and must be involved in as a security professional for your enterprise. The alternative is you have total chaos wherein you're trying to plug each new issue as you find out about it, while the business has long forgotten about the project and has moved on. This sort of asynchronous approach is brutal in both human effort and capital expenditure.
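The asset management point can be made concrete: continuously reconcile what discovery actually finds on the network against what the inventory says should be there. A toy sketch, with illustrative addresses:

```python
def reconcile(inventory, discovered):
    """Compare the asset inventory against hosts actually seen on the network."""
    inv, seen = set(inventory), set(discovered)
    return {
        "unknown": sorted(seen - inv),   # on the wire but not in the books: investigate
        "missing": sorted(inv - seen),   # in the books but not seen: stale or offline
    }

result = reconcile(
    inventory=["10.1.1.10", "10.1.1.11"],
    discovered=["10.1.1.10", "10.1.1.12"],
)
print(result["unknown"])  # prints ['10.1.1.12']
```

Run continuously rather than as a one-off project, this turns "new IP addresses no one can identify" from a surprise into a routine work queue.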

Now let's focus on another interesting quote from the article. Everyone likes to offer advice to breach victims, as if they have any clue what they're saying. This one is a gem:

"Going forward, “rearchitecting the network is the best approach to ensure that the company has a consistent security posture across its wide enterprise," officials advised."

What sort of half-baked advice is that?! Those of you who have worked incidents in your careers, have you ever told someone that the best thing to do with your super-complex network is to totally rearchitect it? How quickly would you get thrown out of a 2nd story window if you did? While this advice sounds sane to the person who's saying it - and likely has never had to follow the advice - can you imagine being given the task of completely rearchitecting a large, complex network in-place? I've seen it done. Once. And it took super-human effort, an army of consultants, more outages than I'd care to admit, and it was still cobbled together in some places for "legacy support".

Anyway, somewhere in this was a point about how large, complex networks and dynamic environments are doomed to security failure unless security is elevated to the business level and becomes an executive priority. I recognize that not every company will be able to do this because it won't fit their operating and risk models - but if that's the case you have to prepare for the fallout. In the cases where risk models say security is a business-level issue you have a chance to "get it right"; this means you have to give a solid effort and align to business, and so on.

Security is hard, folks.

This was cross-posted from the Follow the Wh1t3 Rabbit blog.

Copyright 2010 Respective Author at Infosec Island]]>
Lawyer Or Security Professional? Mon, 13 Oct 2014 13:05:11 -0500

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  I have started to notice that there are a lot of security professionals and others who are closet lawyers, based on the discussions I have had with some of you regarding compliance with the PCI DSS.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper defining for each requirement you find onerous, why it is not relevant or not applicable for your environment and get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands’ repeated statements that version 2 is done as of December 31, 2014 were not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or ‘AFTER THAT DATE’ but apparently, there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go through in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced its new schedule for standard updates with the release of PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise, as there has been plenty of warning.

But then do not take this out on your QSA.  QSAs are just the messenger in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  You have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards for payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  As though, like flipping a light switch, something magical will occur on July 1, 2015 that makes them compliant.

For merchants, requirements 9.9 and 11.3 are going to be huge issues, particularly for those of you with large networks and lots of retail outlets.  If you have not started on these requirements now, there is no way you will be compliant with them by July 1.  Both require thought, planning and training.  Neither can be started overnight and result in compliance.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat-footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few clients that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.
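The core of an 11.3 segmentation test is simple to state: from a network segment that is supposed to be out of scope, try to reach services inside the cardholder data environment; any connection that succeeds is evidence the segmentation is not effective.  Below is a minimal sketch of that idea in Python.  The host addresses and ports are purely hypothetical placeholders, and a real penetration test goes far beyond a TCP connect scan, but the logic illustrates what the testers are checking.

```python
import socket

def reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, unreachable -- all count as not reachable.
        return False

def segmentation_gaps(cde_hosts, ports):
    """Run from a supposedly out-of-scope segment: any reachable CDE
    service is a segmentation gap that needs an ACL fix or re-architecture."""
    return [(h, p) for h in cde_hosts for p in ports if reachable(h, p)]

# Example (hypothetical CDE addresses -- substitute your own):
#   for host, port in segmentation_gaps(["10.10.1.5"], [443, 1433, 3389]):
#       print(f"GAP: {host}:{port} reachable from out-of-scope segment")
```

An empty result does not prove segmentation is sound, only that these particular paths were closed at test time, which is why 11.3 demands a methodology rather than a one-off scan.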

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are formally known.  Because of all of the POI tampering and hacks that have occurred, the Council added requirement 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with them.  This requires creating new procedures for daily (or more frequent) inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of each POI so that any opening of the case can be visually detected.  Locking cradles are being installed to secure every POI to the counter.  On top of all that come the new procedures themselves: performing at least daily inspections, knowing what to do if tampering is suspected and informing corporate of potential issues.  Again, not something that just happens and works on day one.
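The daily inspection boils down to a reconciliation: compare what the cashier observes today against an approved baseline of terminal serial numbers and security-seal IDs, and escalate anything that does not match.  Here is a minimal sketch of that reconciliation, with hypothetical serial and seal identifiers; real programs track this per store and feed the escalation procedure the text describes.

```python
def reconcile_pois(baseline, observed):
    """Compare today's POI inspection against the approved baseline.

    Both arguments map terminal serial numbers to the serialized
    security-seal ID seen on the device.  Returns a list of
    (issue, serial) pairs that must be escalated to corporate.
    """
    issues = []
    for serial, seal in baseline.items():
        if serial not in observed:
            issues.append(("missing", serial))       # terminal gone: possible theft
        elif observed[serial] != seal:
            issues.append(("seal-changed", serial))  # case may have been opened
    for serial in observed:
        if serial not in baseline:
            issues.append(("unknown-device", serial))  # possible substituted skimmer
    return issues
```

The code is trivial; the hard part, as the early adopters are finding, is the training and the discipline to run the check every day in every store.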

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact of these implementations is that service providers start missing their service level agreements (SLAs).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs of implementing the credential vault solution, they are suffering SLA penalties that just pile on the injuries.
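Conceptually, 8.5.1 just means the vault provisions and serves a distinct remote-access credential per customer, so that one stolen password cannot be replayed against every other customer the way the shared-credential breaches played out.  The sketch below is a toy in-memory illustration of that keying scheme (the class and method names are mine, not any vault product's API); real deployments use a commercial vault with check-out, rotation and audit logging, which is exactly where the SLA pain comes from.

```python
import secrets

class CustomerCredentialVault:
    """Toy illustration of PCI DSS requirement 8.5.1: credentials are
    keyed per customer, never shared across the customer base.
    Not a real vault product -- no rotation, check-out or audit here."""

    def __init__(self):
        self._store = {}

    def provision(self, customer_id):
        # A unique, randomly generated credential per customer replaces
        # the single shared remote-access login of the breached model.
        self._store[customer_id] = secrets.token_urlsafe(24)
        return self._store[customer_id]

    def credential_for(self, customer_id):
        """Look up the credential a technician must use for this customer."""
        return self._store[customer_id]
```

The principle is one line of design: the customer ID is part of the key, so compromise of one credential is contained to one customer.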

But the final straw is all of the people who closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, information supplements and other documents provided by the Council.  At least with v3 of the DSS the Council included the Guidance column for each of the requirements.  Not that adding Guidance makes a whole lot of difference, based on the arguments laid out by some people.  The Council could do us all a favor if they published the Reporting Template publicly alongside all of the other documents.  Not so much that people would necessarily read it, but it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks are meant to share the collective knowledge and lessons learned regarding security with everyone so that everyone can have a leg up and know ways of detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not waste their time developing scholarly legal arguments or twisting the English language as to why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers, and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer, digging deep to split legal hairs, then in my very humble opinion you really need to re-examine your career, or lack thereof.  I say lack thereof because, in my experience, security professionals who operate like lawyers do not have long careers.  They move around a lot because once people realize that they cannot deliver, they are forced to move on.  Eventually a reputation develops, and at that point these people end up having to find a new career because the security community knows their modus operandi.

This was cross-posted from the PCI Guru blog.

Copyright 2010 Respective Author at Infosec Island