FBI Says North Korea Was Behind Sony Hack Fri, 19 Dec 2014 11:50:00 -0600 The FBI has officially called out North Korea as responsible for the recent high-profile cyber attack on Sony Pictures, noting that "such acts of intimidation fall outside the bounds of acceptable state behavior."

"The FBI now has enough information to conclude that the North Korean government is responsible for these actions," a statement released by the FBI Friday afternoon said.

The full text of the FBI press release follows:

Update on Sony Investigation 

Today, the FBI would like to provide an update on the status of our investigation into the cyber attack targeting Sony Pictures Entertainment (SPE). In late November, SPE confirmed that it was the victim of a cyber attack that destroyed systems and stole large quantities of personal and commercial data. A group calling itself the “Guardians of Peace” claimed responsibility for the attack and subsequently issued threats against SPE, its employees, and theaters that distribute its movies.

The FBI has determined that the intrusion into SPE’s network consisted of the deployment of destructive malware and the theft of proprietary information as well as employees’ personally identifiable information and confidential communications. The attacks also rendered thousands of SPE’s computers inoperable, forced SPE to take its entire computer network offline, and significantly disrupted the company’s business operations.

After discovering the intrusion into its network, SPE requested the FBI’s assistance. Since then, the FBI has been working closely with the company throughout the investigation. Sony has been a great partner in the investigation, and continues to work closely with the FBI. Sony reported this incident within hours, which is what the FBI hopes all companies will do when facing a cyber attack. Sony’s quick reporting facilitated the investigators’ ability to do their jobs, and ultimately to identify the source of these attacks.

As a result of our investigation, and in close collaboration with other U.S. government departments and agencies, the FBI now has enough information to conclude that the North Korean government is responsible for these actions. While the need to protect sensitive sources and methods precludes us from sharing all of this information, our conclusion is based, in part, on the following:

  • Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.
  • The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.
  • Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.

We are deeply concerned about the destructive nature of this attack on a private sector entity and the ordinary citizens who worked there. Further, North Korea’s attack on SPE reaffirms that cyber threats pose one of the gravest national security dangers to the United States. Though the FBI has seen a wide variety and increasing number of cyber intrusions, the destructive nature of this attack, coupled with its coercive nature, sets it apart. North Korea’s actions were intended to inflict significant harm on a U.S. business and suppress the right of American citizens to express themselves. Such acts of intimidation fall outside the bounds of acceptable state behavior. The FBI takes seriously any attempt—whether through cyber-enabled means, threats of violence, or otherwise—to undermine the economic and social prosperity of our citizens.

The FBI stands ready to assist any U.S. company that is the victim of a destructive cyber attack or breach of confidential business information. Further, the FBI will continue to work closely with multiple departments and agencies as well as with domestic, foreign, and private sector partners who have played a critical role in our ability to trace this and other cyber threats to their source. Working together, the FBI will identify, pursue, and impose costs and consequences on individuals, groups, or nation states who use cyber means to threaten the United States or U.S. interests.

Copyright 2010 Respective Author at Infosec Island
Should I Use “SIEM X” or “MSSP Y”? Thu, 18 Dec 2014 10:25:22 -0600 Lately I’ve been surprised by how some organizations make decisions about their sourcing choices for security monitoring. Specifically, some organizations want to decide between “SIEM Brand X” and “MSSP Brand Y” before they decide on the model – staffed in-house, managed, co-managed, outsourced, etc. While on some level this makes sense (specifically, on a level of “spend $$$ – get a capability,” whether from a vendor tool run by employees/consultants or from a service provider), it still irks me for some reason.

Let’s psychoanalyze this! IMHO, in real life nobody decides between “BART or Kia” or “Uber or BMW” – people think first about “should I buy a car or use public transportation?” and then decide on a vehicle or the most convenient mode of transportation. In one case, your money is used to buy a tool, a piece of dead code that won’t do anything on its own and requires skilled personnel to run. In the other case, you are essentially renting a tool from somebody and paying for their analysts’ time. As a sidenote, I occasionally see a request for something that looks and behaves as BOTH a SIEM and an MSSP, such as a request for a managed SIEM contract. (“If you write an RFP for a car AND for a bus pass as one document, you’d get an RFP for a chauffeured limo, with that price,” as some anonymous but unquestionably wise CSO has said.)

So, to me, deciding whether to own a tool or to rent time from others is The Decision, but which brand of tool or MSSP to procure is secondary.

  1. PICK THE MODEL: SIEM, MSSP, or hybrid (such as staff augmentation, co-managed, or even both SIEM and MSSP)
  2. PICK THE BRAND(S) to shortlist.

Admittedly, some hybrid models are fairly mixed (“MSSP for perimeter, but Tier 3 alert triage in-house; internal network monitoring with a SIEM staffed by consultants, and internal user monitoring by FTEs” is a real example, BTW) and you may not have 100% certainty if going for a hybrid. Still, clarity on the degree of externalization is a must.

Otherwise, IMHO, you end up with a lot of wasted time evaluating choices that simply cannot work for you, for example:

  • If you know you cannot hire, don’t look at SIEM [SIEM needs people!]
  • If you cannot move your data outside the organization, don’t look at MSSPs
  • If you cannot hire AND cannot move data out, go with the “managed SIEM”
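The coarse-grained filter in the bullets above can be written down as a tiny decision function. This is only a sketch of the post's logic (the model names are the post's; the function itself is illustrative, not Gartner guidance):

```python
def monitoring_model(can_hire: bool, can_move_data_out: bool) -> str:
    """Coarse-grained filter: settle the sourcing model before any brand."""
    if can_hire and can_move_data_out:
        return "SIEM or MSSP"   # either model can work; decide on other factors
    if can_hire:
        return "SIEM"           # data stays inside, your staff runs the tool
    if can_move_data_out:
        return "MSSP"           # no staff to run a tool, so rent analyst time
    return "managed SIEM"       # hybrid: consultants run an on-premise tool

print(monitoring_model(can_hire=False, can_move_data_out=False))  # managed SIEM
```

Only after this function returns does it make sense to shortlist brands within the chosen model.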

Therefore, I think it helps to narrow down the options using the coarse-grained model filter and then go sort out the providers/vendors.

Am I wrong here? Can you intelligently choose between a bunch of SIEM vendors, MSSPs and consulting firms doing managed SIEM if you don’t first settle on the model?

P.S. If you call us at Gartner with another “What is better, MSSP X or SIEM Y?” question, we will undoubtedly help you regardless of the above. Still, I think the model for monitoring/management should precede the brand…

This was cross-posted from the Gartner blog.

5 Effective Ways to Raise Privacy Awareness Thu, 18 Dec 2014 10:22:38 -0600 Have you made plans for Data Privacy Day (DPD) yet? What, you’ve never heard of DPD?  You can see more about it here. Or, have you heard about DPD, but you’ve not yet had time to plan for it? Well, I love doing information security and privacy awareness activities and events! I’ve been doing them for 2 ½ decades, and have written about them often, and included a listing of 250 awareness activities in my Managing an Information Security and Privacy Awareness and Training Program book.

Here are five of the ways that I’ve found to be very effective for raising privacy awareness throughout the years.

1)    Wheel of Security and Privacy Fortune: I was responsible for information security and privacy for a large financial company throughout the 1990s. One year we set up a “Wheel of Security and Privacy Fortune” outside the cafeteria for International Computer Security Day. As people entered or left they would spin this huge wheel and answer a question on the topic the clicker-pointer landed on. The questions incorporated our information security and privacy policy requirements and presented them in a way that related to work responsibilities and daily business activities. They were of varying degrees of difficulty, and we gave prizes of various sizes for correct answers, from candy-wrapped mints with a picture of our information security mascot on them all the way up to a gift certificate to the cafeteria for a full meal. This was a great success: the event was well-received, and we were able to establish some metrics, based upon participation and the percentage of correct answers, for how aware our personnel were of the various information security and privacy topics.

2)    Doing an Information Security and Privacy Contest. Several years ago I was responsible for creating and managing the Information Security and Privacy department and supporting activities for a large multi-national financial and healthcare organization. For our annual awareness event, I worked with the lead corporate artist, describing a large number of security and privacy risks common within a business environment. I then asked him to take those risks and visually incorporate them into a poster showing a 3-story building, the side of which was cut away so that you could see all the workers and their work areas inside, as well as the streets, grounds and parking areas around the building. I sent the poster to each business department throughout the worldwide locations (around 130–140 of them). Each department team had a week to document a listing of all the privacy and security risks they found in the poster and send it back to me. I gave a prize to the team that correctly identified the most infractions: a pizza party during lunch for all their team members, recognition in the company magazine, and a photo of the winning team, along with their names and department. There was a fantastic response: approximately 93% of the business departments participated. If you want to see more about this event, and my measurable positive results, you can read about it here, and you can get a kit to do this type of event at your organization here.

3)    Helping Employees Protect Their Own Information. One of my large healthcare insurance clients brings me into their facilities once a quarter, and I give a 30-minute talk on a topic 4 to 5 times throughout the day, so employees can attend at a time that works best for them. I talk about how employees can protect their own personal information in specific situations. For example, one quarter I explained the risks of wireless home networks and how to secure them. Another quarter I talked about common identity theft causes and how to protect against them. At the end of each talk, the information security officer and/or privacy officer speaks for around 5 minutes, pointing out how the actions I described relate to their own information security and privacy policies, and directing employees to the specific related ones. We then leave around 10 minutes for questions. And there are always great questions, related directly to the employees’ own experiences and personal lives. You can do something similar to effectively raise privacy awareness within your organization. Get in touch with me and I can provide you with more information about this type of event.

4)    Regularly Providing Publications that Show Real-life Examples. Personnel love to hear about the information security incidents and privacy breaches that have happened in real life. And there is no shortage of examples, with the almost daily reports of incidents and breaches! Describing the controls and protections that would have prevented those incidents and breaches is not only extremely useful to readers, it also raises their level of awareness. I’ve been providing my Protecting Information Journal to businesses for the past five years, and my subscribers have given me fabulous feedback about how successful it has been in raising their employees’ privacy (and security) awareness, and about how auditors have noted their approval of such awareness publications in audit reports.

5)    Ask Your Governor to Officially Declare DPD for Your State. I just received word that Terry Branstad, Governor of Iowa, has once more agreed, at my request, to release a proclamation for January 28, 2015, to officially be Iowa Data Privacy Day. This will be the sixth year that I’ve successfully gotten the governors of Iowa to make such a proclamation. You can see the official certificate of proclamation for 2014 here. By making the day an official day in your state you can then plan public events, and get widespread media attention, for the need to address privacy by everyone in the public, as well as by all organizations that collect, use, share or otherwise access personal information. Consider asking your governor to make a similar proclamation for your state.

Also, I’m very excited about the activity I’m doing for that day; it will be televised on the Great Day morning show here in Iowa on January 28, 2015. I’ll be sure to write about it and point to the video of the segment when it is available.

For white papers to help keep the awareness levels high for those responsible for information security, see the Dell security site.

This was cross-posted from the Privacy Professor blog.

What Network Security Lessons Can We Learn from the Sony Attack? Wed, 17 Dec 2014 14:41:23 -0600 Hollywood is a place that can be driven mad by star-studded gossip, where the talk of the town is rarely private and where people are accustomed to their secrets not staying secret for very long. Yet, this state of play hasn’t made it any easier for the victims of last month's cyberattack against Sony, carried out by shadowy assailants calling themselves the Guardians of Peace.

As the public knows by now, it seems as though the attackers spared nothing in their initial leak of 27 gigabytes worth of data. They released the type of information that seems to be exposed after seemingly every corporate hack, from the personal information of employees to the company’s classified assets, which in this case even included the script for an upcoming James Bond film.

But that wasn’t all.

They also exposed the kind of information unique to an entertainment giant like Sony – the lurid Hollywood gossip, revelations of celebrity aliases and even off-the-record studio executives’ opinions about some of today's box office smashes.

Sony’s Imperfect Network Security History

So how could this have happened? Although the finger-pointing has been ongoing since the attackers revealed themselves to Sony employees at the end of November, what's clear is that the malware used by the Guardians of Peace was undetectable by antivirus software. And, as is often the case with attacks this broad, human error within Sony – passwords that were both easy to crack and stored in a file directory marked “passwords” – may also have been a factor.

Unfortunately, these aren't new criticisms of the company.

Sony's network security defenses, from poor access control to weak passwords, were so lacking in 2007 that an auditor told the company’s executive director of information security, "If you were a bank, you'd be out of business." Then there was the 2011 hack of Sony's PlayStation Network – an attack that was preceded two weeks earlier by the company laying off two employees who were responsible for network security.

In retrospect, it's easy to construct a seven-year trail of breadcrumbs back to Sony being hacked, and to allege that executives should have known they needed to do more to shield the company from attack. But, as the FBI's Joseph Demarest, assistant director of the agency's cyber division, suggested, the sophistication of the attack proved to be just as much a factor as how porous the company's network security may have been.

He said, "The malware that was used would have slipped or probably gotten past 90 percent of [Internet] defenses that are out there today in private industry and [likely] challenged even state government."

Preventing the Next Great Hack

The massive Sony breach has shown, yet again, just how expeditious and ruthlessly efficient attackers today are. One minute, the network security fortress of a company like Sony is seemingly secure, and the next, documents and correspondence that were intended to be private are splashed across every news outlet. It should be more than enough to give network administrators significant pause, and make them wonder, "If it can happen to Sony, why couldn't it happen to me?"

Fortunately for network administrators, there is no shortage of steps they can take to prevent attackers from breaching their walls, and there are just as many ways to limit the damage in a worst-case scenario where hackers are able to make it inside.

We're talking about a defense-in-depth approach – a multi-layered, redundant strategy that seamlessly weaves together overlapping network security products, like strong VPNs and firewalls, with proven processes, like employee training and encryption protocols, to help network administrators defend against the range of threats looming right on their doorsteps. Additionally, if hackers do get in, layered security technologies can help limit the scope and damage of the attack, making it more difficult for attackers to actually escape with sensitive information.

It's impossible for network administrators to know for sure they have the upper hand against attackers who seek to do them harm – their methods evolve too rapidly. But with a defense-in-depth strategy, network administrators at least know they have fail-safes in place should they become the next target.

This was cross-posted from the VPN HAUS blog.

Grinch Bug Could Be Worse Than Shellshock, Say Experts Wed, 17 Dec 2014 12:38:56 -0600 Researchers have discovered a vulnerability in Linux operating systems, dubbed the Grinch bug, which can be exploited to give malicious hackers root access to a computer system. The flaw resides in the Linux authorization system, which allows privilege escalation through the wheel group.

A new privilege escalation bug similar to Shellshock is giving Linux administrators sleepless nights, just days after Poodle, another deadly bug of 2014, resurfaced. The Grinch vulnerability, affecting all Linux-based operating systems, potentially gives an attacker root access to a system, according to Alert Logic, who announced the bug on Tuesday.

Grinch could be worse than Shellshock, which plagued the tech world earlier in September. Shellshock was a coding mistake in Bash that affected all UNIX-based operating systems, including Linux and Mac OS. Like Shellshock, Grinch potentially gives an attacker root access to a system without a password or encryption keys.

Apparently, the problem lies with the Linux authorization system, which allows for privilege escalation through the wheel group, ultimately granting root access. Basically, wheel is a group of user accounts with special administrative rights on a UNIX system; it controls access to the su command, which allows the elevation of the current user to a superuser.
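To make the wheel mechanics concrete, here is a small sketch that parses /etc/group-style entries and lists the accounts that could typically use su under a pam_wheel policy. The sample data is invented for illustration; on a real system you would read /etc/group itself.

```python
# Audit sketch: find members of the "wheel" group from /etc/group-style data.
# Accounts listed here can typically run `su` where pam_wheel is enforced.

def wheel_members(group_db: str, group: str = "wheel") -> list[str]:
    """Parse colon-delimited group entries and return the member list."""
    for line in group_db.splitlines():
        if not line or line.startswith("#"):
            continue
        name, _password, _gid, members = line.split(":", 3)
        if name == group:
            return [m for m in members.split(",") if m]
    return []

# Invented sample contents; a real audit would use open("/etc/group").read()
sample = "root:x:0:\nwheel:x:10:alice,bob\nusers:x:100:carol"
print(wheel_members(sample))  # ['alice', 'bob']
```

Any account that shows up here unexpectedly is a candidate for the kind of wheel-group tampering described above.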

A hacker could exploit the Grinch vulnerability either by modifying the registered user accounts in the wheel group or by manipulating Policy Kit (Polkit), a toolkit for managing privileged operations on behalf of ordinary users.

    “Polkit can be used by privileged processes to decide if it should execute privileged operations on behalf of the requesting user. For directly executed tools, Polkit provides a setuid-root helper program called ‘pkexec.’ The hooks to ask the user for authorizations are well integrated into text environments and native in all major graphical environments,” notes Alert Logic in a blog post.

Whichever method the attacker uses, the goal is to gain root access to the system. With root access, the attacker has full administrative control and can install or modify programs and access files in any directory. The attacker can also control the system remotely, meaning they could create a self-replicating worm that spreads to other systems in the blink of an eye.

Bash-related vulnerabilities like Grinch are primarily a problem for retailers and e-commerce platforms like Amazon, which tend to favor Unix/Linux-based operating systems. W3Techs estimates that 65% of web servers on the internet use Unix/Linux-based operating systems. Some smartphones running Linux-related operating systems could also be vulnerable to the Grinch bug.

Alert Logic has yet to witness the vulnerability being exploited in the wild, nor is the flaw listed in the Computer Emergency Response Team (CERT) database, according to Stephen Coty, Alert Logic’s director of threat research. However, that does not imply that exploiting the Grinch vulnerability is not practical in real life: the bug is easy to execute and manipulate.


The Linux community has yet to make an official statement about the vulnerability or issue a patch, but since Grinch is a flaw in the Linux kernel architecture, Coty believes the kernel development team is working on a solution. Meanwhile, users can mitigate Grinch by using logging software that flags any unusual behavior in the system.

    “We find that possession of user logs and knowledge of your own environment are the best security content to help you navigate away from a bug like Grinch. Know how your Linux administrator is installing packages and managing updates—do they use Yum to install packages? Firing on any of the newer methods like PKCon will be an immediate trigger of the exploit in use,” continues the post.

More importantly, understanding how your Linux systems work will go a long way toward tackling a bug like Grinch.

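A minimal, hypothetical version of the log check Alert Logic describes might look like the sketch below: flag any invocation of a package front end (such as pkcon) that falls outside the tools your administrators normally use. The log format and tool lists here are assumptions for illustration, not Alert Logic's actual detection content.

```python
import re

EXPECTED_TOOLS = {"yum"}                          # assumption: admins install with yum
WATCHED_TOOLS = {"yum", "dnf", "apt-get", "pkcon"}

def suspicious_installs(log_lines):
    """Return log lines invoking a watched package tool outside the expected set."""
    flagged = []
    for line in log_lines:
        # tokenize on everything except word chars and hyphens ("apt-get" stays whole)
        tokens = set(re.split(r"[^\w-]+", line.lower()))
        if (tokens & WATCHED_TOOLS) - EXPECTED_TOOLS:
            flagged.append(line)
    return flagged

# Invented sample log lines for illustration
logs = [
    "Dec 17 10:01 host1 sudo: admin : COMMAND=/usr/bin/yum install httpd",
    "Dec 17 10:07 host1 pkexec: user42 ran /usr/bin/pkcon install-local evil.rpm",
]
print(suspicious_installs(logs))  # flags only the pkcon line
```

In an environment where packages are only ever installed via yum, a pkcon invocation like the second line would be exactly the "immediate trigger" the quote above describes.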

Written by: Ali Qamar, Founder/Chief Editor at

This was cross-posted from the Security Affairs blog.

Top 10 Phishing Attacks of 2014 Tue, 16 Dec 2014 10:47:12 -0600 By Ronnie Tokazowski and Shyaam Sundhar

With December upon us and 2014 almost in the books, it’s a perfect time to take a look back at the year that was, from a phishing standpoint of course. If you’ve been following this blog, you know that we are constantly analyzing phishing emails received and reported to us by PhishMe employees. What was the most interesting phishing trend we observed in 2014? While attackers are loading up their phishing emails with new malware all the time, the majority of their phishing emails use stale, recycled content.


Given this trend, a list of the best phishing emails of 2014 may not sound like a riveting exercise, but just because they reused content doesn’t mean we didn’t receive a number of interesting phishing attacks:

10. Fax notice phishing
Fax machines may seem like something you only see on VH1’s “I Love the 90s,” but fax notices are a popular theme for phishing emails. Many of the attacks discussed in this post used fax-themed phishing emails; we recently received fax-themed attacks that delivered updated versions of Dyre, as well as an attack featuring the Upatre malware discussed in this whitepaper. In the case of the Upatre Trojan downloader, the phishing content was the same as any generic eFax phish, but the technical methods behind the malware were cutting-edge.

9. .NET Keylogger
This attack started with a standard banking-themed phish with a .zip attachment. The malware turned out to be a .NET keylogger that had the capability to scrape passwords stored in web browsers and other forms of media. Pretty deadly.

8. Message from attorney
Earlier this Spring we received a phishing email purporting to be from a neighbor who was sending a .zip file containing sensitive information from the recipient’s attorney. Why would your neighbor email you a .zip file from an attorney? It’s a valid question, and an important one to ask, because the .zip file contained a malicious executable.

7. Ransomware phishing
Back in May, we received a round of phishing that used fake MAILER-DAEMON email delivery failure notices to trick recipients into running an executable that installed a variant of Cryptolocker. A few weeks later, we received a fax-themed phish that led recipients to Cryptowall. Upon examining the bitcoin wallets of the attackers, we found they had collected over $130k in ransom payments.

6. ADP themed email with PDF exploit
Since they allow the attacker to project a sense of authority and stir up emotions such as urgency, fear, and greed, payroll-themed phishing emails are extremely common. What was unique about this ADP phish? It contained a PDF exploit that injected shellcode into Adobe Reader. To complicate analysis, the attackers used several layers of zlib compression and difficult-to-track variable names.

5. IRS data-entry phish
Death, taxes, and phishing emails that spoof the IRS. Spoofing our nation’s tax collection agency is a tried and true tactic, and this phishing email from August played on the recipient’s excitement about receiving a tax refund by linking to a page where the recipient could specify payment information for the refund, provided he/she entered login credentials. After performing OSINT analysis on the phishing page, we found the same text had been used way back in 2006.

4. Slava Ukraini phish
Back in July, a new strain of Dyre appeared, packed as a zip file containing a screensaver file. The malware was interesting, but the phishing email? It was a simple fax notice, sent to one of our senior executives here at PhishMe.

3. Compromised .edu domain serving ZeuS
Near the end of October, we received a pretty ordinary phishing email with a .zip attachment supposedly containing information about a payment. The attachment contained a form of Zeus. Why does it make the list? The attackers sent the email from a compromised .edu domain. The trusted nature of an educational institution’s domain, and the generous amount of bandwidth those domains usually have provide attackers with an appealing platform for delivering malware.

2. Dropbox phishing
The rise of third-party cloud services like Dropbox has provided attackers with an interesting new method of delivering nasty stuff through your network. In a round of emails last June that served as the precursor to Dyre, we received phishing emails that linked to a supposed invoice on Dropbox. The Dropbox link itself was legitimate, only it led to a .zip file containing a .scr, not an invoice. Dropbox has been quick to shut down this type of abuse, but it’s proven to be a great method for attackers to get past spam filters. Dropbox use is so pervasive that most organizations won’t block its links. A few weeks later we would see Dropbox links abused in targeted attacks against the Taiwanese government.

1. Dyre malware email
The most notorious phishing email of 2014 seemed innocent enough at first glance. We actually received two emails containing the then-unknown malware, both pointing to links from a third-party file sharing service, Cubby. The content of the emails themselves was bland: one simply directed the recipient to a link to an invoice, while the other was a bit more extensive, directing the recipient to a link to learn more about a failed tax payment. Both of these led to the now notorious Dyre malware, a remote access Trojan (RAT) that has targeted banking information and customer data. Dyre’s impact has been widespread enough to catch the attention of US-CERT.

If we learned only one thing about phishing in 2014, it should be that phishing attackers repeat themselves. This can prove useful in defending against phishing in the future. While the security industry has traditionally focused on bad IP addresses and malware when it comes to phishing, we ought to focus on tactics, techniques, and procedures. Focusing on email content, headers, and URLs to recognize patterns and take preventive action will add another layer of phishing defense.
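As a toy illustration of that pattern-focused approach, the sketch below scores a raw message against a few of the recycled lures on this list: fax/invoice/refund wording in the subject, a Reply-To domain that doesn't match the From domain, and risky attachment names. The indicators and sample message are illustrative assumptions, not a production ruleset.

```python
import email
import re

# Recycled lure themes seen throughout 2014 (fax notices, invoices, tax refunds)
LURE_WORDS = {"fax", "invoice", "payment", "refund"}

def phish_indicators(raw_message: str) -> list[str]:
    """Return simple content/header indicators found in a raw email message."""
    msg = email.message_from_string(raw_message)
    hits = []
    subject_words = set(re.findall(r"[a-z]+", (msg.get("Subject") or "").lower()))
    if LURE_WORDS & subject_words:
        hits.append("lure-keyword-in-subject")
    frm, reply_to = msg.get("From", ""), msg.get("Reply-To", "")
    if reply_to and reply_to.split("@")[-1] != frm.split("@")[-1]:
        hits.append("reply-to-domain-mismatch")
    for part in msg.walk():  # flag archive/executable attachment names
        name = (part.get_filename() or "").lower()
        if name.endswith((".zip", ".scr", ".exe")):
            hits.append("risky-attachment:" + name)
    return hits

# Invented example message for illustration
raw = ("From: billing@corp.example\n"
       "Reply-To: drop@evil.example\n"
       "Subject: Your fax invoice\n\nSee attached.")
print(phish_indicators(raw))  # ['lure-keyword-in-subject', 'reply-to-domain-mismatch']
```

None of these checks depends on an IP blacklist or a malware signature; they key on the content patterns attackers kept reusing all year.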

This was cross-posted from the PhishMe blog.

Debunking The Biggest Cyber Security Myths for Businesses Tue, 16 Dec 2014 10:34:35 -0600 A glimpse at the world of cyber security can be a frightening one. Stories revolving around security breaches hitting major companies, like Target and Home Depot, can fill any business executive with trepidation.

As a result, companies both large and small can spend considerable sums improving their security measures, trying to prevent the kinds of attacks that can set a company back months or years, if not ruin it completely.

With so much attention being paid to security, there’s a lot of information floating around—some of it not in the least bit true. If a company wants to enhance its IT security, it pays to separate the facts from the fiction.

Here are just a few of the biggest cyber security myths that businesses still hold to:

Myth 1: Hackers are the only threat you need to worry about.

Fact: While small businesses definitely shouldn’t downplay the impact hackers could have on their operations, if all their time, resources, and energy are spent focusing on hackers, they’ll leave themselves vulnerable to other sources of cyber danger.

Recently, it’s been revealed that many governments routinely monitor private citizens and businesses, essentially collecting data that might be considered sensitive. Foreign governments may also spy on other countries, sending attackers of their own to sabotage companies and institutions.

Some of the threats may even be internal in nature, as careless employees may unwittingly introduce various security threats through the use of their smartphones at work. This danger has become more pronounced with the widespread adoption of BYOD in the workplace.

Myth 2: All security breaches can be prevented.

Fact: After stories of yet another data breach arise, it’s easy to look at what could have been done to prevent the breach from happening in the first place. While it is certainly worth it for a business to improve security, holding to the idea that every attack can be prevented falls short of reality.

No matter how extensive a business’ network security is, attacks will get through at some point, and the question isn’t if but when. In many cases, it likely has already happened without an organization even knowing it.

The best a business can do is make it as difficult as they can to infiltrate the most important systems and to develop an effective plan for responding and recovering after an attack happens.

Myth 3: Small businesses don’t make a worthwhile target.

Fact: With so many major corporations out there, why would any hacker want to focus on a small business? The general thinking is that since smaller companies have fewer resources and less money, they’ll be ignored as attackers go after the big businesses.

The truth is that every company is a potential target no matter their size. In fact, small businesses may be an even more tempting target, since hackers are aware they don’t have the same resources to fight back. At the same time, the employees of small businesses may inadvertently make the company more vulnerable to outside attacks. It’s best not to assume hackers will simply pass the company by.

Myth 4: Predictive systems are guaranteed to discover the next attack.

Fact: Many companies are turning to big data and machine learning in the form of predictive systems as a way to boost security and figure out when and how the next cyber attack will happen.

While this strategy can certainly do a lot in improving a company’s security efforts, it isn’t foolproof. Predictive systems have to rely on past data to come up with their conclusions, meaning it’s more difficult to predict a new style of attack.

Hackers are also experienced at running deceptive strategies of their own intended to fool systems and get around security measures. Predictive systems don’t change this in any way.
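The reliance on past data is easy to see in a toy example (everything here is made up for illustration): a naive statistical baseline happily flags a noisy brute-force attempt, but a novel "low and slow" attack that stays within the historical norm sails right past it.

```python
import statistics

def train_baseline(history):
    """Fit a naive baseline: mean and standard deviation of past counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations above the baseline mean."""
    mean, stdev = baseline
    return value > mean + k * stdev

# Hypothetical hourly failed-login counts observed in the past
history = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
baseline = train_baseline(history)

print(is_anomalous(500, baseline))  # noisy brute force: flagged
print(is_anomalous(6, baseline))    # novel attack inside the normal range: missed
```

A real predictive system is far more sophisticated than a three-sigma threshold, but the underlying limitation is the same: the model can only be as good as the history it was trained on.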

Myth 5: Security is only the IT department’s responsibility.

Fact: While it is true that IT workers will likely handle the bulk of the duties associated with cyber security, that doesn’t mean the rest of the company is off the hook. As mentioned earlier, employee behavior may increase security risks.

Addressing that issue requires changes in company culture and routine employee training. It’s also management’s responsibility to make sure other companies they work with have adequate security. In other words, it’s a company-wide responsibility to deal with security challenges, not just IT.

With these facts in mind, small businesses will be able to handle the ever-evolving nature of security threats out there. While attacks can and will still happen, organizations will be in a better position to respond and minimize the damage.

In today’s environment, a quick response and recovery can mean the difference between continued growth and disaster.

This was cross-posted from Tripwire's The State of Security blog.

About the Author: Rick Delgado is a freelance tech writer and commentator. He enjoys writing about new technologies and trends, and how they can help us. Rick occasionally writes for several tech companies and industry publications.


Copyright 2010 Respective Author at Infosec Island]]>
How To Exit an MSSP Relationship? Mon, 15 Dec 2014 11:15:28 -0600 Let me touch a painful question: when to leave your managed security services provider? While we have the research on cloud exit criteria (see “Devising a Cloud Exit Strategy: Proper Planning Prevents Poor Performance”), wouldn’t it be nice to have a clear, agreed-upon list of factors for when to leave your MSSP?

For example, our cloud exit document has such gems as “change of internal leadership, strategy or corporate direction”, “lack of support”, “repeated or prolonged outages” and even “data, security or privacy breach” – do you think these apply to MSSP relationships as well?

And then there is that elephant in the room…



… FAILURE TO DETECT AN INTRUSION. Or, an extra-idiotic version of the same: failure to detect a basic, noisy pentest that uses commodity tools and no pretenses of stealth?

[BTW, this is only an MSSP failure if the MSSP was given access to necessary log data; if not, it is a client failure]

Not enough? How about systematically failing to detect attacks before the in-house team (that… ahem …outsourced attack detection to said MSSP) actually sees them?

Still not enough? How about gross failures on system change SLA (e.g. days instead of hours), failure to detect attacks, failure to refine rules leading to excessive alerting and failure to keep client’s regulated data safe?

In any case, when signing a contract, think “how can you terminate?” When onboarding a provider, think “how can you off-board?” A detailed departure plan is a must for any provider relationship, but the MSSP case also has unique twists…

Any thoughts? Have you left your MSSP in the dust over these or other reasons? Have you switched providers or brought the processes in-house? What would it take for you to leave?

This was cross-posted from the Gartner blog.

Copyright 2010 Respective Author at Infosec Island]]>
The 3 Necessary Elements for Effective Information Security Management Mon, 15 Dec 2014 11:11:40 -0600 Seeing all these really bad information security incidents and privacy breaches, often daily, is so disappointing.  Let’s consider these four in particular.

  1. The Sony hack that seems to continue to get worse as more details are reported.
  2. An ER nurse using the credit cards of patients.
  3. Breaches of Midwest Women’s Healthcare patient records due to poor disposal practices at the Research Hospital.
  4. TD Bank’s outsourced vendor losing two backup tapes containing data about 260,000 of their customers.

And the list could continue for pages.

These incidents, and most others, probably could have been prevented if an effective information security and privacy management program existed that was built around three primary core elements:

  • Risk management
  • Documented information security and privacy policies and procedures
  • Education including regular training and ongoing awareness activities and communications

Risk Management

In each of these cases, a risk assessment performed as part of a wider risk management program would have identified significant risks. For each of the four examples above, here is just one risk that should have been identified, and could have been mitigated, prior to the breach:

  1. Sony would have identified that they had vulnerabilities where remote access occurred into their networks and could have established stronger controls in addition to implementing intrusion detection and prevention systems.
  2. The ER could have implemented digital monitoring for staff in addition to spot audits and background checks to help identify when a staff member was stealing from a patient.
  3. A risk assessment of Research Hospital facility practices would have identified poor disposal of print records.
  4. If TD Bank had established a vendor security and privacy program oversight management program it could have caught any lax practices in the vendor.

Policies and Procedures

In each of these cases, documented policies and procedures would have given all workers a reference for what was expected of them in effectively and consistently protecting information during the course of normal work activities throughout the enterprise, and would have established the requirements and responsibilities that workers need to know. For each of the four examples above, here is just one policy that could have mitigated the risk prior to the breach:

  1. Sony could have established documented policies and supporting procedures to NOT allow clear text user IDs and passwords to be stored in digital files. (Why the heck were they doing this horrible high-risk action!?)
  2. The ER could have implemented policies to secure all patient valuables within in-room lockers that staff could not access.
  3. Research Hospital could have had policies and procedures for finely shredding all documents to be disposed that contained confidential information.
  4. TD Bank could have had a policy requiring all backup tapes to be encrypted prior to release to the storage vendor.

Education

  1. Sony should have provided information security and privacy training to all personnel, and sent regular and frequent reminders to all personnel to protect all types of mission-critical and valuable intellectual property to keep it from being inappropriately released.
  2. The ER should have provided information security and privacy training to all personnel, and sent regular and frequent reminders to all personnel to protect patient information, to be aware of what others are doing with patient possessions, and to report suspicious activities.
  3. Research Hospital should have provided secure disposal training to all personnel who dispose of information in any form, and sent regular and frequent reminders to all personnel to completely destroy any type of media with sensitive information prior to throwing it away.
  4. TD Bank should have ensured their vendors and other outsourced entities provided information security and privacy training to all their personnel, and sent regular and frequent reminders about how to secure the information that has been entrusted to them by their clients.

Bottom line for organizations of all sizes…

In addition to many really huge organizations, I’ve worked with hundreds of small to midsize businesses over the years. I’ve seen a large portion of the small to midsize organizations completely omitting not just one, but two and in many situations all three of these core elements.

Every type of organization, of all sizes, needs to build their information security and privacy program around the three core elements of:

1) Risk management;

2) Policies and procedures; and

3) Education.

If they don’t, they are going to leave themselves vulnerable to potential significant and possibly business-killing information security incidents and privacy breaches.

This was cross-posted from the Privacy Professor blog.

Copyright 2010 Respective Author at Infosec Island]]>
Webcast: Using Global Intelligence Data to Prevent Online Fraud and Cybercrime Fri, 12 Dec 2014 05:17:35 -0600 Please join ThreatMetrix and SecurityWeek on Thursday, Dec. 18th, 2014 at 1PM ET for a Live Webcast.

Fraud and other forms of cybercrime continue to plague all companies with an online presence, with sophisticated cybercriminals launching attacks on logins, payments, and account origination. Security and fraud prevention professionals are challenged to keep pace with evolving trends and protect against attacks that threaten customers, employees, revenues and data – all without impeding user experience.

Webcast Sponsored by ThreatMetrix

Knowing the latest attack trends can help focus your detection and prevention resources to reduce risk and losses.

Attend this webinar to learn how to leverage findings in The ThreatMetrix Cybercrime Report, based on actual cybercrime attacks detected during real-time analysis and interdiction of fraudulent account logins, online payments and registrations. This report gathers data from over 850 million monthly transactions, including findings from this year’s Black Friday – Cyber Monday weekend.

Topics to be discussed include:

• Attacks by transaction type and industry

• Trends in top attack methods

• Analysis of mobile vs. desktop attacks

• Why global shared intelligence is essential

Register Now

Copyright 2010 Respective Author at Infosec Island]]>
Depends Thu, 11 Dec 2014 13:13:06 -0600 I've always had a problem with compliance, for a very simple reason: compliance is generally a binary state, whereas the real world is not. Nobody wants to hear that you're a "little bit compliant," and yet that's what most of us are.

Compliance surveys generally contain questions like this:

Q. Do you use full disk encryption?

A. Well, that depends. Some of our people are using full disk encryption on their laptops. They probably have that password synched to their Windows password, so I'm not sure how much good encryption would do if the laptops were stolen. We talked about doing full disk encryption on our servers. I think some of the newest ones have it. The rest will be replaced during the next hardware refresh, which I think is scheduled for 2016.

Q. So is that a yes, or a no?

A. Fine, I'll just say yes.

Or they might ask:

Q. Do you have a process for disabling user access?

A. It depends. We have a process written down in this here filing cabinet, but we don't know how many of our admins are using it. Then again, it could be a pretty lame process, but if you're an auditor asking whether we have one, the answer is yes.

Or even:

Q. Do you have a web application firewall?

A. No, I don't think so. ... Oh, we do? That's news to me. Okay, somewhere we apparently have a WAF. Wait, it's Palo Alto? Okay, whatever.

Q. Do you test all your applications for vulnerabilities?

A. That depends on what your definitions are of "test," "applications," and "vulnerabilities." Do we test the applications? Yes, using different methods. Does Nessus count? Do we test for all the vulnerabilities? Probably not. How often do we test them? Well, the ones in development get tested before release, unless it's an emergency fix, in which case we don't test it. The ones not in development -- that we know about -- might get tested once every three years. So I'd give that a definite yes.

The state of compliance is both murky and dynamic: anything you say you're doing right now might change next week. Can you get away with percentages of compliance? Yes, if you have something to count: "83% of our servers are compliant with the patch level requirements." But for all the rest, you have to decide what the definition of "is" is.

Compliance assessments are really only as good as the assessor and the staff they're working with, along with the ability to measure objectively, not just answer questions. And I wouldn't put too much faith in surveys, because whoever is answering them will be motivated to put the best possible spin on the binary answer. It's easier to say "Yes" with your fingers crossed behind your back, or with a secret caveat, than to have the word "No" out where someone can see it.

In fact, your compliance question could be "Bro, do you even?" and it would probably be as useful.

This was cross-posted from the Idoneous Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Security Backdoors are Bad News—But Some Lawmakers Are Taking Action to Close Them Thu, 11 Dec 2014 13:09:52 -0600 As many privacy advocates have pointed out recently, it looks like some people in the federal government are intent on reviving the failed Crypto Wars of the 90s. And despite recent assurances, the National Institute of Standards and Technology (NIST) still hasn’t done enough to address NSA’s involvement in the creation of encryption standards. Fortunately, some lawmakers are taking security seriously.

You may remember that back in June, the House of Representatives voted overwhelmingly  (293-123) to approve the Massie-Lofgren amendment to the 2015 Department of Defense Appropriations bill, which would have defunded the NSA’s attempts to build security backdoors into products and services. Although the amendment may have been stripped from the final appropriations bill, all’s not lost. On Thursday, Senator Ron Wyden introduced some of the same language from the amendment as the Secure Data Act of 2014 [pdf].

The Secure Data Act starts to address the problem of backdoors by prohibiting any agency from “mandate[ing] that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any agency.” Representative Lofgren has introduced a companion bill in the House, co-sponsored by 4 Republicans and 5 Democrats.

The legislation isn’t comprehensive, of course. As some have pointed out, it only prohibits agencies from requiring a company to build a backdoor. The NSA can still do its best to convince companies to do so voluntarily. And sometimes, the NSA’s “best convincing” is a $10 million contract with a security firm like RSA.

The legislation also doesn’t change the Communications Assistance for Law Enforcement Act (CALEA). CALEA, passed in 1994, is a law that forced telephone companies to redesign their network architectures to make it easier for law enforcement to wiretap telephone calls. In 2006, the D.C. Circuit upheld the FCC's reinterpretation of CALEA to also include facilities-based broadband Internet access and VoIP service, although it doesn't apply to cell phone manufacturers.

That being said, this legislation is a good thing. First and foremost, it’s important to remind the incoming (and overwhelmingly Republican) Congress that NSA spying isn’t a partisan issue. The bipartisan Massie-Lofgren amendment garnered votes from Republicans, Democrats, and Independents. And like the Massie-Lofgren amendment, Democrats and Republicans are already supporting this legislation. While it’s not likely that Congress will touch the Secure Data Act this term, by introducing this legislation Senator Wyden and Representative Lofgren have made it clear that they will continue to push for privacy, civil liberties—and strong security.

This was cross-posted from the EFF's DeepLinks blog.Originally posted Dec. 9. Copyright 2010 Respective Author at Infosec Island]]>
POODLE Redux: Now Affecting Some TLS Implementations Wed, 10 Dec 2014 12:27:52 -0600 By: Ken Westin

The POODLE vulnerability (CVE-2014-3566 and CVE-2014-8730) that we saw in October affecting SSL v3, has been found to also be present in some implementations of TLS.

Although vendors of tools that were vulnerable to the flaw quickly fixed their systems to rely on TLS instead of SSL v3, a problem still exists: TLS padding is a subset of SSLv3’s, so decoding functions written for SSLv3 can be used with TLS as well.

This introduces a vulnerability in TLS allowing a POODLE type of attack to be successful as the same padding issues would be present in TLS connections if the same decoding functions are in use.

CVE-2014-8730 has been created for tracking the vulnerability, but little information has been made available yet. F5, which was notified that its products were affected by the vulnerability, posted additional information and remediation guidance on its website.

Adam Langley, who discovered the vulnerability, posted on his blog regarding the issue:

This seems like a good moment to reiterate that everything less than TLS 1.2 with an AEAD cipher suite is cryptographically broken. An IETF draft to prohibit RC4 is in Last Call at the moment but it would be wrong to believe that RC4 is uniquely bad. While RC4 is fundamentally broken and no implementation can save it, attacks against MtE-CBC ciphers have repeatedly been shown to be far more practical. Thankfully, TLS 1.2 support is about to hit 50% at the time of writing.

Detection of this flaw requires connecting to the server with a client modified to send unexpected pad data.  Servers which properly implement the specification will report an error while vulnerable systems will not notice the improper padding.  It should be noted that while the TLSv1.0 specifications do not enforce the verification of pad data, many implementations do it anyway meaning that they would not be affected by this attack.
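The padding difference at the heart of the attack can be shown with a simplified sketch (illustrative only, not actual library code): TLS requires every padding byte to equal the pad-length byte, while SSLv3 only defines the final length byte, so an SSLv3-style decoding function accepts a tampered record that a strict TLS implementation would reject.

```python
def tls_padding_valid(record):
    """Strict TLS check: every padding byte must equal the pad length."""
    pad = record[-1]
    if pad + 1 > len(record):
        return False
    return all(b == pad for b in record[-(pad + 1):-1])

def sslv3_padding_valid(record):
    """Lax SSLv3-style check: only the final pad-length byte is examined."""
    return record[-1] + 1 <= len(record)

# Well-formed record tail: payload, three 0x03 pad bytes, length byte 0x03
good = bytes([0x41, 0x42, 0x03, 0x03, 0x03, 0x03])
# Tampered pad bytes, but the final length byte still looks plausible
bad = bytes([0x41, 0x42, 0xDE, 0xAD, 0xBE, 0x03])

print(tls_padding_valid(good), sslv3_padding_valid(good))  # True True
print(tls_padding_valid(bad), sslv3_padding_valid(bad))    # False True
```

An implementation that reuses the lax check for TLS connections is exactly the kind of system the modified-client probe described above would identify.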

This was cross-posted from Tripwire's The State of Security blog.


Copyright 2010 Respective Author at Infosec Island]]>
Hackers Leak Scripts, Celebrity Phones and Aliases at Sony Pictures Entertainment Wed, 10 Dec 2014 12:14:49 -0600 GOP released a new archive of Sony Pictures Entertainment confidential data including private information of employees, celebrity phone numbers, film scripts and many more.

The Sony Pictures data breach is becoming a never-ending story: the GOP has been leaking company data and much more since the attack, while security firms provide the details of their analysis of the wiper malware used in the attack. The damage to the company is significant, as is the impact on its employees and on all those who have maintained a working relationship with it.

After the online disclosure of Sony Pictures Entertainment’s sensitive data, the hackers started using it to threaten company staff and to prepare further attacks. Just today, experts at Kaspersky Lab detected a new strain of Destover malware digitally signed with Sony certificates stolen in the cyber attacks, a technique that could allow the group to hit many other targets while evading detection by defense systems that have not been updated.

The Guardians of Peace (GOP), who took responsibility for the massive attack against Sony Pictures Entertainment, have released a new batch of strictly confidential data including private information about its employees, celebrity phone numbers and their travel aliases, upcoming film scripts, film budgets and much more.

Earlier this week, the GOP released online several hundred gigabytes of data that, according to The Hacker News, includes:

  • Movies’ Financial Data – a large file of detailed financial data, including revenues and budget costs, for all of Sony’s recent films.
  • Unreleased Movie Scripts – unreleased scripts for upcoming movies, including The Wedding Ringer with Kevin Hart (2015), Paul Blart Mall Cop 2 (2015), the animated film Pixels (2015) and the animated film Sausage Party with Seth Rogen and Kristen Wiig.
  • Celebrities’ Personal Data – a huge dump of celebrities’ personal data, including aliases formerly used by famous actors, which is really embarrassing for the company. Brad Pitt’s phone number is listed, though it may belong to his assistant. Seth Rogen and Emma Stone’s personal email addresses, as well as Jesse Eisenberg’s home address, have also been leaked among a lot of emails and phone numbers for lesser-known celebrities.

  • Release Schedules – a number of files detailing confidential movie release schedules, both for Sony Pictures Entertainment and Sony-owned Columbia Pictures.

  • Invoices – a folder contains hundreds of invoices related to various movie projects, including Skyfall, Captain Phillips and Smurfs 2.
  • Bank Accounts – there are files which contain dozens of bank accounts, both personal and belonging to Sony corporation.
  • Sony’s Promotional Activities – bills detailing Sony Pictures Entertainment’s expenditure when promoting movies, including Tom Hanks’ and Naomie Harris’ hair styling bills and the Skyfall London premiere in 2012, along with bills for gifts Sony distributed.

The situation is becoming embarrassing, and the leaked information is damaging the company’s reputation. According to a post published by the Reuters agency, the economic impact of the attack against Sony Pictures Entertainment could be greater than $100 million, though experts who have analyzed the economic impact of previous attacks told Reuters the cost would likely be less than the $171 million Sony estimated when its PlayStation Network was hacked in 2011.

The attack, believed to be the worst of its type on a company on U.S. soil, also hits Sony’s reputation for a perceived failure to safeguard information, said Jim Lewis, senior fellow at the Center for Strategic and International Studies. “Usually, people get over it, but it does have a short-term effect,” said Lewis, who estimated costs for Sony could stretch to $100 million, Reuters reports.

The estimate is derived from the cost of data breaches that occurred in the past. That cost includes investigation activities, loss of trade secrets, computer repair or replacement, and steps to prevent a future attack. But we also have to consider damage to the company’s reputation and overall lost productivity while operations were interrupted.

Sony Pictures Entertainment has declined to estimate costs, confirming only that the company is still assessing the impact.

This was cross-posted from the Security Affairs blog.

Copyright 2010 Respective Author at Infosec Island]]>
Significant Change And Periodic Tue, 09 Dec 2014 11:34:36 -0600 No words or phrases in the PCI standards elicit more comments and questions than “significant change”, “periodic” and “periodically”.

So what do these mean?  Whatever you want to define them to mean, as it is up to each organization to come up with formal definitions.  Those definitions should be based on your organization’s risk assessment.

Here are some suggestions as to appropriate definitions.

Significant Change

Significant changes are those changes that could impact or affect the security of your cardholder data environment (CDE).  Examples of significant changes are:

  • Changing devices such as firewalls, routers, switches and servers. Going from Cisco to Checkpoint firewalls, for example, is typically understood as a significant change.  However, people always question this concept, particularly when going from a Cisco ASA 5505 firewall to an ASA 5520 or moving a virtual machine from one cluster to another.  The problem is that these moves can potentially introduce new vulnerabilities, network paths or even errors that would go unnoticed until the next vulnerability scan and penetration test.  And with your luck, those tests would be months away, not just a few days.
  • Changes to payment applications. This should be obvious, but I cannot tell you how many people argue the point on changes to applications.  Yet, application changes are possibly the biggest changes that can affect security.  Not only should applications be vulnerability scanned and penetration tested before being put into production, but code review and/or automated code scanning should be performed as well.  If any vulnerabilities are found, they must be corrected or mitigated before the application goes into production.
  • Upgrades or changes in operating systems. Upgrades and changes in operating systems should also be obvious as significant changes.  However, I have run into network and system administrators that want to split hairs over the impact of OS changes.  In my opinion, going from one version of an OS to another is just as significant as changing OSes.
  • Patching of operating systems or applications. While I do not think that patching necessarily results in a significant change, there are some patches, such as updates to critical services like .NET or the IP stack, that should be considered significant.  If you are properly working through requirement 6.1 (6.2 in PCI DSS v2) for patch management, you should take this into consideration and indicate whether vulnerability scanning and penetration testing are required after any particular patch cycle because of the nature of the patches being applied.
  • Network changes. Any time you change the network you should consider that a significant change regardless of how “minor” the change might appear.  Networks can be like puzzles and the movement of devices or wires can result in unintended paths being opened as a result.

I have a lot of clients that have an indicator in their change management system or enter “Significant Change” in the change comments for flagging significant changes.  That way they can try and coordinate significant changes with their scheduled vulnerability scanning and penetration testing.  It does not always work out, but they are trying to make an attempt at minimizing the number of special scans and tests that are performed.  But such an approach also has a side benefit when it comes time to do their PCI assessment as they can call up all significant changes and those can be tied to the vulnerability scans and penetration tests.

I would see this list as the bare minimum of significant changes.  As I stated earlier, it is up to your organization to develop your own definition of what constitutes a significant change.

Periodic and Periodically

Branden Williams was on a Podcast shortly after the PCI DSS v3 was released and made a comment that he felt that the number of occurrences for the words “periodic” or “periodically” were higher in the new version of the PCI DSS than in the previous version.  That got me thinking so I went and checked it out.  Based on my analysis, these words occur a total of 20 times in the PCI DSS v3 with 17 of those occurrences in the requirements/tests.  That is a 150% total increase over v2 and an increase of 113% in the requirements/tests.

First off, just to shatter some people’s perception of the word, “periodic” does not equate to “annual”.  Yes, there may be instances where an activity can occur annually and still meet PCI DSS compliance.  But that is likely a rare occurrence for all but the smallest organizations and is definitely not how the Council has defined it.

The Council uses the words “periodic” and “periodically” to reflect that an organization should be following the results of their risk assessment to determine how often or “periodically” they should perform a certain activity.  For some organizations, that might happen to work out to be annually.  But for most organizations it will work out to be something more often than annually.

So what requirements specify a periodic time period?  Here are some of the more notable occurrences.

  • 5.1.2 For systems considered to be not commonly affected by malicious software, perform periodic evaluations to identify and evaluate evolving malware threats in order to confirm whether such systems continue to not require anti-virus software.

Typically this would be done annually, but forensic analysis of breaches has indicated that it needs to be done more often, particularly with Linux and other Unix derivatives. Based on the threats, semi-annual or even quarterly reviews may be needed for systems you believe do not warrant an anti-virus solution.

  • 5.2 Ensure that all anti-virus mechanisms are maintained as follows: Are kept current, Perform periodic scans, Generate audit logs which are retained per PCI DSS Requirement 10.7.

Periodic scanning is always an issue with servers but, surprisingly, even more so with workstations. In my opinion, scans for viruses and malware should be done at least weekly.  This might need to be done daily if the systems are particularly at risk, such as in call centers where the workstations may go to the Internet to access competitor sales offerings.

  • 8.2.4.b Additional testing procedure for service providers: Review internal processes and customer/user documentation to verify that: Non-consumer user passwords are required to change periodically; and Non-consumer users are given guidance as to when, and under what circumstances, passwords must change.

This requirement pairs with 8.6.2, which requires service providers with remote access to customers’ systems not to use the same credentials for each customer. A number of recent breaches have pointed out the issues such a practice can lead to.  Not only are different credentials needed, but the passwords for those credentials need to change periodically, typically every 90 days.  This will likely spur the sales of enterprise credential vaults and similar solutions in the service provider ranks.

But it is not just service providers’ credentials; it is also their customers’ credentials.  Service providers need to advise their customers to change their passwords periodically as well, at 90-day intervals at a minimum.
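A 90-day rotation check like this is trivial to automate; the sketch below (the credential names and dates are hypothetical) flags per-customer credentials that are overdue for a change:

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)

def overdue_credentials(last_changed, today):
    """Return credential names whose password is older than the 90-day maximum."""
    return sorted(name for name, changed in last_changed.items()
                  if today - changed > MAX_AGE)

# Hypothetical per-customer service-provider credentials and last-change dates
last_changed = {
    "customer-a-admin": date(2014, 11, 1),
    "customer-b-admin": date(2014, 6, 15),
    "customer-c-admin": date(2014, 9, 1),
}

print(overdue_credentials(last_changed, date(2014, 12, 15)))
```

A credential vault does this, and the rotation itself, automatically, which is why such products pair naturally with this requirement.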

  • 9.7 Obtain and examine the policy for controlling storage and maintenance of all media and verify that the policy requires periodic media inventories.

For this requirement, the PCI DSS already provides a required timeframe of at least annually.

  • 9.8 Examine the periodic media destruction policy and verify that it covers all media and defines requirements for the following:

Periodic here typically means quarterly or even monthly if you have the volume of media to be destroyed. The key though is to secure the media until it is destroyed.

  • 9.9 Examine documented policies and procedures to verify they include: Maintaining a list of devices, Periodically inspecting devices to look for tampering or substitution, Training personnel to be aware of suspicious behavior and to report tampering or substitution of devices.

Here periodic means at least daily, if not more often. I have clients that examine their points of interaction (POI) at every management shift change, which works out to three or four times a day. With the POI becoming the primary target of attacks, these inspections will only grow in importance under the current threat paradigm.

  • 9.9.2 Periodically inspect device surfaces to detect tampering (for example, addition of card skimmers to devices), or substitution (for example, by checking the serial number or other device characteristics to verify it has not been swapped with a fraudulent device).

Again, periodic means at least daily, if not more often, for the same reasons discussed under requirement 9.9.
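The serial-number check in 9.9.2 boils down to comparing what staff observe on the floor against the device inventory. A sketch of that comparison (the inventory format and lane names are assumptions, not anything prescribed by the standard):

```python
# Hypothetical inventory of deployed POI devices: lane -> expected serial.
inventory = {"lane-1": "SN-1001", "lane-2": "SN-1002"}

def check_devices(inventory, observed):
    """Return lanes where the observed serial does not match the
    inventory, suggesting a swapped or tampered device."""
    return sorted(
        lane for lane, serial in inventory.items()
        if observed.get(lane) != serial
    )

# lane-2 reports an unexpected serial, so it gets flagged.
print(check_devices(inventory, {"lane-1": "SN-1001", "lane-2": "SN-9999"}))  # → ['lane-2']
```

A missing entry in `observed` is flagged too, which covers the case where a device has been removed outright.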

  • 10.6.2 Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.

This requirement will cause more heartburn for organizations once everyone is on the same page as the Council. Naysayers claim it cannot possibly mean what it says. Yet at the 2013 Community Meeting the Council explained that yes, in fact, it does mean all systems and devices that are not in scope for PCI compliance.

However, if your risk assessment does not cover your out-of-scope systems, you will not be able to comply with this requirement. Without that risk assessment, there is no way to justify how often you review log data for those systems.

If anything, this requirement will drive the expansion of a lot of security information and event management (SIEM) solutions, as most were bought and sized only for systems that were in scope for PCI compliance.
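One way to operationalize 10.6.2 is to derive each system's log-review interval directly from its rating in the annual risk assessment. The ratings and intervals below are illustrative assumptions, not values from the standard:

```python
# Hypothetical mapping from risk rating (out of the annual risk
# assessment) to how often that system's logs are reviewed, in days.
REVIEW_INTERVAL_DAYS = {"high": 1, "medium": 7, "low": 30}

def review_interval(risk_rating):
    """Look up the log-review interval, in days, for a risk rating."""
    return REVIEW_INTERVAL_DAYS[risk_rating.lower()]

print(review_interval("medium"))  # → 7
```

Documenting a table like this alongside the risk assessment gives an assessor the justification for the review frequency that the requirement asks for.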

  • 12.10.4 Verify through observation, review of policies, and interviews of responsible personnel that staff with responsibilities for security breach response are periodically trained.

It amazes me how many organizations claim not to have had a single incident in the last year, not even a virus or malware outbreak. Either those incidents were handled entirely by their anti-virus solution (hard to believe) or I am not talking to the people who deal with these issues (more likely). As a result, testing (which can satisfy this training requirement) is being done only annually, just like business continuity plan testing.

Given the ever-increasing number of threats, this sort of training needs to happen more often than annually. Organizations should test their incident response plan at least quarterly, so that people keep their skills sharp, the plan gets exercised, and any gaps or processes that need adjustment are found.

Hopefully we are now all on the same page with these terms.

This was cross-posted from the PCI Guru blog.

Copyright 2010 Respective Author at Infosec Island
The Trouble with the Endpoint Tue, 09 Dec 2014 11:27:47 -0600 Remote Access Endpoint

Much to the dismay of network administrators, IT security today is complex and multi-faceted, from the varied attack vectors to the different types of attackers themselves. But there is always one constant: the endpoint. When those endpoints are attacked, and end users cannot access services, data and applications, it is futile for a business to even host and offer them.

The client, that is, the device rather than the human being using it, has undergone enormous changes over the last decade, putting the burden on IT professionals to evolve their networks accordingly. The PC running Windows 95 was the starting point. Next came myriad Microsoft operating system updates, followed by new form factors like tablets and smartphones, which introduced a whole new dimension.

With each new client, the applications changed as well. Browsers and apps opened up unfamiliar, sometimes encrypted, and sometimes proprietary, data channels, from the Internet right down to the file system. And of course, attackers have kept track of those changes and adapted their methods accordingly over the years.

To cope with these ever-evolving forms of attack, network administrators developed innovative defense mechanisms. Classic anti-virus tools were followed by sandboxes, which tried to detect and block malware by offering it a limited, simulated runtime environment. The most recent approach uses micro-VMs, which attempt to contain malware at the kernel process level.

Additionally, businesses now use a whole arsenal of security measures, ranging from the humble password to two-factor authentication, firewalls and encryption, to name but a few. And nothing is wrong with these measures. After all, an endpoint that uses anti-virus software is better protected than one without it. But the question is: How much better?

The problem is that enterprises often do not realize technology alone will not save them. Their combined technical barriers, no matter how current and well maintained, are far from impregnable, even under perfect conditions. Whatever hindrances network administrators place in the path of attackers, the attackers will eventually find a way around them. In some cases, an entire IT security budget can be wasted on a suite of diverse defense mechanisms.

The only solution is redundancy – a defense-in-depth approach that uses a combination of firewalls, VPNs, intrusion detection systems and common sense policies to govern employee remote access behavior. This type of framework will go a long way in keeping possible attack vectors at bay. It can’t be said often enough, so here it is again: Security is a process, not a product.

End-to-end encryption alone won’t save you. For example, a Trojan could gain access to the local network through an infected smartphone or a USB stick and intercept the password keystrokes right as they happen. In a worst-case scenario, the cryptography might even hinder other security tools from detecting suspicious activities on the network.

No IT-based measure alone can account for human fallibility – they won’t help if one of your employees leaves a work device out in the open, where it could be stolen, or accidentally exposes a password through a phishing scheme. The level of security is always defined through the weakest link, not through the largest budget.

This was cross-posted from the VPN HAUS blog.

Copyright 2010 Respective Author at Infosec Island