Infosec Island Latest Articles
https://www.infosecisland.com

Avoid this Common Privacy Choice Mistake
https://www.infosecisland.com/blogview/23942-Avoid-this-Common-Privacy-Choice-Mistake.html
Tue, 26 Aug 2014 09:46:14 -0500

Many marketing professionals face a common temptation: they want to send as many marketing messages to as many people as possible. They would love to reach everyone who has ever been a customer or client of their business, and often they simply want to send to every email address they can obtain in any way.

Privacy professionals make many efforts to guide marketers on what is acceptable and what is not. After all, the CAN-SPAM Act and many other laws and regulations prohibit marketers from sending marketing messages to people who did not ask for them or did not opt in to receiving them, so sending unwanted marketing messages can get a business in trouble. Also consider the privacy notice posted on your business website; have you read it? Many such notices state that the business will not send marketing messages to anyone who has not opted in to receiving them. Does yours say this or something similar?

Providing Choice is a Key Privacy Principle

There are two basic ways to obtain consent to send individuals marketing communications.

1)    Opt-in requires an active expression of consent from the individual. The user must perform some action to allow a business to use their personal information to send marketing communications. Many laws, such as those in Mexico and the UK, require explicit opt-in for at least certain types of personal information.

2)    Opt-out automatically puts the individual’s personal information into some type of repository, such as a marketing database, and then requires the individual to take an action to have it removed if they do not want such communications. This is often called “implicit consent” or “implied consent,” since the individual took no action to be put in the database. It is not allowed in many countries outside the U.S., but it is the most common practice within the U.S. That said, marketers everywhere should never use opt-out when sensitive personal information, such as financial or health information, is involved.

A basic privacy principle is giving individuals a choice about whether to receive marketing communications, and then documenting their consent. However, individuals often do not even realize their consent has been assumed when an implied-consent method was used. In many cases the FTC has found this to be an unfair and deceptive business practice.
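To make the two models concrete, here is a minimal sketch in Python of how a marketing database might record each kind of consent. The field and function names are illustrative only, not taken from any particular system or regulation.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    consented: bool

def add_contact_opt_in(db: dict, email: str) -> None:
    # Opt-in: the contact starts OUTSIDE the mailing list and must take
    # an explicit, documented action before receiving anything.
    db[email] = Contact(email, consented=False)

def record_explicit_consent(db: dict, email: str) -> None:
    # The explicit action that turns consent on.
    db[email].consented = True

def add_contact_opt_out(db: dict, email: str) -> None:
    # Opt-out ("implied consent"): enrolled automatically; the contact
    # must act to be removed. Not permitted in many countries, and never
    # appropriate for sensitive personal information.
    db[email] = Contact(email, consented=True)

def mailing_list(db: dict) -> list:
    return [c.email for c in db.values() if c.consented]
```

The difference is simply the default value of the consent flag, but that default decides whether the first marketing message is lawful.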

Examples

Let’s look at the wording of some actual examples I found online to help clarify the difference between opt-in and opt-out.  Figures 1 and 2 are examples of opt-in choices.

Figure 1 [image: Optin Ex1]

Figure 2 [image: Optin Ex2]

What makes these opt-in choices? There are no pre-selected options, and the individual has to purposefully and actively check one of the options to opt in. This is often called explicit opt-in.

Figures 3 and 4 are examples of opt-out choices.

 

Figure 3 [image: Optout Ex2]

Figure 4 [image: Optout Ex1]

What makes these opt-out choices?

In Figure 3 the individual was automatically opted in and must take a specific action to opt out.

In Figure 4 the wording is misleading, but it is a common favorite choice method for many marketers. Instead of asking which types of communications are wanted (which would be opt-in), the individual is automatically enrolled in all four communications and must take action to opt out.

Bottom Line…

Don’t justify putting poor privacy practices into place to meet marketing objectives. You may end up with privacy-related fines and penalties, along with lost trust and revenue. Use opt-in whenever possible, and certainly whenever sensitive personal information is involved.

For related information that demonstrates how IBM’s Customer Identity Resolution can be used by organizations to conduct social marketing to existing customers on an opt-in basis, and the details about how they meet the opt-in requirements to support privacy, see http://www-03.ibm.com/press/us/en/pressrelease/43523.wss.

This was cross-posted from the Privacy Professor blog.

Copyright 2010 Respective Author at Infosec Island
Are Connected Cars on a Collision Course with Network Security?
https://www.infosecisland.com/blogview/23941-Are-Connected-Cars-on-a-Collision-Course-with-Network-Security.html
Tue, 26 Aug 2014 09:35:51 -0500

Flipping through any consumer publication that rates vehicles, you’ll see all the metrics you would expect – from safety and performance (acceleration, braking, etc.) to comfort, convenience and fuel economy.

What you won’t find is an assessment of the car’s risk of being remotely hacked. Unfortunately, if you happen to drive a 2014 Jeep Cherokee or 2015 Cadillac Escalade, your vehicle would likely have a one-star review in Consumer Reports for cybersecurity.

These vehicles, along with 22 others with network capabilities, were profiled by researchers Charlie Miller and Chris Valasek during Black Hat 2014 earlier this month. They warned that a malicious attacker could hack into a connected car, doing anything from “enabling a microphone for eavesdropping to turning the steering wheel to disabling the brakes.”

Days later, during the DefCon hacker conference, a group of security researchers calling themselves “I Am The Cavalry” sounded the same alarm, urging the automobile industry to build safer computer systems in vehicles.

The warning comes years after automakers started testing the connected car waters, most notably Ford, as far back as 2010, with its “MyFord Touch” mobile Wi-Fi hotspot. Since then, Google has been in the driver’s seat of the connected car movement. There’s been buzz around Google’s efforts to produce self-driving cars for years, and the smoke signals only grew more prominent after Google moved its head of Android, Andy Rubin, to the robotics division of the company.

While the convenience of connected cars will no doubt increase their popularity, it’s important for manufacturers of all network-ready vehicles to remember the importance of security technology. As we wrote last year about connected cars, attackers don’t care what mobile endpoint they’re hacking – as long as it’s connected to the Internet, it’s a target.

Vehicles: Just One of Many ‘Things’ Hackers Can Target

Although I Am The Cavalry gained recent attention because of its focus on connected vehicles, the hacker coalition has taken a broader approach, by focusing “on issues where computer security intersects public life and human life.”

The group has also advocated for better security over other potential hacker targets, including medical devices, public infrastructure and home electronics. As the growth of the Internet of Things has shown, computer security now intersects public life at nearly every turn!

One proposal put forth by I Am The Cavalry for defending against cyberattacks is the concept of “safety by design” – essentially, that vehicle computer systems are segmented and isolated, so that a problem with one does not impact the performance of another.

Sound familiar? It’s similar to the concept of defense in depth, which uses redundancy to create a comprehensive, multi-tiered security infrastructure. One of the first steps enterprises should take in building this infrastructure, to prevent connected devices from breaching corporate networks, is to implement a centrally managed VPN.

It doesn’t matter whether you’re using a VPN to secure a connected car, an employee’s phone or tablet, a smart sensor or some other Internet of Things device that relies on machine-to-machine (M2M) communication: the connection needs to be secure before the device accesses the Internet or a corporate network and begins transmitting sensitive information.

What’s most important is that our collective ambition to improve technology doesn’t outpace our ability to keep up with the necessary cybersecurity mechanisms. In the case of connected cars, it’s probably best that we all “tap the brakes” and consider the security apparatus that needs to be in place before these next-generation vehicles are on every highway in the country.

This was cross-posted from the VPN HAUS blog.

P2PE Versus E2EE
https://www.infosecisland.com/blogview/23939-P2PE-Versus-E2EE.html
Mon, 25 Aug 2014 11:44:19 -0500

I have been encountering a lot of organizations that are confused about the difference between the PCI SSC’s point-to-point encryption (P2PE) certified solutions and end-to-end encryption (E2EE). This is understandable, as even those in the PCI community are confused as well.

E2EE is the generic term the IT industry uses for any solution that encrypts communications from one endpoint to another. Key management can be done by any party that controls an endpoint, such as a merchant or a service provider. Examples of E2EE include IPsec, SSL and TLS.
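For developers, the endpoint-managed nature of E2EE is visible in everyday tooling. As a hedged illustration (the host name and port are placeholders, and this is a generic TLS example, not a payment-specific one), opening a TLS connection in Python puts trust and key negotiation entirely at the two endpoints:

```python
import socket
import ssl

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    # One endpoint of an E2EE channel: the client chooses its trust
    # anchors, and the TLS handshake negotiates session keys, so no
    # intermediary on the network path can read the traffic.
    context = ssl.create_default_context()  # cert validation + hostname check
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

The default context refuses unvalidated certificates, which is what makes the two endpoints, and only the two endpoints, parties to the encryption.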

One of the most common E2EE solutions used by merchants is derived unique key per transaction (DUKPT), also known as “duck putt.” DUKPT is commonly used in the convenience store and gas station industries to encrypt sensitive authentication data (SAD) from the gas pump to the merchant or processor. DUKPT uses the 56-bit data encryption standard (DES) or triple DES (3DES) algorithms. While 56-bit DES and 112-bit 3DES are no longer considered secure on their own, DUKPT uses a unique key for every transaction, so every transaction has to be broken individually to gain access to the data. Cloud computing could be leveraged to do this rapidly, but the effort would cost more than the data retrieved is worth. As a result, DUKPT is still considered a secure method of encryption.
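To illustrate the property that matters here – one key per transaction – the following Python sketch derives a fresh key for each transaction counter. This is deliberately not the real ANSI X9.24 DUKPT algorithm (which derives keys from an injected initial key and a key serial number using DES/3DES inside a tamper-resistant device); it only demonstrates why breaking one transaction’s key reveals nothing about the others.

```python
import hashlib

def derive_transaction_key(base_key: bytes, counter: int) -> bytes:
    # Toy derivation: hash the base key with the transaction counter so
    # every transaction gets its own key. Real DUKPT works differently,
    # but shares this one-key-per-transaction property.
    return hashlib.sha256(base_key + counter.to_bytes(4, "big")).digest()

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Placeholder cipher to keep the sketch self-contained; a real
    # terminal would use 3DES (or AES), never XOR.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Because each counter value yields an unrelated key, recovering the key for one transaction exposes only that transaction, which is why DUKPT remains viable even though single-key DES does not.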

P2PE is a subset of E2EE. The major difference is that P2PE does not allow the merchant to manage the encryption keys: under the P2PE standard, only the transaction processor or another third party may perform key management, never the merchant. As a result, DUKPT can be used by both P2PE and E2EE solutions; the distinction is simply that under P2PE the key management must be done by a third party.

While third-party key management is typically acceptable for small merchants, it does not work for merchants that switch their own transactions to various processors, as mid-sized and large merchants do. That does not mean E2EE solutions cannot reduce PCI scope. As with PA-DSS certified applications, P2PE certified solutions can be accepted by a QSA as long as they are implemented according to the P2PE implementation guide, which can reduce the amount of testing the QSA must perform. In my experience, though, the difference in testing effort between P2PE and E2EE is typically negligible, so any so-called savings are limited at best.

The huge downside to P2PE for merchants is that once you decide on a given P2PE solution, you are pretty much stuck with it and the processor providing it.  That is because most processors offering P2PE are only offering one P2PE solution.  As a result, if a better deal comes along for processing your transactions, you will likely have to replace your terminals and possibly other equipment to switch to the new processor.  For some merchants, that could be a costly proposition and make any switch not worth the effort.

So if your organization is looking at P2PE versus E2EE, I would not necessarily give an advantage to P2PE over E2EE.  Just because an E2EE solution is not P2PE certified does not mean it is not secure.  It only means that the vendor did not believe that the P2PE certification was worth the effort.

This was cross-posted from the PCI Guru blog.

How to Help
https://www.infosecisland.com/blogview/23938-How-to-Help.html
Mon, 25 Aug 2014 11:17:51 -0500

There are a few movements afoot to help improve security, and the intentions are good. However, to my mind some are just more organized versions of what we already have too much of: pointing out what's wrong, instead of rolling up your sleeves and fixing it.

Here are examples of Pointing Out What's Wrong:

 

  • Scanning for vulnerabilities.
  • Creating exploits.
  • Building tools to find vulnerabilities.
  • Telling everyone how bad security is.
  • Creating detailed descriptions of how to address vulnerabilities (for someone else to do).
  • Creating petitions to ask someone else to fix security.
  • "Notifying" vendors to fix their security.
  • Proving how easy it is to break into something.
  • Issuing reports on the latest attack campaigns.
  • Issuing reports on all the breaches that happened last year.
  • Issuing reports on the malware you found.
  • Issuing reports on how many flaws there are in software you scanned.
  • Giving out a free tool that requires time and expertise that most orgs don't have.
  • Performing "incident response," telling the victim exactly who hacked them and how, and then leaving them with a long "to-do" list.


None of this is actually fixing anything. It's simply pointing out to someone else, who bears the brunt of the responsibility, "Hey, there's something bad there, you really should do something about it. Good luck. Oh yeah, here, I got you a shovel."

Now, if you would like to take actual steps to help make things more secure, here are some examples of what you could do:

 

  • Adopt an organization near you. Put in hours of time to make the fixes for them, on their actual systems, that they don't know how to do. Offer to read all their logs for them, on a daily basis, because they don't have anyone who has the time or expertise for that.
  • Fix or rewrite vulnerable software. Offer secure, validated components to replace insecure ones.
  • Help an organization migrate off their vulnerable OSes and software. 
  • Do an inventory of an organization's accounts -- user, system, and privileged accounts -- and lead the project to retire all unneeded accounts. Deal with the crabby sysadmins who don't want to give up their rlogin scripts. Field the calls from unhappy users who don't like the new strong password guidelines. Install and do the training and support on two-factor authentication.
  • Invent a secure operating system. Better yet, go work for the maker of an existing OS and help make it more secure out of the box.
  • Raise money for budget-less security teams to get that firewall you keep telling them they need. Find and hire a good analyst to run it and monitor it for them.
  • Help your local school district move its websites off of WordPress.
  • Host and run backups for organizations that don't have any.


And if you're just about to say, "But that takes time and effort, and it's not my problem," then at least stop pretending that you really want to help. Because actually fixing security is hard, tedious, thankless work, and it doesn't get you a speaker slot at a conference, because you probably won't be allowed to talk about it. Yes, I know you don't have time to help those organizations secure themselves. Neither do they. Naming, shaming and blaming are the easy parts of security -- and they're more about self-indulgence than altruism. Go do something that really fixes something.

This was cross-posted from the Idoneous Security blog.

Preparing for a Successful IAM Integration Project (Part 2 of 2)
https://www.infosecisland.com/blogview/23936-Preparing-for-a-Successful-IAM-Integration-Project-Part-2-of-2.html
Thu, 21 Aug 2014 13:49:01 -0500

By: Theo Caramihai

Iteration-Based Delivery

As mentioned before, an identity management implementation project will often extend 18-36 months, depending on the size and complexity of the organization. That is an extraordinary amount of time for any project sponsor to maintain passion around the project. The answer is to deliver in iterations of value statements, releasing the highest-value items first while keeping the organization updated on when the remaining items are coming.

Being human, we love instant gratification. In business, many people demand that level of velocity simply because they don’t have time for less. Creating smart, valuable and achievable iterations of your identity management implementation lets you get in front of your stakeholders and users more often, either delivering the value they asked for or explaining when they will see it while they watch value others are already enjoying.

Don’t forget to use an open and methodical process to create your iterations, so no business unit feels left out. Often, the emotion around a particularly bothersome manual task overpowers the logical case for a feature that saves the business more money. You can head this off with a simple stack-ranking matrix (see Image 1) that allows all business unit stakeholders and project members to vote on which value statements are most important. Using defined analytics removes the emotion and maintains team synergy.
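A stack-ranking matrix like the one in Image 1 boils down to collecting scores per stakeholder and sorting by total. Here is a minimal sketch, with made-up stakeholders and requirements, using a plain sum in place of whatever weighting the template applies:

```python
def stack_rank(votes: dict) -> list:
    # `votes` maps stakeholder -> {requirement: score}. Sum each
    # requirement's scores across all stakeholders, then rank highest
    # first, so no single business unit's emotion drives the order.
    totals = {}
    for scores in votes.values():
        for req, score in scores.items():
            totals[req] = totals.get(req, 0) + score
    return sorted(totals, key=totals.get, reverse=True)
```

Publishing both the votes and the resulting order is what keeps the prioritization feeling open rather than imposed.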

Evangelization

If possible, I would have listed this under every single other heading; evangelization matters before, during and after your IAM project. It is what brings the project home to the users. Before and after each iteration and stage, your evangelization engine should be revving in 5th gear.

As you are ready to start an iteration, revisit with your stakeholders to make sure nothing was missed before things begin and that they remember their expectations. Reviewing your current backlog of remaining items at this point is a great practice. Follow up after the iteration release to see if it met their expectations, and ask what could have been done better; praise will come freely, but criticism must often be solicited. You need the critical opinions to correct your trajectory for future iterations and to keep your project team engaged for the length of the implementation.

Simply put, keeping your organization informed and forewarned is the lynchpin that holds together the rest of your project. Identity cannot be brought in “under the radar” like a new project planning system or CRM application. By its very nature, identity is at the forefront, presented to every employee, contractor and vendor in your organization.

Resistance can be passive, such as not accomplishing pre-planning tasks around workflows and access requests. Even worse, it can be overt, with a powerful team publicly announcing that it won’t conform to the new system. Either can snowball with enough impact to stall the staunchest sponsor’s goals.

Yes, everyone knows that communication wins the game.  The question remains: are you prepared for that level of communication before you start?

Until the next time,

Hammer

 

Image 1 - Requirements/Use Case Ranking Matrix [image: requirements-ranking.png]

You can download this template here: http://preview.tinyurl.com/lf4kmlr

This was cross-posted from the Identropy blog.

NERC CIPS and Keeping Lights On – Are They the Same?
https://www.infosecisland.com/blogview/23935-NERC-CIPS-and-Keeping-Lights-On--Are-They-the-Same.html
Thu, 21 Aug 2014 13:38:35 -0500

On August 19th, I spent a day with the NERC Critical Infrastructure Protection (CIP) Version 5 drafting team working on one of the NERC CIP standards. The focus was on boundary protection, not on the actual control system devices and serial communications, which were explicitly excluded.

The vulnerabilities that could lead to major equipment damage and associated extended outages – design features in control system devices (the kind Stuxnet exploited), system vulnerabilities such as Aurora, or measurement vulnerabilities such as serial HART communications – were not addressed. Rather, the focus was on the traditional network issues – firewalls, routers, etc.

Given the recent spate of IT hacks that have managed to make it through existing boundary protection, isn’t this thinking a bit antiquated? About the only discussion of actual control systems or facility operation came from the FERC representative, not the utility attendees. The utilities’ and NERC’s concerns were how to minimize the number of activities needed to address the “Lows” (smaller facilities). There just doesn’t seem to be an appreciation of what a determined, knowledgeable attacker would actually attack, of how common the equipment and its associated cyber vulnerabilities are across multiple facilities, or, consequently, of just how many “Lows” could be compromised in ways that impact large portions of the bulk electric grid for a substantial period of time.

(Warning- major sarcasm) In order for the NERC CIP approach to be successful, NERC needs to hold a training session for the hackers on what the NERC ground rules are for their attacks – what is in scope for attacks and when. The hacker training should assure them that the utilities’ and NERC’s paper approach on Aurora is adequate and so they should not attempt to use that scenario. It should also convince them not to use available ICS metasploits because they are out of scope for NERC CIP mitigation.

Is there a question as to whether the lights will stay on?

This was cross-posted from the Unfettered blog.

Save $300 on ICS Cyber Security Conference Registration
https://www.infosecisland.com/blogview/23934-Save-300-on-ICS-Cyber-Security-Conference-Registration.html
Thu, 21 Aug 2014 10:35:59 -0500

2014 ICS Cyber Security Conference Discount Code

Atlanta Oct. 20-23, 2014 - Georgia Tech Hotel and Conference Center

Following a sold out event in 2013, the 2014 ICS Cyber Security Conference is expected to attract more than 250 professionals from around the world and again sell out.

Attendees who register by Friday, August 22 will save $300 and pay just $1,695 for a full conference registration, which includes all four days and the pre-conference workshops.

Since 2002, the ICS Cyber Security Conference has gathered ICS cyber security stakeholders across various industries and attracts operations and control engineers, IT, government, vendors and academics.

As the longest-running cyber security-focused conference for the industrial control systems sector, the event will cater to the energy, utility, chemical, transportation, manufacturing, and other industrial and critical infrastructure organizations.

The conference will address the myriad cyber threats facing operators of ICS around the world, and will address topics covering ICSs, including protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

The 14th ICS Cyber Security Conference will have 5 major themes:

• Actual ICS cyber incidents

• ICS cyber security standards

• ICS cyber security solutions

• ICS cyber security demonstrations

• ICS policy issues

The majority of conference attendees are control systems users, working as control engineers, in operations management or in IT. Industries represented include defense, power generation, transmission and distribution, water utilities, chemicals, oil and gas, pipelines, data centers, medical devices, and more. Other attendees work for control systems vendors, security products and services companies, associations, universities and various branches of the US and foreign governments.

http://www.icscybersecurityconference.com/#!register/c8g7

Representatives from these organizations and many more have attended the ICS Cyber Security conference in the past:

[image: ICS Conference Attendees]

Confirm Your Spot and Register Today!

About the Conference

The ICS Cyber Security Conference is the conference where ICS users, ICS vendors, system security providers and government representatives meet to discuss the latest cyber-incidents, analyze their causes and cooperate on solutions. Since its first edition in 2002, the conference has attracted a continually rising interest as both the stakes of critical infrastructure protection and the distinctiveness of securing ICSs become increasingly apparent.

Preparing for a Successful IAM Integration Project (Part 1 of 2)
https://www.infosecisland.com/blogview/23932-Preparing-for-a-Successful-IAM-Integration-Project-Part-1-of-2.html
Wed, 20 Aug 2014 16:23:22 -0500

By: Theo Caramihai

If you have ever hired a Professional Services team to do an integration project, you know that it takes planning and tenacity to pull it through to the end. Depending on the breadth of the integration, the difficulty of accomplishing this varies. So, what makes an Identity Management integration project so special that I would take time to write about it and think you may deign to read it?

The answer is simple: the level of involvement your business units will be required to give, and the complete, person-level understanding needed of how their business processes operate in reality. In my experience implementing ERPs, global environments, SaaS products and IAM projects, all of these require business unit involvement, but none require as much as an identity or access management and governance program does to be successful.

With other projects, business unit (BU) involvement is limited to perhaps one or two stakeholders from key areas who can help gather requirements, define goals and steer the project.  After the initial kickoff, these team members spend little of their time involved with either the consultants or the project proper until user-acceptance testing and production release.

Not so when undertaking an identity or access governance project. Identity and access touches not just core stakeholders and users, but the very processes, workflows and ways people interact – with each other and with the corporate technology stack – to complete their daily jobs. This difference cannot be overstated or treated in a cursory fashion.

Many times, project sponsors underestimate the complexity of processes outside their vertical. Other times, internal fiefdoms and politics can delay or even stop progress, very often for menial and solvable reasons. Most often, the business unit stakeholders do not have a clear understanding of what is involved in moving to a formalized identity and access management program. Finally, identity projects tend to be longer in duration, so keeping organizational momentum is a Herculean task.

So, if this is true, how can any identity project be considered more than a mid-grade success? As with everything, it starts with great planning; however, the planning is subtly different. Sure, it shares the same buzzwords that can be used anywhere, yet it is the substance behind those buzzwords that makes or breaks a successful IAM implementation.

 

Here are some of the key things I’ve learned that you should consider before launching your identity or access governance project:

Socialization

The simple fact of any new initiative is that if employees cannot see the value to their own day, they have a hard time buying into the vision. Meeting not just with executive stakeholders from your partnering business units but also with their key contributors will give a project sponsor understanding of, and empathy for, the unit’s perspectives and pains, not to mention motivations. Without this intimate knowledge of the unit’s goals and concerns, the project sponsor is sure to meet resistance during iterations when those needs aren’t met.

Conceptualization

Once you have done your internal due diligence through socialization, the next step is conceptualization and making it real to your stakeholder and project teams.  This is where the real work begins.  Our CTO and co-founder, Ash Motiwala, has a great series of articles on defining your identity management roadmap and steps to help prepare when scoping your IAM project.

As you go through your conceptualization tasks, keep in mind the feedback you gathered through socialization and how it may be accommodated by your selected identity or access governance system. Sometimes you will find that meeting both the needs of the project and the business unit requires a compromise: a change to the existing business process, or product customizations at additional cost.

It is also here that you will learn of your risks and pitfalls. In examining your HR onboarding and birthright provisioning, you may uncover tens of thousands of stagnant user and vendor accounts that would be imported into the new IAM system. If plans for handling situations like that aren’t created before the project begins, they can imperil delivery dates later.
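One way to surface that kind of pitfall early is a simple stagnancy report against the source directories before migration. Here is a hedged sketch; the 180-day window and the account fields are arbitrary illustrations, not a recommendation for any particular directory:

```python
from datetime import datetime, timedelta

def stagnant_accounts(accounts: list, as_of: datetime,
                      max_idle_days: int = 180) -> list:
    # `accounts` is a list of (name, last_login) tuples; last_login may
    # be None for accounts that have never signed in. Anything idle
    # longer than the window gets flagged for review before import.
    cutoff = as_of - timedelta(days=max_idle_days)
    return [name for name, last_login in accounts
            if last_login is None or last_login < cutoff]
```

Running a report like this during conceptualization turns a late-project surprise into a planned cleanup task.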

Finally, one of the most overlooked items during conceptualization is what will be needed operationally to complete your project. Do you have development and pre-production environments available, with real-life test data in them? If not, plan for extended production testing, bugs and poor user reception, since users will see errors when they begin using the system. Are your team calendars synchronized and leveled against your goal delivery dates? Without leveling, there are unaccounted-for ghost hours that will extend the project dates. Are your data centers ready? Are your firewalls and federations in a state to communicate? All these details should be considered before your kickoff meeting.

This was cross-posted from the Identropy blog.

Vulnerability Management: Just Turn It Off! PART III
https://www.infosecisland.com/blogview/23931-Vulnerability-Management-Just-Turn-It-Off-PART-III.html
Wed, 20 Aug 2014 11:28:21 -0500

By: Cindy Valladares

Our previous posts in the ‘Just Turn It Off!’ series (Part I and Part II) explained many commonly overlooked features that can unintentionally weaken your network’s security.

We discussed the risks of unsecured VNC, rlogin, HTTP TRACE and various other features that, fortunately, have a fairly simple fix.

In our third and final post of this series, Tripwire’s Vulnerability and Exposure Research Team (VERT) highlights four more unnecessary risks that often appear in even the most secure networks.

Our researchers give step-by-step instructions on how to immediately address these considerable risks that could be hurting the security of your environment.

 

1. LEGACY REMOTE DESKTOP PROTOCOL

By Craig Young, Security Researcher

Microsoft’s Remote Desktop Protocol (RDP) is an invaluable tool in both corporate and home networks. It’s lightweight enough to work with limited bandwidth, while at the same time robust enough to provide bells and whistles like remote file and device sharing.

In the corporate world, it shows its worth by providing a graphical interface to Microsoft servers that may be inaccessible or ‘headless.’ On the home network, especially with Windows Home Server, it provides a low-cost, easy to use and relatively secure means of remote access. The relative level of security is what we’ll explore here.

Over the last year, three critical vulnerabilities have been announced surrounding the remote desktop protocol, including one that is wormable. Two of the three (including the wormable CVE-2012-0002) can only be exploited without authentication when the remote desktop service is running in the less secure legacy mode.

With network level authentication (NLA) enabled, successful exploitation first requires successful authentication, thereby neutralizing the threat of automated attack tools. This option was released in 2006 to enhance security by requiring authentication credentials before the server initializes a full remote desktop connection.

Enabling NLA greatly reduces attack surface by limiting the functionality exposed to unauthenticated users. In other words, NLA can mitigate vulnerabilities before patches and IPS signatures exist.

Here are a few key points that can go a long way toward getting the most out of RDP while maintaining good security practices:

  • Enabling inbound RDP connections on Windows XP or Windows 2003 should be avoided
  • Remote Desktop with Network Level Authentication is strongly encouraged on supported platforms
  • RDP access should always be limited to trusted local hosts and authenticated VPN users

Tripwire IP360 and Tripwire PureCloud customers can detect the absence of NLA on a system by watching for the “Legacy Mode Remote Desktop Protocol” vulnerability in reports.
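The negotiation that NLA detection relies on can be sketched in Python. The packet layout follows Microsoft's MS-RDPBCGR specification, but treat this as an illustrative probe-builder under stated assumptions, not Tripwire's actual check:

```python
import struct

# Hedged sketch (not from the article): build the 19-byte RDP protocol
# negotiation request from MS-RDPBCGR that a scanner can send to learn
# whether a server supports CredSSP/NLA (PROTOCOL_HYBRID).

PROTOCOL_SSL, PROTOCOL_HYBRID = 0x00000001, 0x00000002

def build_rdp_neg_req(requested=PROTOCOL_SSL | PROTOCOL_HYBRID):
    # RDP_NEG_REQ: type=0x01, flags=0x00, length=8, requestedProtocols (LE)
    neg_req = struct.pack('<BBHI', 0x01, 0x00, 8, requested)
    # X.224 Connection Request TPDU carrying the negotiation request
    x224 = struct.pack('>BBHHB', len(neg_req) + 6, 0xE0, 0, 0, 0) + neg_req
    # TPKT header: version 3, reserved, total packet length (BE)
    return struct.pack('>BBH', 3, 0, len(x224) + 4) + x224

probe = build_rdp_neg_req()
print(len(probe), probe[:2].hex())  # → 19 0300
```

A server that answers with an RDP_NEG_FAILURE, or that only offers PROTOCOL_RDP, is running in the legacy mode described above.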

There really is no good reason to leave legacy mode RDP enabled, so for crying out loud, JUST TURN IT OFF!

 

2. SNMP DEFAULT COMMUNITY STRINGS

By Darlene Hibbs, Security Researcher

When not properly secured, Simple Network Management Protocol (SNMP) can make it simple for attackers to obtain useful information about devices on your network and possibly even reconfigure them.

SNMP can run on anything from routers and switches to servers and printers and is often enabled by default with community strings: ‘public’ for read access and ‘private’ for write/management access.

The community string acts as the password for SNMP communication, so it is important to set complex and hard-to-guess community strings.

Tripwire IP360 and Tripwire PureCloud customers can look for various ‘Weak SNMP Community String ‘COMMUNITYSTRINGNAME‘ Found’ vulnerabilities, which will identify many common and easily guessed community strings beyond just ‘public’ and ‘private’.

Confirming SNMP public community string:

The command line tool snmpget provides a quick and easy way to check for community strings on devices running SNMP:

snmpget -v1 -c public host sysDescr.0

For example:

[root@dhcp-218-195 ~]# snmpget -v1 -c public 192.168.218.70 sysDescr.0
SNMPv2-MIB::sysDescr.0 = STRING: Cisco Internetwork Operating System Software IOS (tm) C2950 Software (C2950-I6Q4L2-M), Version 12.1(20)EA1a, RELEASE SOFTWARE (fc1) Copyright (c) 1986-2004 by cisco Systems, Inc. Compiled Mon 19-Apr-04 20:58 by yenanh

If the community string doesn’t exist, you will instead get:

[root@dhcp-218-195 ~]# snmpget -v 1 -c private 192.168.218.195 sysDescr.0
Timeout: No Response from 192.168.218.195.

Changing Community Strings:

As SNMP can run on a variety of systems, it may be necessary to consult the product documentation to configure or disable SNMP.

If SNMP is not necessary, it should be disabled whenever possible. If it is necessary, SNMP v3 should be used whenever possible.

On Windows:

1) In the Start Menu, click Run (or use the search box) and type: services.msc
2) Locate the SNMP Service
3) Right click on SNMP Service and click Properties
4) Go to the Security tab and check the list of Accepted community names
5) Remove any public, private or other easily guessed community strings and replace with complex community strings

On Linux:

1) Open snmpd.conf in a text editor (usually /etc/snmp/snmpd.conf)
2) Look for lines such as:

com2sec notConfigUser  default       public

3) Comment out any lines containing public, private or other easily guessed community strings or replace with complex community strings
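As an illustrative aid (not part of the original instructions), a small Python sketch can audit an snmpd.conf for the weak com2sec entries described above. The sample config and the list of weak strings are assumptions:

```python
import re

# Hedged sketch: flag com2sec lines in an snmpd.conf that use easily
# guessed community strings. The weak-string list and sample config
# below are illustrative assumptions, not from the article.
WEAK = {'public', 'private', 'cisco', 'admin', 'snmp'}

def weak_communities(conf_text):
    hits = []
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith('#'):
            continue  # already commented out
        m = re.match(r'com2sec\s+\S+\s+\S+\s+(\S+)', line)
        if m and m.group(1).lower() in WEAK:
            hits.append(m.group(1))
    return hits

sample = """\
com2sec notConfigUser  default       public
com2sec mgmtUser       10.0.0.0/8    S3cr3t-c0mmun1ty
"""
print(weak_communities(sample))  # → ['public']
```

Any string it reports should be replaced with a complex community string, or the line commented out, exactly as in step 3 above.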

SNMP default community strings? JUST TURN IT OFF!

 

3. OPEN/GUEST ACCESS TO SMB SHARES

By Matthew Condren, Security Researcher

SMB shares are often enabled with little to no security protecting them, allowing them to be accessed by unprivileged users. Tripwire IP360 and Tripwire PureCloud customers can ascertain the status of their shares by looking for the following vulnerabilities in their scan report:

  • An SMB share permits anonymous read access
  • An SMB share permits anonymous write access
  • An SMB share permits anonymous full control access
  • The Guest account has permission to read from an SMB share
  • The Guest account has permission to write data to an SMB share

Confirming The Status Of Your Shares

The permissions on your shares can be determined using smbclient. For example, let’s assume that you have a share named ‘Share’, and you attempt to connect to it using smbclient:

# smbclient \\\\192.168.xxx.xxx\\Share -U Guest%
Domain=[2K3SRVRD86] OS=[Windows Server 2003 Service Pack 2] Server=[Windows Server 2003 5.2]
smb: \>

As demonstrated above, we gained access to the share using username ‘Guest’ with a blank password. You can now execute commands in order to test the level of access you have to the share.
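Checking many shares this way can be scripted. As a hedged sketch, the following Python parses the share table from `smbclient -L` output so each share can then be tried with a blank Guest password; the sample listing is a mock-up, not output from the host above:

```python
# Hedged sketch: extract share names from the table that `smbclient -L`
# prints, so each can be probed for anonymous/Guest access. The sample
# listing below is illustrative, not captured from a real server.
def parse_share_list(output):
    shares, in_table = [], False
    for line in output.splitlines():
        cols = line.split()
        if cols[:2] == ['Sharename', 'Type']:
            in_table = True          # table header found
            continue
        if not in_table:
            continue
        if not cols:                 # blank line ends the table
            break
        if cols[0].startswith('---'):
            continue                 # separator row
        shares.append(cols[0])
    return shares

sample = (
    "\tSharename       Type      Comment\n"
    "\t---------       ----      -------\n"
    "\tShare           Disk\n"
    "\tIPC$            IPC       Remote IPC\n"
    "\n"
    "\tServer               Comment\n"
)
print(parse_share_list(sample))  # → ['Share', 'IPC$']
```

Each name returned can then be fed back into an smbclient connection attempt like the one shown above.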

Securing Your Shares

You can modify the permissions on an SMB share to restrict access to certain users, as well as restrict the level of access each has, such as read, write or full control.

Windows 2003 and earlier

1) Right-click on your share and choose ‘properties’ from the context menu
2) Choose the ‘sharing’ tab from the resulting dialog
3) Click the ‘Permissions’ button
4) Use the ‘Add’ and ‘Remove’ buttons to specify the privileged users
5) Choose the appropriate Allow/Deny checkboxes for each user or user group

Windows 7/2008/Vista

1) Right-click on your share and choose ‘properties’ from the context menu
2) Choose the ‘Sharing’ tab
3) Click on the ‘Share’ button
4) Add the desired users to the list by choosing them from the drop-down and clicking ‘Add’
5) Use the permission level drop-down beside each user to set permission level

Open/guest access to SMB shares? JUST TURN IT OFF!

 

4. TELNET

By Tyler Reguly, Technical Manager, Security Research & Development

You are probably wondering why we are discussing telnet in 2014. Short answer: it’s still out there. Slightly longer answer: people still run older operating systems that shipped with telnet enabled by default.

Tripwire IP360 and Tripwire PureCloud customers can detect the presence of telnet on a system by watching for the “Telnet Available” vulnerability in reports.

Confirming Telnet

Telnet can be easily confirmed using the telnet command on most major operating systems; simply run ‘telnet <host>’ and see if you connect. Using ncat in this situation will lead to unexpected data:

neogeo:Downloads treguly$ ncat aix53 23
??%??

Ncat has the -t option, which will allow it to negotiate telnet options (represented as ??%?? above). The output at this point will appear closer to that of telnet, but with the ? and % characters still visible.
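Those stray characters are telnet IAC option-negotiation bytes. A minimal Python sketch can decode them; the byte values follow RFC 854/855, while the sample input is illustrative rather than captured from the AIX host above:

```python
# Illustrative sketch, not from the article: decode the telnet IAC
# option-negotiation bytes (RFC 854/855) that render as '??%??' when a
# telnet server is probed with plain ncat. The sample bytes are made up.
IAC = 0xFF
VERBS = {0xFB: 'WILL', 0xFC: 'WONT', 0xFD: 'DO', 0xFE: 'DONT'}
OPTIONS = {0x01: 'ECHO', 0x03: 'SUPPRESS GO AHEAD', 0x18: 'TERMINAL TYPE'}

def decode_negotiation(data):
    out, i = [], 0
    while i + 3 <= len(data):
        if data[i] == IAC and data[i + 1] in VERBS:
            verb, opt = VERBS[data[i + 1]], data[i + 2]
            out.append('IAC %s %s' % (verb, OPTIONS.get(opt, hex(opt))))
            i += 3
        else:
            i += 1
    return out

sample = bytes([0xFF, 0xFD, 0x18, 0xFF, 0xFB, 0x01])
print(decode_negotiation(sample))  # → ['IAC DO TERMINAL TYPE', 'IAC WILL ECHO']
```

Seeing a stream of IAC DO/WILL triples on port 23 is a strong sign a real telnet daemon is listening.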

You can confirm telnet locally on RHEL5 using ‘chkconfig --list’ and looking for the line that reads ‘telnet: on’.

Disabling Telnet

First, let’s add the caveat that you should only disable telnet if you have another way of accessing the system. The goal here is not to render your systems inaccessible.

RHEL5 Telnet

1) Browse to /etc/xinetd.d and locate the telnet file
2) Update the line ‘disable = no’ to read ‘disable = yes’
3) Restart xinetd (service xinetd restart)

AIX 6 Telnet

1) Browse to /etc and locate the inetd.conf file
2) Update the line that starts with ‘telnet stream’ to read ‘#telnet stream’
3) Restart inetd (refresh -s inetd)

Running telnet? JUST TURN IT OFF!

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Hackers Exploited Heartbleed Bug to Steal Patient Data from Community Health Systems https://www.infosecisland.com/blogview/23930-Hackers-Exploited-Heartbleed-Bug-to-Steal-Patient-Data-from-Community-Health-Systems.html https://www.infosecisland.com/blogview/23930-Hackers-Exploited-Heartbleed-Bug-to-Steal-Patient-Data-from-Community-Health-Systems.html Tue, 19 Aug 2014 22:40:00 -0500 (SecurityWeek) - Earlier this week, Community Health Systems, one of the largest hospital operators in the United States, announced that hackers managed to steal the records of 4.5 million patients.

FireEye-owned Mandiant, known for investigating high-profile breaches, was hired to investigate the incident and believes the attack was the work of a Chinese advanced persistent threat (APT) group.

While no technical details of the attack had previously been disclosed, information security firm TrustedSec, citing sources familiar with the incident, said on Tuesday that the initial attack vector was through the infamous “Heartbleed” vulnerability in OpenSSL, which provided the attackers a way in, eventually resulting in the compromise of patient data.

Analysis by SecurityWeek shows the claims made by TrustedSec match up well to previously shared details from an attack that SecurityWeek reported on earlier this year, which leveraged the Heartbleed bug to bypass two-factor authentication and hijack user sessions.

Read the Full Story at SecurityWeek

 

 

Copyright 2010 Respective Author at Infosec Island]]>
‘BadUSB’ Malware Leaves Terrible Taste at Black Hat 2014 https://www.infosecisland.com/blogview/23929-BadUSB-Malware-Leaves-Terrible-Taste-at-Black-Hat-2014.html https://www.infosecisland.com/blogview/23929-BadUSB-Malware-Leaves-Terrible-Taste-at-Black-Hat-2014.html Tue, 19 Aug 2014 16:35:49 -0500 If awards were given out at Black Hat 2014, one nominee for “Exploit of the Conference” would have won in a runaway – the “BadUSB” exploit.

Researchers Karsten Nohl and Jakob Lell caused quite a stir in Las Vegas earlier this month, which quickly spread to the rest of the world of cybersecurity, when they showed how USB drives could be reprogrammed and transformed into portable malware carriers.

Nohl and Lell explained that since USB drives are designed to be reprogrammable, a hacker could make a drive masquerade as another device. In one example an attacker could reprogram a USB device to assume the function of a keyboard, and then issue commands to the computer or install malware.

Possibly the worst part of the vulnerability is that a user has no visibility into the software running on a USB drive, so there’s no way to find out if their drive has been affected. In the wrong hands, a BadUSB drive really is “scarily insecure,” as Nohl put it.

USB Drives are Repeat Cybersecurity Offenders

Long before Black Hat 2014, it’s been widely known that USB drives are not the most secure way to transfer data between devices. Convenient, yes. Secure, no.

Not only are USB drives easy to lose, but any device with a USB interface could potentially be affected by malware originating from a USB drive, including laptops and phones. As far back as July 2011, the Ponemon Institute found that 70 percent of businesses could trace data breaches back to USB drives.

Even the NSA found USB drives to be useful for espionage purposes. In December 2013, it was revealed that the agency had used a series of USB implants known as “COTTONMOUTH” to target adversarial networks. If the NSA is exploiting a vulnerability, then it’s probably an effective means of attack.

A World Without USB Drives?

Even if businesses understand the risk of using USB drives, they’re usually limited to making an all-or-nothing choice. In fact, in the Ponemon survey, more than one-third of enterprises said they used software to block all usage of USB drives by employees. Other complementary solutions like antivirus software also won’t fend off exploits like BadUSB, because the software that runs on USB drives isn’t visible to computers. It’s clear that USB drives are a threat, so surely a smarter approach would be to remove the need for employees to use them altogether.

If businesses want to allow their employees to work remotely, it’s better they require them to access and transfer files using a device that is connected securely to the corporate network via a VPN, instead of allowing them to use a USB drive to move data from one device to another. As soon as a USB drive is ejected from a corporate device, the information it contains is no longer protected by the umbrella of security offered by the corporate network, and enterprises no longer have control over who has access to the data or how the data is utilized.

If an enterprise utilizes a centrally managed VPN, employees can download a VPN client that will work on any device or operating system, which they can use to access files anywhere, at any time. An enterprise will also maintain access control, limiting the information users can access according to their roles and attributes. Additionally, if a user’s computer were to be affected by malware, the network administrator could deprovision the user as soon as the breach was detected, thereby preventing the malware from spreading throughout the network.

Now that Nohl and Lell have sounded the alarm about BadUSB, the hope is that enterprises will stop using USB drives and instead turn toward comprehensive network security and a defense-in-depth strategy, including utilizing a VPN with central management. Hopefully, by Black Hat 2015, BadUSB will be just a distant memory.

This was cross-posted from the VPN Haus blog.

Copyright 2010 Respective Author at Infosec Island]]>
More Dot-Gov Sites Found Compromised https://www.infosecisland.com/blogview/23928-More-Dot-Gov-Sites-Found-Compromised.html https://www.infosecisland.com/blogview/23928-More-Dot-Gov-Sites-Found-Compromised.html Tue, 19 Aug 2014 12:31:59 -0500 By: Jovi Umawing 

It has been a while since we published “A .Gov Media Player? Not Exactly…”, a blog post about arcadia-fl[dot]gov at the time it was compromised and serving a binary file, and “Philippine Government Site Infected with Spam Code”, about da[dot]gov[dot]ph pages found to contain hidden blackhat SEO spam links. Recently, we’ve noticed a number of .gov URLs that were broken into to host different pages.

Our first domain, one from Taiwan, has served a “Hacked by…” page which we normally see hackers put up to show that they’ve “owned” it.


Doing a quick search of the email address leads to other compromises done by this particular hacker. The image above the big, green text is actually a Flash media file hosted on a Russian server that is also known to serve malware.

We also found that a lot of pages hosted on this domain have spammy content revolving around Viagra, gambling, and student loans, among other things. For the complete list of spammy pages, here’s the scan result page from Unmasked Parasites.

Our second domain, a Ukrainian government site, served a Google Drive phishing page.

Looks phishy.

Both pages are now down.

Given the number of insecure .gov sites we have seen, users should be mindful of the risks they may encounter when visiting them. And just as users remain vigilant, so too should admins, by hardening site security and keeping pages free of spammy, phishy, and malicious content.

This was cross-posted from the Malwarebytes blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Is EMET Dead? https://www.infosecisland.com/blogview/23925-Is-EMET-Dead.html https://www.infosecisland.com/blogview/23925-Is-EMET-Dead.html Mon, 18 Aug 2014 12:41:00 -0500 By: Craig Young

Exploit mitigation techniques have come a long way. In the 90s, any stack overflow was trivial to exploit for arbitrary code execution, but over time the protections have expanded.

We now have DEP to prevent execution of user-writable data and ASLR to randomize the address space, making it harder to predict where a payload or a library will exist in memory. These protection tactics, of course, led to more creativity among exploit developers, including the completely brilliant idea of return-oriented programming (ROP), in which an attack payload chains together small sequences of instructions already present in predictable locations.

This exploitation technique turns basic computer functionality completely on its head by using the stack pointer the way a CPU would normally use the instruction pointer. These short instruction sequences, referred to as ‘ROP gadgets,’ are chained together to undermine modern exploit prevention techniques.

This has become the standard approach for attacking hardened systems and has led to a plethora of technology dedicated to recognizing ROP exploitation and preventing system compromise. Microsoft’s EMET, while not perfect, is a great example of this.

The idea behind preventing ROP exploitation is to enforce control flow integrity (CFI) policies. Daniel Lehmann and Ahmad-Reza Sadeghi gave an excellent overview of these CFI policies in their Black Hat 2014 talk, ‘The Beast is in Your Memory: Return-Oriented Programming Attacks Against Modern Control-Flow Integrity Protection Techniques.’ As they explained, the most popular methods for CFI involve return instruction restrictions (i.e. you must return to code that follows a CALL instruction) and behavioral analysis (i.e. there should not be too many short instruction sequences in a row, as implemented in kBouncer and ROPecker).

While these CFI policies go a long way towards recognizing unexpected execution flow, the research presented at Black Hat demonstrates that it is, in fact, possible for exploit developers to work around these exploit mitigations. Using kernel32.dll as an example, their research showed that it is possible to get enough gadgets for a Turing complete set of ROP instructions compliant with these main CFI policies.

Two techniques led to this achievement: first, locate all ROP gadgets that follow CALL instructions; then, introduce a ‘ROP NOP’ gadget. A ROP NOP gadget contains 20+ instructions and is inserted after every run of seven short ROP gadgets to defeat the behavioral analysis. The trick here is that the long sequences must be carefully selected to avoid altering registers used in subsequent gadgets.
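A toy Python model (our illustration, not the researchers' tooling) shows why a long NOP-like gadget defeats a run-length heuristic of this kind; the threshold and gadget lengths are assumptions:

```python
# Toy model of a kBouncer/ROPecker-style behavioral heuristic: flag
# execution when too many consecutive gadgets are "short". The threshold
# and instruction counts below are illustrative assumptions.
THRESHOLD = 8   # flag 8+ short gadgets in a row
SHORT = 20      # gadgets under 20 instructions count as short

def flags_chain(gadget_lengths):
    run = 0
    for n in gadget_lengths:
        run = run + 1 if n < SHORT else 0  # long gadget resets the run
        if run >= THRESHOLD:
            return True                    # chain looks like ROP
    return False

plain_chain = [3, 2, 4, 2, 3, 5, 2, 3, 4]                       # nine shorts
evasive = [3, 2, 4, 2, 3, 5, 2] + [25] + [3, 2, 4, 2, 3, 5, 2]  # NOP gadget
print(flags_chain(plain_chain), flags_chain(evasive))  # → True False
```

Inserting the 25-instruction "NOP" gadget after seven short ones keeps every run below the threshold, which is exactly the evasion the researchers described.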

Lehmann and Sadeghi created tools to analyze binaries and locate appropriate gadgets. Based on the small size of kernel32.dll, and the fact that it is included in essentially all Windows applications, the researchers surmise that almost any sufficiently complex codebase should contain a Turing-complete ROP gadget set.

As proof, they demonstrated an existing exploit being blocked by Microsoft’s EMET, and then used their rewritten exploit, which successfully spawned Windows Calculator (the ‘Hello World’ of exploit payloads). While this is not the first demonstration of EMET bypasses and certainly will not be the last, this research should not deter users from using tools like EMET, kbouncer, etc., as they do still raise the bar for exploit developers.

Fortunately, we have yet to start seeing EMET bypasses in the wild and Tripwire VERT encourages its use as a great free tool for hardening Windows based systems.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Getting in Our Own Way https://www.infosecisland.com/blogview/23923-Getting-in-Our-Own-Way.html https://www.infosecisland.com/blogview/23923-Getting-in-Our-Own-Way.html Mon, 18 Aug 2014 12:30:19 -0500 The security community has this widely-understood reputation for self-destruction. This is not to say that other communities of professionals don't have this issue, but I don't know if the negative impact potential is as great. Clearly I'm not an expert in all fields, so I'll just call this a hunch based on unscientific gut feeling.

What I do see, though, much like with the efforts of the "I am the Cavalry" movement, which has sent an open letter via Change.org to the auto industry, is resentment and dissent without much backing. In an industry that still has more questions than answers (and it gets worse every day), anyone who stands up with a possible effort toward a solution quickly becomes a lightning rod for naysayers. Why is that?

One of my colleagues, a veteran CISO, has a potential answer, which for the record I'm uncomfortable with. He surmises that the collective "we" (as in the security community) isn't actually interested in solving problems, because real solutions require "soft skills" like personality and business savvy in addition to technical acumen. It turns out that taking the time to understand a problem and attempt to solve it (or at least move the ball forward) is very hard. With the plethora of security problems in nearly everything that has electricity flowing to it, it's near-trivial to find bugs. Some of these bugs are severe; some are the same 'ol, same 'ol SQL injection and buffer overflows we identified over a decade ago but still haven't solved. So finding problems isn't rocket science; actually presenting real, workable solutions is the trick. This is just my humble opinion based on my time in the enterprise and in consulting.

I once worked for a CISO who told his team that he didn't want to hear about more problems until we had a proposed solution. Furthermore, I'm all for constructive criticism to help contribute to the solution - but don't attack the person or the proposed solution just to do it. Don't be that person.

I think it may have been Jeff Moss who I heard say it: "Put up or shut up." So give me your solution idea, or stop whining that things are broken.

This was cross-posted from the Following the Wh1t3 Rabbit blog.

Copyright 2010 Respective Author at Infosec Island]]>
2014 ICS Cyber Security Conference Agenda Update https://www.infosecisland.com/blogview/23922-2014-ICS-Cyber-Security-Conference-Agenda-Update.html https://www.infosecisland.com/blogview/23922-2014-ICS-Cyber-Security-Conference-Agenda-Update.html Fri, 15 Aug 2014 08:57:00 -0500 Our team is busy putting together the best ICS Cyber Security Conference to date. As always, the conference will address real world problems and discuss actual ICS cyber incidents, many of which have never been told before.

 

The 14th ICS Cyber Security Conference will have five major themes: Actual ICS cyber incidents; ICS cyber security standards; ICS cyber security solutions; ICS cyber security demonstrations; and ICS policy issues.

ICS Cyber Security Conference

The Conference focuses on what has REALLY happened and what is being done that affects the CONTROL SYSTEMS.

 

While we sift through the many great speaker submissions and build the agenda, we can share a bit about some select sessions that we have planned, including:

 

  • A case history of a very significant control system cyber incident and what has happened since. A broadcast storm resulted in the complete and simultaneous failure of two interconnected power plant units (over 200 DCS processors with complete loss of logic while the plants were at power). The discussion will provide details of the utility’s response to the incident, including improving the robustness of the upgraded processor firmware and hardening its network against overloads and broadcast storms.

  • A real case history of a recent cyber attack on an off-shore oil platform. The presentation will discuss how big data was used to identify a cyber attack that caused the tilting and resultant shutdown of the platform.

  • Details of a vulnerability that may actually be more significant than Stuxnet, as it affects any controller and may not be detectable. It is possible to sniff and inject packets into field device networks such as Modbus over RS-485, HART, Profibus, etc. Devices and applications residing on this control network can be vulnerable to specially crafted packets and instructions (the developers didn’t expect that packets could have a correct CRC and incorrect content). Moreover, some of the data collected at the field device level is passed to higher levels. This “feature” can be used to attack not only the lower layers of the network and/or industrial processes, but also corporate networks. Imagine hacking one small transmitter to gain remote command execution on the SAP system.

  • Aurora is still not well understood and affects every electric substation and substation customer. This presentation will include a detailed discussion of what Aurora is, why it is a gap in protection, and what it can affect. It will also discuss the first sets of Aurora hardware mitigation data from two utilities.

  • There is minimal guidance on how to identify the potential consequence of cyber vulnerability disclosures. An end-user control system cyber security expert will provide a general methodology for determining the potential consequence of vulnerabilities: that is, what you have to do, and when.

  • A utility has been acting as a test bed for evaluating control system cyber security solutions for reliability. The utility is monitoring its control system network and using this information to improve reliability and reduce maintenance costs. The utility will provide a status of the efforts, including the close integration of IT, OT, and Operations.

  • Recent studies such as the Unisys Ponemon report have attempted to characterize the state of critical infrastructure security without significant input from the ICS community; consequently, the results and conclusions may be suspect. This presentation and associated survey will be the start of an assessment of the state of ICS cyber security based on input from the ICS community.

  • Cyber insurance is becoming an important consideration in IT. However, providing cyber insurance to the ICS community, where business continuity and personal safety are critical, is a more difficult problem. A major international insurance carrier will provide its perspectives on the carrot-and-stick approach necessary to provide cyber insurance for ICS operators.

 

As with previous ICS Cyber Security Conferences, the agenda will not be complete until shortly before the conference to accommodate the most current issues and findings.

 

Much More to Come! This Event Sold out Last Year,  Register Now and Hold Your Spot.

Copyright 2010 Respective Author at Infosec Island]]>
Requirement 10.6.2 Clarification https://www.infosecisland.com/blogview/23921-Requirement-1062-Clarification.html https://www.infosecisland.com/blogview/23921-Requirement-1062-Clarification.html Thu, 14 Aug 2014 14:00:50 -0500 As a refresher, requirement 10.6.2 states:

“Review logs of all other system components periodically based on the organization’s policies and risk management strategy, as determined by the organization’s annual risk assessment.”

The argument in PCI circles is over the definition of “all other systems.” Some of us believed it meant systems other than those in scope; others believed it referred only to in-scope systems, such as user workstations. As a result, I asked the PCI SSC to clarify this requirement, and this is the response I got back.

“In PCI DSS v2.0, logs for all in-scope systems were required to be reviewed daily. However it was recognized that for larger or more complex environments, there could be lower risk systems that were in scope for PCI DSS that could warrant less frequent log reviews. As such, PCI DSS v3.0 defines a number of events and system types that require daily log reviews, and allows the organization to determine the log review frequency for all other in-scope events and systems that do not fall into those categories.

For some environments, such as those designed specifically for the purposes of PCI DSS, then it is possible that all in-scope systems fall under the system categories defined in Requirement 10.6.1, meaning that daily log reviews are required for all in-scope systems. In other environments, there may be many different types of system that are considered in-scope, but which are not critical systems and neither store, process or transmit CHD nor provide security services to the CDE. Some possible examples could be stock- control or inventory-control systems, print servers (assuming there is no printing of CHD) or certain types of workstations. For these events or systems, the entity, as part of its annual risk assessment process, is expected to define the frequency for reviews based on the risk to its specific environment.

The intent of this update is not to apply PCI DSS Requirements to out-of-scope systems. We realize that the current wording is causing confusion and will address this in the next revision.”

So there we have it.  Not the first time my interpretation was wrong.  The requirement means that in-scope systems which, based on an assessment of risk, are at less risk of compromise can have a reduced frequency of log reviews.

But that means you need an accurate risk assessment to support your argument.  So those of you who have not explicitly assessed the risk of your category 2 systems will have to break them out to support a reduced log review frequency.
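As a hedged illustration of such a risk-based schedule (the category names and frequencies here are made up for the example, not taken from the standard), the policy the SSC describes could be encoded like this:

```python
# Illustrative sketch: systems matching the 10.6.1 categories always get
# daily log review; everything else gets a frequency driven by the annual
# risk assessment. Category names and frequencies are assumptions.
DAILY_CATEGORIES = {'cde', 'security_service', 'chd_storage'}

def review_frequency(category, risk_rating):
    if category in DAILY_CATEGORIES:
        return 'daily'  # Requirement 10.6.1: daily review is mandatory
    # "All other system components": frequency set by the risk assessment
    return {'high': 'daily', 'medium': 'weekly'}.get(risk_rating, 'monthly')

print(review_frequency('cde', 'low'))           # → daily
print(review_frequency('print_server', 'low'))  # → monthly
```

The point of the sketch is simply that the reduced frequency hangs entirely off the risk rating, which is why the risk assessment has to be documented and defensible.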

This was cross-posted from the PCI Guru blog.

Copyright 2010 Respective Author at Infosec Island]]>