An Open Letter to Executives Thu, 17 Apr 2014 10:38:29 -0500 I apologize for not posting anything recently, but I have been busy dealing with my taxes, QSA re-certification and clients.  Over the years that has involved dealing with people that I would like to think know better.  But based on my interactions with them, it is painfully obvious that they do not.  As a result, I have decided to write this letter to all of you in hopes that you get a clue as to how your short-sightedness is going to ultimately sell your organization “down the river”.  I should have published this letter a long time ago, as this is not a new issue.


 Dear Executive:

As I sat in the meeting, I watched your body language as I delivered our report on how well your organization is secured.  Based on my observations, it is painfully obvious that you do not have a clue as to the importance of security, and that you really do not care.  Since I want my bill paid, I was polite and did not take you to task as you deserve to be taken.

So, let me put this into blunt language that you might better understand.

First and foremost, as an executive of the organization, you have a fiduciary responsibility to protect the assets of the organization.  Based on our findings, you are not protecting those assets; you are not even close.  I realize that all of this technology baffles you, but it is in that technology that your organization’s lifeblood of intellectual property resides: orders, formulas, blueprints, specifications, customer lists and other key or sensitive information.  Without that intellectual property, your organization does not exist.  Yet as we went through all of our findings, you argued time and again about what it will take in time, money and/or manpower to appropriately secure your organization.  While I appreciate your concerns, this is what it takes to secure an organization that relies heavily on technology.

Second, security is not perfect.  I am not exactly sure where you got the impression that security is perfect, but that is wrong and you need to adjust your thinking.  Security is all about managing and minimizing risk.  As an executive, that is one of your primary job functions.  Yet your three/five/seven/ten year old risk assessment seems to indicate that identifying and managing risks are not a priority.  As if that were not enough, we pointed out a number of areas where risk exists but there is no evidence that those risks are being managed.  The recommendations we provided offered a number of viable solutions; however, they all require changes to the organization, which seemed to be your biggest reason as to why our recommendations could not be implemented.

Third, doing the bare minimum is not going to secure your organization.  While we were talking about the PCI DSS, any security framework is merely the ante into the security game.  If you truly want to be secure, it will take significant time and a certain amount of money to make that happen.  Buying security appliances and other “widgets” can only do so much.  One of the biggest findings in our report is that your existing tools are not being used properly, and warnings and alerts are being written off as “false positives” without any investigation.  With the level of sophistication of attacks rising exponentially, those tools, based on our assessment, are doing very little to protect your organization.  Another area of great concern is that your employees are, for the most part, unable to recognize current scams and threats.  As you correctly pointed out, security awareness training is not going to stop every attack, but what you missed is that such training should significantly reduce such attacks’ effectiveness.

Fourth, you need to read the definition of “compliance”.  As defined in Merriam-Webster’s dictionary, compliance means “conformity in fulfilling official requirements”.  As our findings pointed out, you are not in compliance with a number of key “official requirements” defined by the PCI DSS.  Without adequate “official requirements” such as policies, standards and procedures, how do your employees know their responsibilities and what you are holding them accountable for?  Based on our discussion of findings, you apparently are of the opinion that your employees should just intuitively know their responsibilities and accountabilities.  “Intuitively obvious” may apply to the operation of an Apple iPod, as stated by Steve Jobs at its introduction, but that phrase does not apply to the running of an organization.

Finally, a compliance program is not all about checking a box.  I know most auditors/assessors seem to operate that way and most executives want it to work that way, but a proper compliance program should never, ever work that way.  Compliance means looking at all of the organization’s protective, detective and corrective controls (the control triad) and determining whether they are: (1) functioning properly, (2) designed properly, (3) minimizing the risks and (4) in need of any new controls or changes/enhancements to existing controls to make them function more accurately or efficiently.  While you agreed with our findings regarding the control issues we identified, your argumentative behavior seems to indicate otherwise.

I wish you and your organization the best of luck because it seems that your idea of risk management is to rely on luck.  I would like to tell you that you will succeed with that approach, however the statistics say otherwise.


Your Frustrated Assessor

This was cross-posted from the PCI Guru blog. 

Copyright 2010 Respective Author at Infosec Island
FAQs Concerning the Legal Implications of the Heartbleed Vulnerability Wed, 16 Apr 2014 15:01:37 -0500 Contributors to this post include: Scott Koller, David Navetta, Mark Paulding and Boris Segalis.

By now, most of the world is aware of the massive security vulnerability known as Heartbleed (it even comes with a slick logo and its own website created by the organization that discovered the vulnerability).  According to reports, this vulnerability has been present for two years on approximately two-thirds of the servers on the Internet (those that utilize OpenSSL version numbers 1.0.1 through 1.0.1f and 1.0.2-beta1).  Mashable is keeping a list of some prominent affected sites, including the status of their remediation efforts.  As discussed further below, this vulnerability, if exploited, could lead to the compromise of authentication credentials (e.g. usernames, passwords, encryption keys) and, in some cases, the unauthorized access or acquisition of information contained in communications sent over the Internet from, or stored on, compromised sites.

In short, the security of millions of organizations is likely affected by Heartbleed in three ways:

  • Communications to the Organization’s Servers.  The communications to and from the systems of organizations that utilize certain versions of OpenSSL may be at risk of interception.
  • Communications by an Organization’s Employees to Third Party Organizations Affected by Heartbleed.  The authentication credentials of personnel and information sent by an organization’s employees to business-related websites subject to Heartbleed (e.g. Dropbox) may be at risk.  If an employee logged into such a site, his or her password could have been compromised, and hackers could also have obtained access to information sent by the employees over encrypted SSL channels and to information on the business site itself.
  • Communications by Employees to Organizations Affected by Heartbleed During their Personal Use of the Internet.  An employee visiting a website affected by Heartbleed (e.g. Google and thousands of other common consumer sites) during personal Internet use (at home, in the office or offsite) could have had his or her username and password compromised.  If that employee uses the same username and password to log into his employer’s systems, those systems could also be at risk.

In addition to the serious security concerns implicated by Heartbleed, there may be legal consequences associated with this vulnerability, especially with respect to the potential unauthorized access or acquisition of personal information.  At this juncture it is imperative that affected organizations remediate the Heartbleed vulnerability, communicate with their customers, employees and other system users, and consider potential legal risks and obligations associated with Heartbleed.

In this blogpost we present some key FAQs concerning the security and resulting legal implications of Heartbleed.  Specifically, we address remediation efforts necessary to reduce security and legal risk associated with Heartbleed, password reset and communications to affected individuals, the applicability of breach notification laws, and potential investigation obligations under HIPAA’s Security Rule.


What type of information may have been exposed?  Does this breach expose “personal information” under breach notification laws?

The Heartbleed vulnerability does not target a specific file or type of information. Instead, the information exposed is whatever was in the computer’s active memory (not hard drive storage space) at the time of the attack. For every request made by the attacker, the Heartbleed-vulnerable computer will respond with the contents of up to 64KB of memory. Because of this limitation the primary targets of an attacker exploiting Heartbleed are not files or documents. Instead, the real targets of this attack are usernames, passwords, and encryption keys because they are regularly stored in active memory, small enough to fit within 64KB of data, and can be leveraged to access even more information.
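The flaw behind that over-read is easy to model. The sketch below is an illustrative simplification in Python, not OpenSSL's actual C implementation; the simulated “memory” contents and function names are invented for the example:

```python
# Simplified model of the Heartbleed over-read (illustration only).
# Process "memory": a short heartbeat payload followed by unrelated
# secrets that happen to sit nearby in active memory.
MEMORY = b"bird" + b" user=alice pass=hunter2 key=MIIB..."

def heartbeat(claimed_len: int, payload: bytes, memory: bytes = MEMORY) -> bytes:
    # Vulnerable behavior: echo back claimed_len bytes, trusting the
    # attacker-supplied length field instead of the payload's real size.
    return memory[:claimed_len]

def heartbeat_fixed(claimed_len: int, payload: bytes, memory: bytes = MEMORY) -> bytes:
    # The fix amounts to a bounds check: silently discard any request
    # whose claimed length exceeds the payload actually received.
    if claimed_len > len(payload):
        return b""
    return memory[:claimed_len]
```

An honest request (`heartbeat(4, b"bird")`) returns just the payload; a malicious one (`heartbeat(64, b"bird")`) leaks whatever secrets follow it in memory, which is exactly why credentials and keys were the prize.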

Under California’s breach notice law, “personal information” is defined to include the following:

 A user name or email address, in combination with a password or security question and answer that would permit access to an online account.

As such, the exploitation of Heartbleed that captures this information could trigger breach notification in California. Moreover, if authentication credentials compromised as a result of Heartbleed are further leveraged, highly sensitive information on an organization’s systems or affected third party systems may be at risk, including trade secrets, intellectual property and personal information (essentially anything accessible using the authentication credentials at issue).  Therefore, if Heartbleed is exploited, organizations that regularly transmit or receive personal information may be at risk, and may have to notify affected individuals.

Is an attack using the Heartbleed vulnerability detectable?

Now that security professionals know what to look for, it is possible to detect and prevent an attack that uses the Heartbleed vulnerability. However, it is substantially more difficult to determine if an attack occurred in the past (and the vulnerability has been around for 2 years).

A Heartbleed attack leaves no trace on the target computer, and signs of an attack are normally not logged.  Detection is possible if you previously captured all incoming SSL traffic or maintained very detailed server logs of TLS/SSL “handshakes.”  However, most sites do not record their SSL traffic because the volume of information can be prohibitively expensive to maintain.  Even those sites that record their SSL traffic or maintain detailed server logs only do so for a short period of time due to the amount of data generated.  As a result, most organizations will be able to confirm the presence of the Heartbleed vulnerability, but will be unable to determine if it was actually used.
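For the minority of sites that did retain raw traffic captures, the tell-tale indicator is a heartbeat record whose declared payload length exceeds the data actually carried. A minimal scanner over a raw TLS record stream might look like the following sketch (record and message layouts per RFC 6520; the function name, and the idea of feeding it bytes pulled from a capture, are our own assumptions):

```python
import struct

HEARTBEAT = 24  # TLS record content type for heartbeat messages (RFC 6520)

def suspicious_heartbeats(stream: bytes):
    """Yield (claimed_len, actual_len) for each heartbeat request whose
    claimed payload length exceeds the data actually sent -- the
    signature of a Heartbleed probe."""
    offset = 0
    while offset + 5 <= len(stream):
        # TLS record header: content type (1 byte), version (2), length (2).
        content_type, _version, record_len = struct.unpack_from("!BHH", stream, offset)
        record = stream[offset + 5 : offset + 5 + record_len]
        offset += 5 + record_len
        if content_type != HEARTBEAT or len(record) < 3:
            continue
        # Heartbeat message: type (1 = request), payload_length (2), payload, padding.
        msg_type, claimed_len = struct.unpack_from("!BH", record, 0)
        actual_len = len(record) - 3  # bytes available after the type + length fields
        if msg_type == 1 and claimed_len > actual_len:
            yield (claimed_len, actual_len)
```

Note that this only works against plaintext or already-decrypted captures of heartbeat records; encrypted records would first have to be decrypted with the session keys.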

What should be done in response to the Heartbleed vulnerability?

At this point, Heartbleed is a commonly known vulnerability (whether it was known to attackers prior to its recent public disclosure is unclear).  Hackers, fraudsters and other nefarious forces, if they were not already doing so, are likely to launch attacks.  The longer an organization goes without remediating the vulnerability, the greater the likelihood of a security breach and resulting legal liability and risk.  Organizations should move swiftly to address Heartbleed and reduce their risk, and should consider the following actions:

  • Identify Vulnerable Systems (if any). The first step is to assess whether you have any vulnerable systems. Typical desktop or laptop computers are not affected by this exploit. The Heartbleed vulnerability is present on computers running OpenSSL version numbers 1.0.1 through 1.0.1f and 1.0.2-beta1. It also affects certain network hardware such as routers, switches, and firewalls. Be sure to check with the manufacturer to see if you have any vulnerable networking equipment or check the list maintained by Mashable by clicking here.  Notably, many organizations (about one-third of Internet sites) do not use OpenSSL.  While they may not need to take action to directly address the vulnerability, they may still be at risk (as described further below).
  • Test Servers for the Heartbleed Vulnerability (time and resource permitting).  Next, the most common response is to immediately patch your servers.  However, if you have the time and resources available, you should test your servers for the vulnerability in the pre-patched state. Not every implementation of OpenSSL is susceptible and the ability to confirm if your systems were vulnerable in the first place will be extremely valuable.
  • Patch Affected Servers.  Upgrade to the latest version of OpenSSL and install the latest patch or firmware on all affected computers and hardware.  OpenSSL has made the patch available here (the upgraded release is OpenSSL 1.0.1g).  After you have patched or upgraded the affected equipment, be sure to restart and re-test the equipment to ensure the vulnerability is closed.
  • Revoke and Regenerate Existing SSL Certificates and Encryption Keys. Do not regenerate these certificates until after you have upgraded to the latest version of OpenSSL or you will jeopardize the security of the new certificate all over again.
  • Password Reset.  Finally, you will need to decide whether to notify individuals to reset their passwords (e.g. employees and customers) or reset their passwords yourself. Here, the results of your pre-patch testing will come into play and should help you make an informed decision. The key is to take a close look at the computers that were affected and the potential information that was exposed. A corporate mail server will store substantially different information than an online marketplace. The type of information accessible by the affected computer should help guide your decision. Even if your systems were not affected, your customers and employees may expect a response given the widespread publicity surrounding this exploit.
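As a rough sketch of the first step above, checking a reported version against the advisory's affected list can be automated. The helper below is an illustration only (the function name and parsing are our own); note the important real-world caveat that many Linux distributions backport security fixes without changing the version string, so a match means “check with your vendor,” not “definitely vulnerable”:

```python
# OpenSSL versions affected by Heartbleed (CVE-2014-0160), per the
# advisory: 1.0.1 through 1.0.1f, plus 1.0.2-beta1; 1.0.1g carries the fix.
AFFECTED = {"1.0.1" + suffix for suffix in ("", "a", "b", "c", "d", "e", "f")}
AFFECTED.add("1.0.2-beta1")

def is_affected(version_output: str) -> bool:
    """Check a bare version string, or the banner printed by `openssl version`
    (e.g. "OpenSSL 1.0.1f 6 Jan 2014"), against the affected list."""
    tokens = version_output.split()
    if not tokens:
        return False
    # Accept either the full banner or just the version token itself.
    version = tokens[1] if tokens[0] == "OpenSSL" and len(tokens) > 1 else tokens[0]
    return version in AFFECTED
```

For network appliances that embed OpenSSL, the banner is often unavailable, which is why the checklist above points back to the manufacturer as the authoritative source.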

Our systems were not affected by the Heartbleed vulnerability.  Do I need to do anything?

Unfortunately, because of the widespread nature of this vulnerability, even organizations unaffected by this exploit should consider taking steps to address potential security risk arising out of Heartbleed.

If an organization’s employees have dealings with a third party organization subject to the Heartbleed vulnerability, the credentials those employees use to log into the third party’s systems may have been compromised.  If that third party service stores sensitive information or personal information related to the organization, it could be at risk.  Moreover, if an employee uses the same password on the third party site to log into the organization’s system, the organization’s system and information could be at risk.

Even personal Internet use (at home, at work or otherwise) could expose an organization’s systems and information.  Again, if an employee uses the same login credentials for his or her personal Internet use, then the capture of those credentials could allow access to the organization’s systems and information.

Even if the organization was not directly vulnerable to Heartbleed, it should consider requiring its employees to reset their authentication credentials.  In addition, organizations should direct their employees not to use the same password at work that they use on their personal sites (regardless of Heartbleed, this is a good idea anyway).

When should I tell my employees, customers and others to change their passwords?

Individuals should not change their passwords until after the relevant Heartbleed-affected site has been patched.  If an individual changes his or her password before the OpenSSL patch is operational, then the new password will be at risk of compromise.  Since so many websites and servers have been impacted by Heartbleed, this timing issue poses challenges.  For affected third party sites, one must attempt to ascertain whether the site has implemented the patch.  Some organizations are attempting to track the dates when higher profile websites have implemented the OpenSSL patch.

What should I tell my employees, customers and others about resetting their passwords?

To cover all the scenarios outlined above, you should consider communicating the following to your employees, customers and other affected individuals:

  • Reset Password to Organization’s Systems.  After the organization has implemented the OpenSSL patch it should tell its affected employees and customers to change their passwords (or force a password reset of such systems).
  • Reset Passwords to Third Party Affected Systems used by the Organization.  To the extent employees accessed Heartbleed-affected third party sites used by the organization, you should ask your employees to change their passwords to those systems.  You may also have to inform your employees concerning the timing issues discussed above – they should be told to change passwords only after the affected third party site has implemented the OpenSSL patch.
  • Employees that Use the Same Password for Work and Personal Use Should Change their Passwords.  Even if the organization’s own system was not affected by Heartbleed, and even if the organization does not utilize third party sites affected by Heartbleed, companies should consider asking employees and customers to change their organizational password if they use the same password for personal reasons.

Does the Heartbleed vulnerability trigger U.S. breach notification laws?

Breach notification laws define “security breach” in different ways.  For example, under California’s breach notice law, a “breach of the security of the system” is defined as:

unauthorized acquisition of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by the person or business.

Other states also define a security breach in terms of “unauthorized access” (as opposed to acquisition), including Connecticut’s law which defines “breach of security” as:

unauthorized access to or unauthorized acquisition of electronic files, media, databases or computerized data containing personal information when access to the personal information has not been secured by encryption or by any other method or technology that renders the personal information unreadable or unusable.

Finally, under HIPAA HITECH, “breach” is defined as:

the acquisition, access, use, or disclosure of protected health information in a manner not permitted under subpart E of this part [the HIPAA Privacy Rule] which compromises the security or privacy of the protected health information.

45 CFR §164.402.  Based on the foregoing, standing alone, the fact that an organization was vulnerable to the Heartbleed exploit does not trigger U.S. breach notification laws.  However, if an entity discovers evidence that Heartbleed was exploited in a manner leading to unauthorized access or acquisition of “personal information,” notification may be required under State or Federal breach notification laws.

Does the HIPAA Security Rule Require Investigations to Determine Whether the Heartbleed Vulnerability Was Exploited?

As noted above, the Heartbleed vulnerability does not, in and of itself, constitute unauthorized access or acquisition of personal information subject to most state and federal data breach notification requirements, including the HIPAA Data Breach Notification Rule.  However, the HIPAA Security Rule contains a number of provisions that require covered entities and business associates to maintain procedures to monitor system activity for potential security incidents and investigate any such potential security incidents.

The HIPAA Security Rule requires covered entities and business associates to “regularly review records of information system activity, such as audit logs, access records, and security incident tracking reports.”  45 C.F.R. § 164.306(a)(1)(ii)(D).  HHS guidance materials further state that this specification “should also promote continual awareness of any information system activity that could suggest a security incident.”  CMS, HIPAA Security Series Vol. 2 Security Standards:  Administrative Safeguards, p. 7 (March 2007), available here.

HHS has not opined on any obligation to review historical system activity records to detect past attempts to exploit the newly discovered vulnerabilities.  Nonetheless, based on the Security Rule and associated guidance, covered entities and business associates should consider whether it is appropriate to review the available historical web server logs to identify attempts (if any) to exploit the Heartbleed vulnerability.

Nevertheless, such reviews may only be necessary if the covered entity or business associate has maintained the types of web server logs that are likely to record indicators of Heartbleed exploitation (discussed in greater detail in the FAQs above).  The HIPAA Security Rule requires covered entities and business associates to create and maintain appropriate records of system activity.  See 45 C.F.R. 164.312(b).  However, covered entities and business associates have significant discretion to create and maintain activity records based upon a formal assessment of their security risks.  Given the volume and historical insignificance of the relevant log data, and the widespread trust of OpenSSL among technologists, it would have been reasonable for covered entities and business associates not to persistently retain web server log data regarding communications “handshakes.”


Overall, in most cases, the Heartbleed vulnerability and associated security and legal risk is manageable as long as organizations take swift action to remediate their risk. That said, it is possible that some organizations have been subject to Heartbleed attacks, and more likely that hackers and other criminal elements will seek to exploit Heartbleed going forward. InfoLawGroup is currently answering questions for its clients and helping with communication strategy and management of this situation, and we are available to answer additional questions and provide support as needed.

This was cross-posted from the Information Law Group blog.

Copyright 2010 Respective Author at Infosec Island
Security Pros Need Better Security Awareness Training Options Wed, 16 Apr 2014 11:59:36 -0500 By: David Meltzer 

One of the basic security measures that every company should be taking is giving security awareness training to its employees. This is part of Critical Security Control 9. CSC 9-3 says:

“Implement an online security awareness program that (1) focuses only on the methods commonly used in intrusions that can be blocked through individual action, (2) is delivered in short online modules convenient for employees (3) is updated frequently (at least annually) to represent the latest attack techniques, (4) is mandated for completion by all employees at least annually, and (5) is reliably monitored for employee completion.”

So I wasn’t the least bit surprised a few years ago when my team of security researchers was asked to take security awareness training. But, it did seem funny that this group, the most security aware people I know, were taking the same training as the HR team.

Not that I think they minded that much, as the online training quickly became a game of how could you hack the online training to give you a passing score without having to sit through an hour of videos (hint: although you may not be able to comprehend 15 videos playing simultaneously on your screen, it does substantially reduce how long it takes to watch 15 videos).

This does seem to be a common issue, though, and so I think it is valuable to give some options to the security pros in your organization. There is an opportunity to turn what is for them an annoying waste of an hour into something productive and valuable to the business. Here are a few ideas for implementing this control:

Give an “Advanced Option” as Part of Awareness Training

Anyone who really knows their stuff with security would probably opt to learn something useful instead of rehashing what they already know. I recently made a technical report on how large organizations implement vulnerability management required reading for one of my teams.

That probably took about as much time to read as an online awareness training class would take, but for this group I’d say it is far more valuable to educate them on one particularly relevant area of security, rather than cover the basics again. An advanced option could come from the same training system as your basic class, or it could be as simple as an instruction to watch a webinar or read a new report on an area of security.

Offer a Security Project in Lieu of Training

For the security pros, your organization will probably get more out of them if they do something for security instead of taking a class. We do brown bag trainings at our office, so if someone is willing to spend an hour teaching others about an area of security they are an expert in, why not let that fulfill their awareness duty for the year?

Or how about an assignment to design some posters reminding people of key security basics and put them up? If a security pro is willing to spend the time to do something to increase awareness for the organization, let them do it!

Turn Security Awareness into Continuing Security Education

The 20 CSC suggest reiterating training with updates annually, but many organizations have a tendency to repeat the exact same regimen every year. What about creating a program in your organization that encourages and offers continuing education around security for your employees, instead of simply repeating the same training options?

Those with professional certifications in security, like a CISSP, know that Continuing Professional Education credits are required to stay certified. That does not mean they need to go back over the original certification materials.

Whether you have the budget to offer external training opportunities to employees, or spend time creating a simple internal system of training credits, giving security pros the option to be exempt from awareness training as long as they have completed some security education in the previous time period makes sense.

Some small tweaks like this can make your security pros a little more appreciative of their company’s own policies, while benefiting the company at the same time – a cheap win-win.

So the next time that reminder goes out to your employees that they need to complete the annual security awareness training, spend a few minutes thinking about how you can make your pros happy while keeping them educated instead.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Would a Proprietary OpenSSL Have Been More Secure than Open Source? Wed, 16 Apr 2014 11:51:17 -0500 The OpenSSL Heartbleed vulnerability has resurrected the age-old debate of whether or not open source code is more or less secure than proprietary code. Before putting on your open source or proprietary jerseys and launching into this (frankly not-very-productive) fight, first consider a few things.

Software companies with proprietary code give their compelling reasons for why it is more secure. We must take their word for the security of their code unless they release copies of objective third-party audits, and/or security incidents occur with their software that reveal its inadequacies. There have been many such incidents with, and few released audits of, proprietary software.

Those who think open source is more secure point to the ability to pick apart the security of open source code by a much wider range of folks, with many differing levels of capabilities, than would be available for proprietary code, which would logically seem to lead to many more eyes finding more security flaws more quickly. By that logic, someone should have found the OpenSSL flaw prior to its production launch. However, even with the eyes of the world *available* to find the flaws, a simple error went unnoticed, at least by the good guys and gals, for over two years.

What is important to consider, beyond the fact that the flawed code was open source, is that OpenSSL is arguably the most widely used security software on the Internet, the de facto standard for securing network communications. Nor is its use strictly limited to the Internet, since it also runs on routers and other networked devices that may sit within non-public networks. It does not really matter at this point whether it is open source or proprietary; it is being used by reportedly hundreds of millions of devices and servers. The obligation, and importance, of effective security controls is the same regardless.

The important consideration is this: why wasn’t this very simple error (and if you’ve ever programmed, you know this was one of the most basic Programming 101 errors you could ever make; in general terms, not doing a length check) caught before the code was widely distributed within production?

1) Why didn’t the programmer do a proper code review?

Dr. Seggelmann, who admitted to the error, had worked on previous versions of the code without errors. He has also caught security flaws in previous versions and fixed them prior to their production release. He is a brilliant man whose work had been almost spotless in demonstrating coding due diligence, until this very simple, very elementary coding error resulted in possibly the most far-reaching flaw, with the biggest impact (one we will never truly know, since exploits leave no indicators), that we have ever seen. This was not some lazy, sloppy programmer. This was an expert who made a simple, but hugely significant, coding mistake.

LESSON: Anyone, anywhere, working on Open Source or proprietary code, could have made this mistake. Humans are not perfect; none of us. We will all make mistakes. (We hope that when we do, it does not impact hundreds of millions of systems, and people, worldwide.)

2) Why didn’t the quality check folks catch the error?

The team that maintains the code is within the Internet Engineering Task Force (IETF), whose members volunteer their time, generally with the goal of creating code that is better and more secure to use on the Internet. They are supposed to review and test code to approve it prior to releasing it to production, and are supposed to have processes in place to catch this type of very simple error. So, why didn’t they catch it? Were they looking for errors in the more technical, and complex, areas of the code without thoroughly reviewing the more mundane areas? Did they only do a spot check of the code (which I know from my audits many programmers are now doing to be more “agile” and quick with their reviews)? Did they not go into depth because they knew Dr. Seggelmann was really smart, and had always done a great job, and assumed that with his demonstrated past diligence he would not have made an error? Perhaps a combination of all of these?

LESSON: Quality assurance and security reviews must follow consistent procedures, with the reviewers applying more scrutiny, not less, to every line of code. Assumptions (which may or may not have played a role with Heartbleed) cannot be made based upon the programmer’s history of coding excellence. Reviews must not be cut short, or reduced to a spot check, in order to save time.

3) With the eyes of the world available for reviewing, why did it take 2 years to be discovered?

Even with a review team within the IETF, it would seem that code used by hundreds of millions should fall immediately under the scrutiny of the freelance/volunteer/(whatever you want to call them) reviewers. Aren’t there teams waiting to pick apart open source code that is used by hundreds of millions? This is one of the arguments for open source: that the coding and software experts in the public will discover flaws that got by the IETF programmers right away. (With proprietary code, the public has no ability to scrutinize the source — one of the common arguments against proprietary code.) Wouldn’t the U.S. CERT be interested in keeping an eye on such code? Wouldn’t various other groups? Isn’t it incredible that none of the good guys discovered this flaw and reported it right away? It took two years! Really incredible.

LESSON: Even if the world is *able* to review and pick apart code, it does not mean anyone actually will. Independent review groups need to commit to reviewing security code when it is released.

Some Other Basic Lessons Re-Learned

Some of the hard lessons that Heartbleed forces us to re-learn, regardless of whether you use open source or proprietary code:

Programmers must follow consistent and repeatable processes for creating code, and building security controls into the code.

The change control/approval process must include an independent review from others within the team, and/or from outside, who will consistently follow a review and testing process that will catch security errors, especially ones as basic as the Heartbleed flaw.

When open source code is put to widely-used purposes, there should be outside groups that jump right on the code as soon as it goes public to pick it apart, looking for everything from the most complex flaws down to the most basic.

Proprietary code also needs to have a reliable, independent, security review process.

This was cross-posted from the Privacy Professor blog. 

Copyright 2010 Respective Author at Infosec Island
Is User Experience Part of Your Security Plan? Tue, 15 Apr 2014 12:01:27 -0500 By: Dwayne Melancon

At the 2014 SourceBoston conference, I sat in on a presentation by a cryptographer named Justin Troutman. In this presentation, Justin talked very little about the technical aspects of cryptography and very much about the user experience.

I think he’s onto something.

One common theme across the organizations I work with is that everyone wants to do the right thing. Unfortunately, that often means onerous and complicated security policies, which translate into awkward steps that users must take. As we all know, the more complicated your security policy is, the more likely people will find creative ways to get around it.

The speaker pointed out that in cryptography, a lot of the algorithms have been time-proven. In other words, the cryptography algorithm itself is generally not the issue when security problems arise. Rather, the issues arise from flawed implementations, such as the OpenSSL vulnerability reported this week, or from overly complex instantiation of the crypto in a product.

For the second item, he cited the confusing and overly technical choices of cryptography used in products like TrueCrypt — see the picture below: how is the average user going to choose from these crypto algorithms, much less know which hash to choose in the next field?


According to Troutman, all of these algorithms are likely secure enough to meet your needs, and these choices focus users’ attention (and perhaps developers’ attention) in the wrong place.

Simple is probably better than spoiled for choice. After all, making it easy for more users to apply good-enough security in a much more streamlined manner is probably a good way to increase the overall security of your data and systems, right?
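This "good defaults over a menu of algorithms" philosophy is visible in modern APIs. As a small illustration (my example, using Python's standard library, not anything from the talk), one call produces a TLS configuration with verification and cipher choices already made, with no algorithm quiz for the user:

```python
import ssl

# One call gives a context with secure, library-chosen defaults: protocol
# versions, cipher suites, certificate verification, hostname checking.
# The user never has to pick among AES, Serpent, or Twofish.
context = ssl.create_default_context()

print(context.verify_mode)     # certificates are verified by default
print(context.check_hostname)  # hostnames are checked by default
```

The defaults can still be overridden by experts, but the common path asks nothing of the average user, which is exactly the streamlining Troutman argues for.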


Think about the policies that you impose in your organization. Are any of them overly complex or onerous? A good indicator may be the frequency with which you have to discipline, chastise, explain, or otherwise deal with them amongst your user population. If you have to pay too much attention to the minutiae around your policies to keep them effective, you may be suffering from a bad user experience.

I don’t think there’s a silver bullet for solving this problem, but I do recommend involving user focus groups or feedback panels in the development and implementation of your security policies. Explain what you’re trying to achieve with the policies in terms the users can understand, as well as what you’re proposing in the way of implementation details and enforcement.

An honest, open dialogue between you and the average user could come up with some workable solutions to achieve your goals while making the policies less invasive for (or at least better adopted by) your user population.

This was cross-posted from Tripwire's The State of Security blog.

SIEM Webinar Questions – Answered Tue, 15 Apr 2014 11:43:18 -0500 Last year, I did this great SIEM webinar on “SIEM Architecture and Operational Processes” [free access to recording! No Gartner subscription required] and received a lot of excellent questions. This is the forgotten post with said questions.

The webinar abstract read: “Security information and event management (SIEM) is a key technology that provides security visibility, but it suffers from challenges with operational deployments. This presentation will reveal a guidance framework that offers a structured approach for architecting and running an SIEM deployment at a large enterprise or evolving a stalled deployment.”

Before the attendee Q&A, I asked one question myself:

Q: Are you satisfied with your SIEM deployment?


You make your own conclusions from that one.

And here is the attendee Q&A:

Q: Do you have tips for starting log management (for SIEM) in a heavily outsourced environment, where most servers, routers, firewalls, etc. are managed by 3rd parties?

A: Frankly, the central problem in such environments is about making changes to systems. Can you change that /etc/syslog.conf or that registry setting when needed, quickly and efficiently? Beyond that, I’ve seen outsourced IT with good log monitoring and traditional IT with bad log monitoring, so success is not chained to the delivery model. If anything, I’d watch the outsourced environment more closely since “if you cannot control what they do, at least monitor them.”
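As a concrete example of how small such a change can be (and why access to make it still matters), here is a hypothetical rsyslog forwarding fragment; the collector hostname and port are placeholders, not a real deployment:

```
# /etc/rsyslog.d/50-siem.conf -- illustrative only; hostname and port are
# placeholders for your SIEM collector.
# "@@" forwards over TCP; a single "@" would use UDP.
*.*  @@siem-collector.example.com:514
```

One line of configuration, but in an outsourced environment you may need a change ticket and a third party's cooperation to deploy it, which is exactly the central problem described above.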


Q: To what degree do you think it is realistic, and to what degree useful, to collect and analyze logs from Windows workstations (endpoints), rather than just servers?

A: It used to be rare, and it is still not that common, but more organizations are doing it. In fact, ETDR tools have emerged to collect even more security telemetry from the endpoints, including plenty of activities that cannot be logged. In general, desktop/laptop [and soon mobile?] logging is much more useful now than it used to be. Also, SIEM tool scalability (in both raw EPS and logging devices/sources) is better now and thus enables more user endpoint logging.


Q: For a mid-size company, what percent of time would a typical SIEM analyst spend on monitoring/management of the tool versus outstanding incident management?

A: Look at my SIEM skill model of Run/Watch/Tune and the paper where it is described in depth. Ideally, you don’t want one person running the SIEM system, doing security monitoring and tuning SIEM content (such as writing correlation rules, etc.), since that would be either one busy person or one really talented one. Overall, you want to spend a small minority of the time on management of the tool and most of the time using it. SIEM works if you work it! SIEM fails if you fail to use it.


Q: How do you reconcile (at a high level) an overall SIEM effort with a Syslog or “IT search” type tool selection? We have enterprise architects who say our Operational Intelligence should include SIEM, but Ops and Security aren’t on that same page.

A: Nowadays, we see both “SIEM for security, general log management for ops” and “single system for both ops and security.” Organizations may choose to run a separate system for operational logging (as well as a SIEM) or choose to run one system and feed the logs from one place into different components. Many, many organizations are still very silo’d and would actually prefer to do separate log collection in separate tools. Theoretically, this is unhealthy and inefficient, but for many organizations this is also the only way they can go…


Q: What kind of role do you see “Security Analytics” or the new generation SIEM solutions playing versus the traditional SIEM solutions? What kind of market adoption are you seeing of these new solutions versus the traditional SIEM ones?

A: In our recent paper on the topic, we tried to predict the same evolution as well as reconcile such SIEM evolution with new tools leveraging big data technologies and new analytic algorithms. At this point, new analytic approaches remain the province of the “Type A of Type A” organizations with the most mature, well-funded and dedicated security operations teams (example). Many organizations can barely operate a SIEM and are nowhere near ready for the big data-style tools and methods. In essence, “if you think SQL is hard, stay outside of a 5 mile radius from Hadoop.” See this post for additional predictions and this one for a funnier take on this topic.


Q: Is SIEM dead or going to die? If yes, what other tools can you use for these SIEM-type use cases?

A: Not at all! SIEM is alive and happy, growing and evolving.


There you have it, with a slight delay : – )

This was cross-posted from the Gartner blog.

Electric Grid Safety Hinges on Partnership and Information Sharing Mon, 14 Apr 2014 18:54:18 -0500 WASHINGTON -- Electric utilities have been focused on improving the safety and reliability of the complex and dynamic electric grid for years, testified Sue Kelly, president and CEO of the American Public Power Association (Public Power) at a Senate Energy and Natural Resources Committee hearing today. Kelly testified on behalf of investor-owned, cooperatively owned, and publicly owned utilities, as well as independent generators and Canadian utilities.  The industry's top priority is to protect critical power infrastructure from cyber and physical threats by partnering with all levels of government and sharing critical information, she said.

"Keeping the lights on for customers is of paramount importance to electric utilities. Because electricity is produced and consumed instantaneously and follows the path of least resistance, ensuring reliability and grid security is a collective affair," said Kelly.

The hearing, "Keeping the Lights On — Are We Doing Enough to Ensure the Reliability and Security of the U.S. Electric Grid?" was convened by the Senate Energy and Natural Resources Committee headed by Sen. Mary Landrieu (D-La.), with ranking member Sen. Lisa Murkowski (R-Alaska).

Kelly explained the robust measures electric utilities already have in place to address physical and cybersecurity and outlined how these measures have remained responsive to evolving threats over the years.

Recent media reports profiled attacks on physical infrastructure including the incident at Pacific Gas and Electric's Metcalf substation in California. While electric utilities take this incident seriously, the notion that media stories have spurred action on grid security is inaccurate, Kelly noted. Well before the media reports, government and industry initiated a series of briefings across the country to help utilities and local law enforcement learn more about the Metcalf attack and its potential implications.

On March 7, 2014, the Federal Energy Regulatory Commission (FERC) directed the North American Electric Reliability Corporation (NERC) under Section 215 of the Federal Power Act (FPA) to submit proposed reliability standards on physical security of critical assets within 90 days.  Investor-owned, cooperatively owned, publicly owned utilities, and other industry stakeholders are participating in the NERC process to develop this important standard.

The key to electric utility physical security is a "defense-in-depth" approach, which relies on resiliency, redundancy and the ability to recover, should an extraordinary event occur, Kelly said. The industry applies a similar "defense-in-depth" approach to cybersecurity to ensure a quick response if an attack occurs. As there are more than 45,000 substations in the United States, prioritizing the most critical assets and focusing security planning on them is very important, explained Kelly. She noted that cybersecurity must be an iterative process, as the nature of threats constantly evolves.

Cybersecurity of the electric grid can be enhanced by improving information sharing between the federal government and industry, emphasized Kelly. The Electricity Sub-sector Coordinating Council (ESCC), a public/private partnership between the utility sector and the federal government, plays an essential role in coordination and information sharing. The ESCC has representatives from electricity trade associations, utilities and regional transmission organizations.

"The only way industry participants on the ground can truly protect against an event is to be aware of a specific threat or concern. They know which of their assets are critical. They know what they need to do to protect against the majority of physical and cyber threats," explained Gerry Cauley, CEO of the North American Electric Reliability Corporation who also testified at the hearing. "However, if the government is aware of a specific threat, communicating that information to those individuals on the front lines is important. This communication differs from providing public access to sensitive information, but is an essential component of security protection," he added.

Others who testified were Cheryl LaFleur, FERC acting chair; Colette Honorable, National Association of Regulatory Utility Commissioners president; and Phil Moeller, FERC commissioner.

Kelly's full testimony is available online.

SOURCE American Public Power Association

Rx for Incorrect Compliance Claims and XP Mon, 14 Apr 2014 15:46:20 -0500 In the past couple of weeks I’ve gotten a couple dozen questions from my clients that are small to midsized covered entities (CEs) or business associates (BAs) under HIPAA, in addition to several small to midsized start-ups that provide services in other industries.  And, while some of these concerns arise from completely erroneous advice, regrettably, some of the questions resulted from my own mea culpa of writing a confusing sentence in my last blog post, within which I’ve since provided a clarification. (Lesson: I need to spend more time double-checking/editing text prior to posting, after doing edits to cut the length.) I apologize for any confusion or alarm that may have arisen as a result.

However, this does provide a good opportunity to examine in more depth the compliance issues related to Windows XP use, and the related questions I’ve received.  The following are the most common questions I’ve answered in the past several days.

Did I automatically become HIPAA non-compliant on April 8 if I still have XP systems?

No, it is not correct to say that organizations that still have Windows XP automatically became HIPAA non-compliant after April 8. HIPAA does not specify the types or versions of operating systems (OSs) that must be used. The Department of Health and Human Services (HHS), which oversees HIPAA compliance, has stated multiple times, in multiple venues over the years, that it does not, and will not (at least in the foreseeable future), mandate specific operating systems, in order to allow CEs and BAs reasonable flexibility in meeting compliance requirements.

However, HIPAA does require organizations to identify their information security risks (through a risk management program and by performing risk assessments), and then to mitigate those risks appropriately.  Using XP could be considered a high-risk action if you are supporting healthcare activities with the system, and so it would likely be reported as an audit finding and identified as a high risk within any risk assessment that occurs.

I advise all organizations to identify their systems running XP, determine the risks to PHI of those systems, and then establish a plan to upgrade appropriately and in the nearest time feasible.
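That plan starts with knowing where XP still lives and which of those machines touch PHI. A toy sketch of such a triage (the inventory data here is hypothetical; a real shop would pull it from its asset-management or patch system):

```python
# Hypothetical asset inventory: hostname -> (operating system, handles PHI?)
inventory = {
    "reception-pc": ("Windows XP", True),
    "billing-01":   ("Windows 7",  True),
    "lab-kiosk":    ("Windows XP", False),
}

# Operating systems past their end-of-support date
UNSUPPORTED = {"Windows XP"}

def risk_rank(inv):
    """Split hosts on unsupported OSes into high (PHI) and medium (no PHI)."""
    high = [h for h, (os_, phi) in inv.items() if os_ in UNSUPPORTED and phi]
    medium = [h for h, (os_, phi) in inv.items() if os_ in UNSUPPORTED and not phi]
    return high, medium

high, medium = risk_rank(inventory)
print("Upgrade first:", high)
print("Upgrade next: ", medium)
```

The XP systems supporting healthcare activities come first; everything else unsupported follows on a defined schedule.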

Do I have to report my XP system as a breach to the HHS?

While continuing to run Windows XP is a risk, it is not a reportable breach of PHI.  A breach is much different than a risk.  Generally, a privacy breach is when unauthorized use of, or access to, PHI or some other type of personal information has occurred or may have occurred (such as with a stolen laptop containing PHI, or when you know a hacker got into the network and may have reached a patient database).

Using XP will usually be considered a high-risk practice, and so would likely be reported as an audit finding and would typically be identified as a “High Risk” within any risk assessment that occurs.

Do I need to upgrade from XP on my computer that processes credit card payments?

This is slightly more straightforward than the HIPAA (and other regulatory) compliance questions, the answers to which are (rightly so) based on related risks.  If you are processing credit card payments, then you almost certainly need to comply with the Payment Card Industry Data Security Standard (PCI DSS).

PCI DSS Requirements 6.1 and 6.2 cover the need to keep systems up to date with vendor-supplied security patches. This helps to protect systems from known vulnerabilities.  When operating systems (such as XP) are no longer technically supported, security patches may no longer be available to protect the systems from known exploits, resulting in you not meeting these requirements to keep your system patched to protect against known vulnerabilities.


Depending upon your system and what you use it for, there is the possibility that you may be able to implement compensating controls to acceptably mitigate the risks that will occur with the use of XP, and then, if you are successful with the mitigation actions, to meet the intent of the 6.1 and 6.2 requirements. You will need someone with in-depth knowledge of information security, Windows XP and the associated IT systems to be involved to ensure such compensating controls are truly effective.


Bottom line for all businesses of all sizes…

It is not a privacy breach to be running XP, and you did not automatically fall into HIPAA non-compliance on April 8 if you have not upgraded all your Windows XP systems. You will not automatically fall into non-compliance with any other regulations either, but the PCI-DSS standard will result in non-compliance if you do not take immediate and effective risk mitigation actions.

You need to determine the risks associated with running XP after it is no longer supported; there will be risks that need to be addressed.  And it is important to understand that those risks will increase as time passes, since Windows XP support stopped on April 8, 2014. Make plans now to upgrade to a new, supported OS as soon as possible, based upon consideration of all these risks.

This was cross-posted from the Privacy Professor blog.

FBI Plans to Have 52 Million Photos in its NGI Face Recognition Database by Next Year Mon, 14 Apr 2014 12:02:33 -0500 New documents released by the FBI show that the Bureau is well on its way toward its goal of a fully operational face recognition database by this summer.

EFF received these records in response to our Freedom of Information Act lawsuit for information on Next Generation Identification (NGI)—the FBI’s massive biometric database that may hold records on as much as one third of the U.S. population. The facial recognition component of this database poses real threats to privacy for all Americans.

What is NGI?

NGI builds on the FBI’s legacy fingerprint database—which already contains well over 100 million individual records—and has been designed to include multiple forms of biometric data, including palm prints and iris scans in addition to fingerprints and face recognition data. NGI combines all these forms of data in each individual’s file, linking them to personal and biographic data like name, home address, ID number, immigration status, age, race, etc. This immense database is shared with other federal agencies and with the approximately 18,000 tribal, state and local law enforcement agencies across the United States.

The records we received show that the face recognition component of NGI may include as many as 52 million face images by 2015. By 2012, NGI already contained 13.6 million images representing between 7 and 8 million individuals, and by the middle of 2013, the size of the database increased to 16 million images. The new records reveal that the database will be capable of processing 55,000 direct photo enrollments daily and of conducting tens of thousands of searches every day.

NGI Will Include Non-Criminal as well as Criminal Photos

One of our biggest concerns about NGI has been the fact that it will include non-criminal as well as criminal face images. We now know that the FBI projects that, by 2015, the database will include 4.3 million images taken for non-criminal purposes.

Currently, if you apply for any type of job that requires fingerprinting or a background check, your prints are sent to and stored by the FBI in its civil print database. However, the FBI has never before collected a photograph along with those prints. This is changing with NGI. Now an employer could require you to provide a “mug shot” photo along with your fingerprints. If that’s the case, then the FBI will store both your face print and your fingerprints along with your biographic data.

In the past, the FBI has never linked the criminal and non-criminal fingerprint databases. This has meant that any search of the criminal print database (such as to identify a suspect or a latent print at a crime scene) would not touch the non-criminal database.  This will also change with NGI. Now every record—whether criminal or non—will have a “Universal Control Number” (UCN), and every search will be run against all records in the database. This means that even if you have never been arrested for a crime, if your employer requires you to submit a photo as part of your background check, your face image could be searched—and you could be implicated as a criminal suspect—just by virtue of having that image in the non-criminal file.  

Many States Are Already Participating in NGI

The records detail the many states and law enforcement agencies the FBI has already been working with to build out its database of images (see map below). By 2012, nearly half of U.S. states had at least expressed an interest in participating in the NGI pilot program, and several of those states had already shared their entire criminal mug shot database with the FBI. The FBI hopes to bring all states online with NGI by this year.

Map of US States Coordinating with FBI on NGI Face Recognition

The FBI worked particularly closely with Oregon through a special project called “Face Report Card.” The goal of the project was to determine and provide feedback on the quality of the images that states already have in their databases. Through Face Report Card, examiners reviewed 14,408 of Oregon’s face images and found significant problems with image resolution, lighting, background and interference. Examiners also found that the median resolution of images was “well-below” the recommended resolution of .75 megapixels (in comparison, newer iPhone cameras are capable of 8 megapixel resolution).

FBI Disclaims Responsibility for Accuracy

At such a low resolution, it is hard to imagine that identification will be accurate.[1] However, the FBI has disclaimed responsibility for accuracy, stating that “[t]he candidate list is an investigative lead not an identification.”

Because the system is designed to provide a ranked list of candidates, the FBI states NGI never actually makes a “positive identification,” and “therefore, there is no false positive rate.” In fact, the FBI only ensures that “the candidate will be returned in the top 50 candidates” 85 percent of the time “when the true candidate exists in the gallery.”

It is unclear what happens when the “true candidate” does not exist in the gallery—does NGI still return possible matches? Could those people then be subject to criminal investigation for no other reason than that a computer thought their face was mathematically similar to a suspect’s? This doesn’t seem to matter much to the FBI—the Bureau notes that because “this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for determining the identity of the subject, there should be NO legal issues.”

Nearly 1 Million Images Will Come from Unexplained Sources

One of the most curious things to come out of these records is the fact that NGI may include up to 1 million face images in two categories that are not explained anywhere in the documents. According to the FBI, by 2015, NGI may include:

  • 46 million criminal images
  • 4.3 million civil images
  • 215,000 images from the Repository for Individuals of Special Concern (RISC)
  • 750,000 images from a "Special Population Cognizant" (SPC) category
  • 215,000 images from "New Repositories"

However, the FBI does not define either the “Special Population Cognizant” database or the "new repositories" category. This is a problem because we do not know what rules govern these categories, where the data comes from, how the images are gathered, who has access to them, and whose privacy is impacted.

A 2007 FBI document available on the web describes SPC as “a service provided to Other Federal Organizations (OFOs), or other agencies with special needs by agreement with the FBI” and notes that “[t]hese SPC Files can be specific to a particular case or subject set (e.g., gang or terrorist related), or can be generic agency files consisting of employee records.” If these SPC files and the images in the "new repositories" category are assigned a Universal Control Number along with the rest of the NGI records, then these likely non-criminal records would also be subject to invasive criminal searches.

Government Contractor Responsible for NGI Has Built Some of the Largest Face Recognition Databases in the World

The company responsible for building NGI’s facial recognition component—MorphoTrust (formerly L-1 Identity Solutions)—is also the company that has built the face recognition systems used by approximately 35 state DMVs and many commercial businesses.[2] MorphoTrust built and maintains the face recognition systems for the Department of State, which has the “largest facial recognition system deployed in the world” with more than 244 million records,[3] and for the Department of Defense, which shares its records with the FBI.

The FBI failed to release records discussing whether MorphoTrust uses a standard (likely proprietary) algorithm for its face templates. If it does, it is quite possible that the face templates at each of these disparate agencies could be shared across agencies—raising again the issue that the photograph you thought you were taking just to get a passport or driver’s license is then searched every time the government is investigating a crime. The FBI seems to be leaning in this direction: an FBI employee email notes that the “best requirements for sending an image in the FR system” include “obtain[ing] DMV version of photo whenever possible.”

Why Should We Care About NGI?

There are several reasons to be concerned about this massive expansion of governmental face recognition data collection. First, as noted above, NGI will allow law enforcement at all levels to search non-criminal and criminal face records at the same time. This means you could become a suspect in a criminal case merely because you applied for a job that required you to submit a photo with your background check.

Second, the FBI and Congress have thus far failed to enact meaningful restrictions on what types of data can be submitted to the system, who can access the data, and how the data can be used. For example, although the FBI has said in these documents that it will not allow non-mug shot photos such as images from social networking sites to be saved to the system, there are no legal or even written FBI policy restrictions in place to prevent this from occurring. As we have stated before, the Privacy Impact Assessment for NGI’s face recognition component hasn’t been updated since 2008, well before the current database was even in development. It cannot therefore address all the privacy issues impacted by NGI.

Finally, even though the FBI claims that its ranked candidate list prevents the problem of false positives (someone being falsely identified), this is not the case. A system that only purports to provide the true candidate in the top 50 candidates 85 percent of the time will return a lot of images of the wrong people. We know from researchers that the risk of false positives increases as the size of the dataset increases—and, at 52 million images, the FBI’s face recognition database is a very large dataset. This means that many people will be presented as suspects for crimes they didn’t commit. This is not how our system of justice was designed, and it is not a system to which Americans should tacitly consent.
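The back-of-the-envelope arithmetic makes the point. Using the FBI's own figures from these records (a 50-candidate list, true match in the list 85 percent of the time when enrolled), nearly every face returned is a non-match; the searches-per-day number below is my illustrative assumption, not an FBI figure:

```python
# FBI figures: each search returns a ranked list of 50 candidates, and the
# true candidate appears in that list only 85% of the time when enrolled.
list_size = 50
hit_rate = 0.85

# Even in the best case, at most one candidate per list is the right person,
# so almost the entire list is innocent people on every single search.
wrong_per_search = list_size - hit_rate  # expected correct hits <= 0.85

# Assume (illustratively) 10,000 searches per day:
searches_per_day = 10_000
wrong_per_day = wrong_per_search * searches_per_day
print(round(wrong_per_search, 2), "wrong faces per search")
print(round(wrong_per_day), "wrong faces per day")
```

Roughly 49 innocent candidates per search, scaled across tens of thousands of daily searches, is the volume of misidentification exposure the "no false positive rate" framing glosses over.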

For more on our concerns about the increased role of face recognition in criminal and civil contexts, read Jennifer Lynch’s 2012 Senate Testimony. We will continue to monitor the FBI’s expansion of NGI.

Here are the documents:

FBI NGI Description of Face Recognition Program

FBI NGI Report Card on Oregon Face Recognition Program

FBI NGI Sample Memorandum of Understanding with States

FBI NGI Face Recognition Goals & Objectives

FBI NGI Information on Implementation

FBI Emails re. NGI Face Recognition Program

FBI Emails from Contractors re. NGI

FBI NGI 2011 Face Recognition Operational Prototype Plan

FBI NGI Document Discussing Technical Characteristics of Face Recognition Component

FBI NGI 2010 Face Recognition Trade Study Plan

FBI NGI Document on L-1's Commercial Face Recognition Product

This was cross-posted from the Electronic Frontier Foundation's DeepLinks blog.

  • 1. In fact, another document notes that “since the trend for the quality of data received by the customer is lower and lower quality, specific research and development plans for low quality submission accuracy improvement is highly desirable.”
  • 2. MorphoTrust’s parent company, Safran Morpho, describes itself as “[t]he world leader in biometric systems,” and is largely responsible for implementing India’s Aadhaar project, which, ultimately, will collect biometric data from nearly 1.2 billion people.
  • 3. One could argue that Facebook’s is larger. Facebook states that its users have uploaded more than 250 billion photos. However, Facebook never performs face recognition searches on that entire 250-billion-photo database.
Copyright 2010 Respective Author at Infosec Island]]>
NSA vs. Cloud Encryption: Which is Stronger? Sat, 12 Apr 2014 18:54:50 -0500
  • Have the recent stories of NSA snooping, data collection, and attempts at breaking encryption made you reconsider how you store and use data in the cloud?
    • Are you wondering what information is being collected (or can one day be collected) about your business?
    • Is the NSA watching?  Do hackers have a way into your systems? Do you need to ease your customers’ fears (or your own)?
    • In these Orwellian times, is there any way to limit the reach of Big Brother?

    I offer: Strong Cloud Encryption.

    Revelations from the NSA leaks show that the NSA can steal encryption keys, or use the law to demand them from providers.  The NSA (and possibly other organizations) is not only keeping pace with technology, but also planning for the future of data in the cloud. 

    Businesses must also be looking and planning for the future.  Starting now. Starting with strong cloud encryption.

    CNN reports that NSA has a number of methods for accessing data: "the use of supercomputers to crack codes, covert measures to introduce weaknesses into encryption standards and behind-doors collaboration with technology companies and Internet service providers themselves." According to CNN, most of NSA's information comes from moles placed in companies, not from technology. This means that the less information the cloud provider is privy to, the less can be passed to the government.

    Edward Snowden, the former computer technician at NSA who leaked documents belonging to the agency, has said that “properly implemented strong crypto systems are one of the few things that you can rely on.” Weak encryption will be easily infiltrated by the NSA, but stronger encryption is still out of its reach.

    It has been suggested that regular users shouldn't be concerned about NSA infiltration since they aren't engaging in suspicious activity.  However, there is reason to be extra-vigilant: NSA's activities may have weakened overall internet security, making their back door strategies available to technologically advanced criminals as well as to government agencies.  The persistent question of “is my data secure in the cloud?” has been answered clearly:  data is only as secure as you make it. 

    And to make data secure in the cloud, you must use strong cloud encryption.

    In response to the NSA news, businesses must rethink how they secure their data in the cloud.  One of the strongest encryption technologies, split-key encryption with homomorphic key encryption, makes it extremely difficult for hackers and internal staff to reach data they shouldn’t have access to.  Split-key encryption creates two unique keys, and both are required to unlock the encryption.  One of those keys stays in the hands of the customer at all times, ensuring that private data remains private.  The master key is known only to the application owner and is encrypted when in use in the cloud, so even if it is stolen, it cannot be used to hack into data.  This approach also avoids the performance penalty usually associated with homomorphic encryption: with split-key encryption, applications maintain their regular speed, running quickly and securely.
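
The core split-key idea, namely that two shares are needed and either one alone reveals nothing, can be illustrated with simple XOR secret sharing. This is a hypothetical sketch of the concept, not any vendor's actual scheme; the function names are invented for illustration.

```python
import os

def split_key(master_key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two XOR shares; both are needed to rebuild it."""
    share1 = os.urandom(len(master_key))                       # random share
    share2 = bytes(a ^ b for a, b in zip(master_key, share1))  # complement
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    """XOR the two shares back together to recover the master key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(32)             # 256-bit master key
s1, s2 = split_key(key)          # customer keeps s1, provider holds s2
assert recombine(s1, s2) == key  # together they reconstruct the key
```

Because `share1` is uniformly random, the provider's share by itself is statistically independent of the master key: a subpoena or theft of one share yields nothing usable.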

    Encryption works, and when implemented correctly, can secure your cloud data.  You can also take additional steps to reduce your exposure from attack.

    In conclusion, the NSA is powerful: they watch, they listen, they collect data.  In cases of national security, perhaps this is a good method to catch terrorists.  In cases of private business data, there is a way to block the NSA from getting to your sensitive information: strong data encryption.

    Copyright 2010 Respective Author at Infosec Island]]>
    OpenSSL “Heartbleed” – Who’s Vulnerable and How to Check Thu, 10 Apr 2014 15:09:00 -0500 The internet is plastered with news about the OpenSSL heartbeat “Heartbleed” (CVE-2014-0160) vulnerability, which some say affects up to 2/3 of the internet. Everything from servers to routers to smart phones could be tricked into giving up encrypted data in plain text. Let’s take a quick look at the vulnerability, see who’s affected by it and how you can check.

    What is Heartbleed?

    Basically, OpenSSL is an encryption library used in HTTPS communication – you know, the online stores and banking websites that give you that little lock icon in your browser bar when you visit them.

    OpenSSL uses a “heartbeat” message to echo back data to verify what was received was correct. In OpenSSL 1.0.1 to 1.0.1f, a hacker can trick OpenSSL by sending a single byte of information but telling the server that it sent 64K bytes of data.

    And the server will respond with 64K bytes of information – from its memory!
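
The shape of that malicious request can be sketched as raw bytes, following the TLS heartbeat message layout from RFC 6520 (a one-byte message type followed by a two-byte claimed payload length). This is an illustrative sketch of the record layout only; never send anything like this to systems you do not own or have permission to test.

```python
import struct

def heartbeat_record(payload: bytes, claimed_len: int) -> bytes:
    """Build a TLS heartbeat record whose declared payload length may
    exceed the actual payload -- the core of the Heartbleed bug."""
    # Heartbeat message: type 1 (request), 2-byte claimed length, payload
    hb = struct.pack(">BH", 1, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version, record length
    return struct.pack(">BHH", 24, 0x0302, len(hb)) + hb

# Honest request: 1 byte of payload, honestly declared as 1 byte
honest = heartbeat_record(b"A", 1)
# Malicious request: 1 byte of payload, declared as 0xFFFF (~64K bytes);
# an unpatched server echoes back ~64K of whatever is in its memory
evil = heartbeat_record(b"A", 0xFFFF)
```

The patched OpenSSL simply checks the claimed length against the actual record size and silently drops mismatched heartbeats.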

    The Register has a nice image of the process:

    [Image: The Register’s diagram of the Heartbleed heartbeat exchange]

    The data returned is randomly pulled from the server’s memory and can include anything from usernames and account passwords to other sensitive data.

    The vulnerability is remedied in the latest update of OpenSSL, but the problem is it could take years for all the affected devices to be found and patched. And some embedded and proprietary devices may never be patched!

    There are a plethora of tools and exploits flooding the internet right now to check for and exploit Heartbleed.

    Who is Vulnerable?

    Yesterday the top 10,000 websites on the web were scanned for the vulnerability and the results can be found here. Many big-name websites (as of yesterday) are vulnerable. But many listed, including Yahoo!, have already fixed the vulnerability.

    But if you read down the list you will see familiar websites including technology sites, financial institutions, game websites and popular forum/ social media sites.

    But it’s not limited to just these sites.

    Many home routers and even smart devices use OpenSSL.

    How to Exploit/ Check?

    I received a note today from Tenable that Nessus will now detect the Heartbeat vulnerability:

    “Tenable Network Security® released plugins for the detection of the OpenSSL heartbeat vulnerability (aka the “Heartbleed Vulnerability”) on the 8th of April for Nessus® and the Passive Vulnerability Scanner™ (PVS™). A plugin for detecting the vulnerability in Apache web server logs has also been added to the Log Correlation Engine™ (LCE™) and available for reporting in SecurityCenter™ and SecurityCenter Continuous View™.”

    And a quick Google search will return multiple different ways to check whether websites are vulnerable to the attack. I have even seen a Firefox add-on floating around.

    There are a couple exploit programs available on the web. Rapid7 has created an exploit module for Metasploit and it is available on Github:

    [Image: the Heartbleed Metasploit exploit module, written in Ruby]

    I didn’t see it available in the latest msfupdate, but I am sure it will be added to Metasploit Framework very soon.

    As always, use any Heartbleed tools at your own risk, use extreme caution when using random programs to check for vulnerabilities, and never use these tools to check websites that you do not own or have permission to test or to access.
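
One check that is entirely harmless is inspecting your own machine's OpenSSL version string and comparing it against the vulnerable range, 1.0.1 through 1.0.1f. The sketch below assumes an `openssl` binary is on your PATH; it is an illustration, not a substitute for a proper scan, since a backported distro patch can make the version string misleading.

```python
import re
import subprocess

# Versions 1.0.1 through 1.0.1f are vulnerable (CVE-2014-0160);
# 1.0.1g and the 0.9.x / 1.0.0 branches are not.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1([a-f])?(\s|$)")

def local_openssl_vulnerable() -> bool:
    """Return True if the locally installed `openssl` binary reports a
    version in the vulnerable range. Assumes `openssl` is on the PATH."""
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True).stdout
    return bool(VULNERABLE.search(out))
```

Note the caveat in the comment: distributions sometimes backport the fix without bumping the letter suffix, so treat a match as "investigate", not proof of exposure.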

    Update any of your systems that are using the old version of OpenSSL, and change your passwords on any affected servers.

    This was cross-posted from the Cyber Arms blog. 

    Copyright 2010 Respective Author at Infosec Island]]>
    OpenSSL Problem is HUGE – PAY ATTENTION Thu, 10 Apr 2014 11:11:11 -0500 If you use OpenSSL anywhere, or use a product that does (and that’s a LOT of products), you need to understand that a critical vulnerability has been released, along with a variety of tools and exploit code to take advantage of the issue.

    The attack allows an attacker to remotely tamper with OpenSSL implementations to dump PLAIN TEXT secrets, passwords, encryption keys, certificates, etc. They can then use this information against you.

    You can read more about the vulnerability itself here. 

    THIS IS A SERIOUS ISSUE. Literally, and without exaggeration, the early estimates on this issue are that 90%+ of major web sites and software packages using OpenSSL as a base are vulnerable. This includes HTTPS implementations, many mail server implementations, chat systems, ICS/SCADA devices, SSL VPNs, many embedded devices, etc. The lifetime of this issue is likely to be long and miserable.

    Those things that can be patched and upgraded should be done as quickly as possible. Vendors are working on patching their implementations and products, so a lot of updates and patches will be forthcoming in the next few days to weeks. For many sites, patching has already begun, and you might notice a lot of new certificates for sites around the web.

    Our best advice at this point is to patch your stuff as quickly as possible. It is also advisable to change any passwords, certificates or credentials that may have been impacted – including on personal sites like banking, forums, Twitter, Facebook, etc. If you aren’t using unique passwords for every site along with a password vault, now is the time to step up. Additionally, this is a good time to implement or enable multi-factor authentication for all accounts where it is possible. These steps will help minimize future attacks and compromises, including fall out from this vulnerability.

    Please, socialize this message. All Internet users need to be aware of the problem and the mitigations needed, even for personal safety online.

    As always, thanks for reading, and if you have any questions about the issues, please let us know. We are here to help!

    This was cross-posted from MSI's State of Security blog.


    Copyright 2010 Respective Author at Infosec Island]]>
    Windows XP End of Life: What Your Organization Can Expect Wed, 09 Apr 2014 13:55:00 -0500 We all know the significance of the 15th day of April. Each year the queues at the post office stretch outside the building. Each year millions of taxpayers file late.  Very often the reasons for this are dubious:  laziness, forgetfulness, confusion, protest and general procrastination. An interesting parallel in technology is the end-of-life (EOL) of the Microsoft Windows XP operating system, arriving one week prior to tax day, on April 8th, 2014. 

    Microsoft stopped selling retail copies of XP on June 30, 2008, stopped allowing it to be pre-installed on new PCs on October 22, 2010, and ended mainstream support five years ago on April 14, 2009.  Yet as late as June 2013, third-party analytics firm Net Applications found that nearly 38 percent of all users were still utilizing XP, more than every other non-Win7 OS combined.  The XP EOL means that no further automatic security fixes, updates, or support are available.  On the same date, other popular software packages (Access 2003, Excel 2003, Exchange Server 2003, Office 2003, PowerPoint 2003, Project 2003, Publisher 2003, SharePoint 2003, Visio 2003, and Word 2003) will also go EOL.

    To a security professional, this is a big deal.  For an example of what your organization may expect with the EOL of XP overall, examine the rise in infection rates after the XP Service Pack 2 (SP2) EOL, from the Microsoft blog post ‘Infection rates and end of support for Windows XP’:

    [Chart: XP SP2 infection rates before and after its EOL date]

    As you can see in the chart above, there was an unplanned spike in the infection rate on machines where the organization had failed to upgrade to SP3 prior to the EOL date of SP2.  This translates directly into unplanned expenditures to rebuild infected machines and lost productivity, and could also entail losses of intellectual property, financial losses, or damage to an organization’s brand.

    Is your organization more like the orderly taxpayer driving by the post office on the 15th and scoffing?  Or are you a little more unnerved because some portion of your estate is stuck on XP or popular 2003 servers or applications for reasons beyond your control:

    • Lack of budget for the capital expenditure of new machines
    • Lack of budget for the human capital to perform the migration
    • Lack of expertise to move from a stable platform to one that’s considered riskier due to lack of deployment
    • A custom solution was developed for a legacy OS or in conjunction with a legacy software product
    • In the realm of organizational priorities, the XP or 2003 server migration didn’t make the cut.

    Short Term Pain Relief – Application Whitelisting

    Windows XP and many of the EOL software packages are incredibly stable, with over a decade of development behind them and a history of reliability over the last few years. That maturity is what makes migration from XP or Office 2003 difficult, but the stability that organizations came to rely upon will begin to erode immediately.

    One of the alternatives that reduces the risk of temporarily staying on these older software packages is the use of a system to lock down the functions that are able to run, also known as dynamic or application whitelisting (AWL).  AWL allows an organization to take a functioning system and ensure that only business or mission supporting functions are able to run – that which is not specifically permitted by the administrator is denied.  So although there may be no fixes to alleviate a vulnerable or exploited condition, potential malware or hacked executables can’t get CPU time or the shared memory required to run.
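
The deny-by-default model at the heart of AWL can be sketched as a hash-based allowlist. This is a conceptual illustration only, with invented function names; real AWL products enforce the policy in the kernel, not in application code.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of approved executables.
ALLOWLIST: set[str] = set()

def approve(path: str) -> None:
    """Record a known-good executable's hash in the allowlist."""
    ALLOWLIST.add(hashlib.sha256(Path(path).read_bytes()).hexdigest())

def may_execute(path: str) -> bool:
    """Deny by default: only binaries whose hash is on the list may run.
    Anything not specifically permitted by the administrator is denied."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in ALLOWLIST
```

Because the check hashes the file contents, a patched, tampered, or newly dropped binary fails the lookup even if it reuses an approved file name.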

    Near Term Benefits:

    • Mitigates the risk of continued use of the EOL XP operating system or popular client and server packages such as Outlook/Exchange 2003
    • Ensures that new malware introduced onto the system can’t run
    • Ensures that business or mission supporting programs that are successfully attacked can’t run
    • Enables the organization to carry out a controlled migration to supported software packages rather than a reactive, unplanned effort

    Use Your Air Cover Wisely – Planned Migration

    While the natural course of action might seem to be a simple software or OS upgrade, the impending EOL event should be used as an opportunity to evolve your organization toward a more cost-effective, more secure future.  After utilizing AWL as described above to create a reduced risk cushion for your organization, consider the options below in order to maximize your flexibility going forward:

    • Make a capital expenditure (CAPEX) – it may be that the systems running your legacy XP OS and server applications are out of warranty as well.  AWL allows you to schedule a CAPEX purchase in line with your budgetary cycle rather than as an out-of-band financial expense.  The new hardware will not only be able to capably run the supported OS and applications, it’s also likely that you can get them bundled together from a variety of manufacturers
    • Determine the best way to provision desktop services and servers for the future – Potentially one of the methods listed below would make more sense for your organization going forward:

    o   VDI – Potentially the hardware you have now is still sufficient to run remote desktop services in conjunction with server based computing

    o   Cloud – There are a variety of carriers, managed service providers, and industry mammoths such as Google, Microsoft, and Amazon who are delivering a variety of technologies with no on-premise infrastructure costs

    o   On-premise, supported operating systems and packages – For suitable hardware, Windows 8 and Windows Server 2012, for example, don’t reach their extended EOL dates until 2023, an incredibly long window to ensure long-term stability for your organization.

    Whether you want to acknowledge it or not, the XP and 2003 applications and servers in your estate are going to be at significant risk in a few weeks.  But rather than make a long-standing problem worse, consider the use of application whitelisting during an interim planning period.  This will allow you to ensure high security, while you make balanced considerations and execute against a plan based on the best path forward for your organization.

    About the Author: Scott Montgomery is vice president and chief technology officer of public sector at McAfee. He runs worldwide government certification efforts and works with industry and government thought leaders and worldwide public sector customers to ensure that technology, standards, and implementations meet information security and privacy challenges. His dialog with the market helps him drive government and cybersecurity requirements into McAfee’s products and services portfolio and guide McAfee’s policy strategy for the public sector, critical infrastructure, and threat intelligence.  

    Related Reading: Experts Warn of Attackers Hoarding Windows XP 'Forever Days'

    Copyright 2010 Respective Author at Infosec Island]]>
    Heartbleed Should Give You Cardiac Arrest Wed, 09 Apr 2014 10:59:05 -0500 When we look at concerning security issues, there are always considerations such as how long a vulnerability has existed before it’s been discovered, how pervasive it is or how likely it is to affect a large population of systems, processes, and users, and also how much damage it could do if exploited. If these are combined, you have the trifecta of grave concern in the security community on the “Heartbleed” vulnerability, publicly announced April 7, 2014.

    How bad is it? Estimates are that over 66% of active websites on the internet may be vulnerable to this bug, found in OpenSSL, an open source cryptographic library used by the Apache and nginx web servers when creating secure communications with users. How much damage can it do if exploited? Think big. Think ‘keys to the kingdom’ big. And how do you know if you’ve been exploited? You don’t – assume you may have. This is definitely run-don’t-walk material for security professionals.

    OpenSSL is used every day in apps, websites, and government sites, and is even used to transmit encrypted data such as credit card information, passwords, user IDs, etc. This PII may be leaked from server memory, where it’s commonly stored for operations, and unfortunately can be exposed through the Heartbleed bug, including the security keys used for encryption and decryption of the information.

    Furthermore, OpenSSL is used to protect, for example, email servers (SMTP, POP and IMAP protocols), chat servers (XMPP protocol), virtual private networks (SSL VPNs), network appliances and a wide variety of client-side software.  OpenSSL is very popular in client software and somewhat popular in networked appliances, which have the most inertia in getting updates. Fortunately, many large consumer sites could be saved by a conservative choice of SSL/TLS termination equipment and software.

    The exploit relies on a bug in the implementation of OpenSSL’s “heartbeat” feature, hence the “Heartbleed” name (CVE-2014-0160). Security researchers at the firm Codenomicon and Neel Mehta of Google security discovered the bug and reported it to the OpenSSL team. Codenomicon has written an in-depth breakdown of their experience:

    “We have tested some of our own services from attacker’s perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.”

    Heartbleed is not just bad, it’s very, very bad. The bug has been in OpenSSL since December 2011 (OpenSSL versions 1.0.1 through 1.0.1f), so it’s safe to assume that others have found it, and reasonable to assume that it has been exploited by the hacker community for some time. Even worse, it appears that exploiting this bug leaves no trace in the server’s logs. This means that there’s no easy way for a system administrator to know if their servers have been compromised; they just have to assume that they have been.

    Here are a few examples of how this exploit could have been used in your environment:

    • An attacker can access (and possibly already has been accessing) your site or server system’s memory (albeit in 64 KB chunks) and gather the secret keys used to encrypt and decrypt communications. This means sensitive data could be read just like open text by an attacker – as if no encryption existed at all.
    • Once an attacker has the keys, they can also mimic a secure website or server, and essentially overcome any browser-built security checks your system may have in place.
    • Once the attacker has the keys, they could gather petabytes of encrypted data and easily decrypt it.

    Run, don’t walk, to get the information you may need for your environment. OpenSSL released an emergency patch for the bug along with a Security Advisory on April 7, 2014. You should apply this patch immediately if you’re using the Apache or nginx web server with OpenSSL. Refer to the advisory for useful details.

    A fixed OpenSSL has been released, and now it has to be deployed. Operating system vendors and distributions, appliance vendors, and independent software vendors have to adopt the fix and notify their users. Service providers and users then have to install the fix as it becomes available for the operating systems, networked appliances and software they use.

    This was cross-posted from Tripwire's The State of Security blog.

    Copyright 2010 Respective Author at Infosec Island]]>
    IAM Proofs of Concept (POC) - An Inefficient Use of Time and Money Tue, 08 Apr 2014 13:17:54 -0500 By: Luis Almeida

    The Proof of Concept (POC) is a traditional step in the typical organization’s product evaluation process. Most organizations use a POC to evaluate their product options by seeing how different products perform in their environment. The ultimate objective is to mitigate the risk of a purchase by ensuring that the product is a fit for the organization. Although almost always a part of the product and vendor evaluation process, POCs should be reconsidered because they are costly and deliver very limited value.

    Identity and Access Management is as much about process as it is technology. POCs are almost always carried out by vendor pre-sales teams that are trained on their products but have limited ability to help their customers think through the identity challenge and prepare for a real world implementation. At some point in the vendor’s sales process, the pre-sales team is forced to hand over responsibility to the services organization that will design the solution that meets the customer’s objectives. This handoff of responsibility is one of the trickiest and most delicate stages of the product evaluation. Teams involved with evaluating solutions are often forced to begin anew, describing their business drivers and requirements from scratch. It is only at this stage, usually after a POC has occurred, that hidden complexities begin to arise.

    Ideally, the product evaluation process should be supported by a team of delivery architects. It is the delivery architects who will own the implementation and guarantee an organization's success. Service delivery architects are responsible for integrating the solution in their customer’s environment and delivering on the promise of IAM / IAG. When services teams are engaged during the evaluation phase, much of the work that is done evaluating products can be leveraged in the real world.

    Hidden Complexity

    Pre-sales teams, those usually engaged in POCs, are sales resources. It is their job to “prove” that the solution will work. In many cases these teams do whatever they can to hide complexity and to deliver use cases in ways that would never be acceptable in production. Despite an organization's best efforts to leverage the POC as a knowledge transfer opportunity, this seldom happens. IT teams are spread thin and are rarely in a position to dedicate their full attention to a POC that lasts a week, oftentimes more. 

    There is a significant opportunity to improve the product evaluation process in a way that allows the evaluation effort to be leveraged in production: engage a services partner that will put an organization's drivers and objectives above all else. Services partners, such as Identropy, have in-depth, real world knowledge of multiple IAM solutions. A services partner can help a customer narrow down their product evaluation to the two or three solutions that are the best fit. An unbiased, customer-focused service provider can then help the customer see both the pros and cons of the solutions they are evaluating, turning the evaluation into a knowledge transfer process. The service provider can allow the customer to participate in the customization of product demonstrations and eventually in a pilot implementation that serves as much more than a mere POC. By taking into consideration an organization's business processes and helping the customer prioritize real world use cases, the service provider can deliver the pilot in a form that can be leveraged in production. During this process, a service provider can make sure that the organization understands the advantages, and more importantly, the drawbacks of each solution. The result is a much more informed buyer who is in a position to leverage the evaluation process to deliver real world value.

    Let Us Be Your Guide

    If POCs were an effective means to evaluate IAM solutions, there would not be so many failed implementations in the market. By leveraging a neutral, unbiased service provider, organizations can significantly mitigate their risk, increase their success rate and put their resources to much better use. Engaging a service provider during the product evaluation stage and allowing them to unmask the complexities of IAM solutions is like hiring a Sherpa prior to trekking in the Himalayas. Looking for a guide once one is already down a particular path is not nearly as efficient as setting off with one in the first place. Let us help you before your IAM journey begins. We know the landscape, the pitfalls and the challenges. Let us be your guide.

    Learn more by visiting our Anti-POC web page or by downloading the data sheet today.

    This was cross-posted from the Identropy blog. 

    Copyright 2010 Respective Author at Infosec Island]]>
    Experts Warn of Attackers Hoarding Windows XP 'Forever Days' Tue, 08 Apr 2014 12:57:56 -0500 The reminders and warnings have been relentless for the past two-and-a-half years. Microsoft will "end of life" Windows XP, but there are significant numbers of computers and specialty devices still running the 13-year-old operating system, exposing them to serious security issues down the road.

    Microsoft officially ends support on Tuesday, April 8 by releasing the last security updates for Windows XP and Office 2003 as part of the April Patch Tuesday release. Any issues discovered afterwards in XP or other Microsoft products running on XP, either as a zero-day vulnerability being exploited in an attack or reported to Microsoft by a security researcher, will not be fixed. Security experts believe criminals are hoarding XP vulnerabilities with plans to launch campaigns exploiting them at a later date, since those zero days will become "forever days."

    Read the Full Article at SecurityWeek

    Copyright 2010 Respective Author at Infosec Island]]>