At this year’s PCI Community Meeting, Verizon Business Services gave a great presentation on their 2011 Data Breach Investigations Report. However, one thing that concerned me about their presentation is that they seemed to downplay the threat insiders pose to information security.
So, I went back and reread their report because I did not get that same interpretation when I originally read it. On page 42 of the Verizon report, there is a discussion of “Errors.” Errors are defined as “anything done (or left undone) incorrectly or inadvertently.”
According to this section, there were 219 instances out of the total 761 breaches where insiders contributed to the breach. That computes to almost 30% of all breaches. That is almost twice the 17% quoted as a highlight in the front of the report and used to justify the downplaying of the insider threat. So the insider threat is still substantial and should not be ignored.
The biggest problem with the insider threat is that it does not matter how much technology you have in place to protect your information assets: it only takes one person in the right place to neutralize every last bit of your high-tech security solutions.
Just ask anyone at any of the recently breached organizations how all of their technology functioned when they suffered their breach. I am sure they will tell you the technology worked just great – not so much for their people.
First, there are the mistakes everyone makes in the name of efficiency or politeness.
- Sharing a manager’s password to expedite a process in the name of good customer service.
- Holding open the door to a secure facility because someone’s arms are full.
- Swiping your access card and letting others tailgate through a secured door to be polite.
At the end of the day, we are all guilty, at one time or another, of these things as well as many other bad security practices.
That is the real problem we all face and why security standards focus so much on a layered approach also known as defense in depth. The hope is that with multiple layers in place, even if one or two layers become non-functional due to the people issues, another layer will stop or at least detect the issue and the issue will be averted or minimized.
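The arithmetic behind defense in depth is worth spelling out. As a rough sketch (my illustration, not from the Verizon report): if the layers fail independently, an attack only succeeds when every layer fails, so the odds of a breach are the product of each layer’s individual failure rate. The function name and example rates below are hypothetical.

```python
# Illustrative sketch of the defense-in-depth argument: assuming layers
# fail independently, an attack succeeds only if it slips past EVERY
# layer, so breach probability is the product of per-layer failure rates.
# The rates used here are made-up examples, not measured figures.

def breach_probability(layer_failure_rates):
    """Probability an attack evades every layer, assuming independence."""
    p = 1.0
    for rate in layer_failure_rates:
        p *= rate
    return p

# One layer that fails 10% of the time vs. three layers at 10%, 20%, 30%:
one_layer = breach_probability([0.10])              # 0.10, i.e. 10%
three_layers = breach_probability([0.10, 0.20, 0.30])  # 0.006, i.e. 0.6%
```

Even when the individual layers are mediocre, stacking them drives the combined failure rate down sharply, which is exactly why a second or third layer can catch what the “people issues” let through. The independence assumption is the catch: an insider who can disable multiple layers at once breaks it.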
However, in most cases, if someone can get the right software onto the right user’s computer, it really does not matter what security is in place.
The first example of such a breach was of RSA back in March 2011. As the story goes, through the use of electronic mail, attackers targeted RSA network and system administrators with messages containing an Excel spreadsheet as an attachment. The spreadsheet contained backdoor software that was surreptitiously installed on the computer, providing the attackers with remote access into RSA’s network.
With remote access, the attackers were free to scope out the RSA network at their leisure. Over time, they obtained the code for RSA’s SecurID servers and fobs. Once the breach was discovered, RSA was forced to replace its customers’ SecurID fobs for free.
Right on the heels of the RSA breach was the breach of Epsilon in April 2011. Epsilon was a quiet firm that did the electronic mail marketing and loyalty programs for such businesses as Best Buy, Kroger, Marriott and LL Bean. The breach created the biggest opportunity for spear phishing ever seen. Based on news reports, Epsilon was attacked in a similar fashion as RSA, although the two breaches have never been linked to the same attackers by authorities.
In May 2011, the apparent fruits of the RSA breach were unleashed on Lockheed Martin and rumored to have also been unleashed on Northrop Grumman and L3 Communications. Using fake fobs, the attackers broke into Lockheed Martin and attempted to gain information from Lockheed Martin’s systems. News reports indicated that the Lockheed Martin attack was eventually repelled and no information was obtained by the attackers.
Citigroup suffered a double whammy of bad news in June 2011. The first hit was the admission that their online banking site had been compromised and more than 350,000 customer accounts had their information leaked to the attackers. Then on June 26, a former Citigroup executive was arrested for embezzling $19 million while working in the treasury finance department. The first event was front-page news in the papers and on the Internet. The second event barely made the news at all.
The reason I bring these breaches up is that while these computer attacks are big news, they point to the fact that the bigger threat is actually the people you employ. The reason attackers targeted these companies’ employees is that insiders have direct access to information and, in most cases, the attackers therefore do not need to hack any computer systems to gain it. This is borne out by the fact that security survey after survey keeps confirming that the vast majority of compromises are the result of some amount of insider involvement.
All organizations are at significant risk from the insider threat because most have done little or nothing to prevent it. Sarbanes-Oxley and the like have done no one any favors in propagating this view of controls. This is why I think a lot of organizations push back on complying with the PCI DSS: they abhor controls and want nothing to do with controls of any type. You hear arguments about the “stifling of creativity” and “make-work,” which are nothing more than whining from people who have no clue as to why controls are needed.
I have documented in a previous post the three phases of a well-structured control environment, so I will leave readers to review that for reference. Properly designed and implemented, internal controls make a big difference: if people know that someone is looking over their shoulder to ensure that their job is done properly, that in and of itself goes a long way toward keeping everyone honest as well as minimizing operational errors.
However, just because you have controls does not mean that everything will go smoothly. A lot of organizations have a great control environment on paper, but do very little to ensure that it is executed as written. We see time and again organizations that talk a good game, but when you start looking at their operations, the control environment is a paper tiger because no one is enforcing those controls and the controls are being followed haphazardly, if at all.
Then there are the organizations that have never reviewed and streamlined their controls since the founding of the organization. These sorts of companies have created a controls monster. What has happened is that as the business encountered a new issue, a new control was placed on top of other controls to address the new threat. Over the years, all of these controls now keep a huge bureaucracy busy doing nothing but making sure that controls are controlled.
While some of this problem can be laid at the feet of management, it is not entirely all their fault. Custom and packaged application developers have only recently started implementing security and controls into their software that allow the level of granularity necessary to meet SOX, PCI and other security requirements. In most cases, security has always existed in applications, just not at the level necessary to properly ensure the security of sensitive information in today’s environment.
The lesson here is to put controls in place that ensure the security of your sensitive information. If the applications that process, store or transmit sensitive information do not have the capability to properly protect it, then you need to create manual controls to fill in any gaps. Those manual controls need to provide feedback so that if they begin to fail, someone is alerted to that fact and can step in and get the controls functioning again.
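The feedback loop for a manual control can be as simple as logging each execution and alerting on a run of failures. Here is a minimal sketch of that idea; the class name, the badge-reconciliation example and the three-failure threshold are all my own illustrative assumptions, not anything prescribed by PCI DSS or this post.

```python
# Hypothetical sketch of a manual control with built-in feedback: each
# execution of the control is logged pass/fail, and an alert fires after
# a configurable run of consecutive failures so someone can step in.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlLog:
    name: str
    alert_threshold: int = 3                 # consecutive failures before alerting
    results: list = field(default_factory=list)

    def record(self, day: date, passed: bool) -> bool:
        """Log one execution of the control; return True if an alert should fire."""
        self.results.append((day, passed))
        recent = [ok for _, ok in self.results[-self.alert_threshold:]]
        return len(recent) == self.alert_threshold and not any(recent)

# Example: a daily access-badge reconciliation that slips three days running.
log = ControlLog("daily access-badge reconciliation")
log.record(date(2011, 7, 1), True)
log.record(date(2011, 7, 2), False)
log.record(date(2011, 7, 3), False)
alert = log.record(date(2011, 7, 4), False)  # third straight failure -> alert
```

The point of the design is that the control reports on itself: a single missed reconciliation is tolerated as noise, but a sustained failure surfaces automatically instead of waiting for an annual assessment to find it.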
Cross-posted from PCI Guru