Defining the Insider Threat

Sunday, April 17, 2011

Danny Lieberman


One of the biggest problems facing organizations is the lack of rigorous definitions for trusted insider threats and data loss, and of a rigorous way to estimate the potential damage from a data loss event.

Without rigorous definitions for data loss and trusted insider threats, it is hard to benchmark against other companies and difficult to select a good set of data security countermeasures.

Referring to work done by Bishop in "Defining the trusted insider threat":

An insider can be defined with regard to two primitive actions:

  • Violation of a security policy using legitimate access, and
  • Violation of an access control policy by obtaining unauthorized access.

Bishop bases his definition on the notion "...that a security policy is represented by the access control rules employed by an organization."

It is enough to glance at the ISO 27001 information security management standard to realize that a security policy is much more than a set of access control rules.

Security policy includes people policies and procedures, good hiring practices, and acceptable usage policies backed up by top management commitment to data governance, audit, robust outbound data security monitoring (often called "DLP Light") and incident response.

Information security management is based on asset valuation, measuring performance with security metrics and implementing the right, cost-effective portfolio of security countermeasures.

A definition of trusted insider threats that is based on access control is therefore necessarily limited.

I would offer a more general definition of a trusted insider threat:

Any attack launched from inside the network by an employee, contractor or visitor that damages or leaks valuable assets by exploiting means (multiple accounts) and opportunity (multiple channels).

Using this definition, we can see that the trusted insider threat is a matter of asset value and threat surface, not just access control:

  • For example, employees in an organization that crunches weather statistics have nothing to gain by leaking the crunched data, since the assets have no intrinsic value.
  • For example, employees' tendency to click on Microsoft Office documents can turn them into a trusted insider threat regardless of the access controls the organization deploys, as RSA learned recently.

RSA was hacked in the beginning of March 2011 when an employee was spear-phished and opened an infected spreadsheet. As soon as the spreadsheet was opened, a backdoor Trojan called Poison Ivy was installed as part of an advanced persistent threat (APT) attack. The attackers then gained free access to RSA's internal network, with the objective of disclosing data related to RSA's two-factor authenticators.

RSA is a big company with a big threat surface, lots of assets to attack and lots of employees to exploit.

The attack is similar to the APTs used in the China vs. Google attacks from last year. Uri Rivner, the head of new technologies at RSA, is quick to point out that other big companies are being attacked, too:

“The number of enterprises hit by APTs grows by the month; and the range of APT targets includes just about every industry. Unofficial tallies number dozens of mega corporations attacked [...] These companies deploy any imaginable combination of state-of-the-art perimeter and end-point security controls, and use all imaginable combinations of security operations and security controls. Yet still the determined attackers find their way in.”

Mitigating the trusted insider threat requires first of all determining whether there IS a threat and, if so, finding the right security countermeasures to mitigate the risk. One wonders whether RSA eats their own dog food and had deployed a data loss prevention system. Apparently not.

Cross-posted from Israeli Software

Franc Schiphorst CEBKAC: Challenge exists between keyboard and chair ;)
People will have access to assets that can be turned against the business when taken out of context (certain files being "found" in the garbage by a newspaper, for example).
And they (both people and tech assets) can be abused, as in the RSA/Google/... cases.
The phishers only need to be right once; we need to be right all the time.
Tech is always behind 0-day exploits, and I've seen promises of 100% heuristic protection since the advent of the 1.44 MB floppy.
So the best defence is still people. Teach people to recognise "too good to be true" but also "hmmm, something's not OK but I don't know what", and then make sure YOUR door is open there and then: "hey, tech guy, what do you think?"
And also try to keep the scale on a human level, so people can cover each other because they know each other.

On data loss prevention (at RSA): that will be hard in a big environment where you are shipping secret, encrypted stuff all over the world.
It will be very hard to pick out a dodgy transaction. It will be impossible if a regular customer with lower levels of protection gets hacked as well, and that customer is used as the drop the encrypted paydirt is sent to so it can be forwarded to the final destination.
Danny Lieberman Franc

Well, theoretically the best defense is people who think, but consider that if RSA HR staffers can be spear-phished, it indicates that security awareness programs are probably not very effective.

In an APT attack, attackers take advantage of trust relationships in a slow, low-profile campaign, quite different from commodity malware and DDoS attacks.

An APT attack requires the attacker to succeed in a series of operations: reconnaissance, weaponization, delivery, exploit, install, command and control, and finally executing the mission, i.e. stealing data or disrupting operations. The attacker needs to succeed at every step; the defender can block the attack by blocking a single step.
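The arithmetic behind that last point can be sketched quickly. In the toy model below the per-step success probabilities are invented for illustration; the point is only that they multiply, so driving any one step to zero kills the whole attack:

```python
# Illustrative only: per-step success probabilities are assumptions,
# not measured data.
stages = {
    "reconnaissance": 0.9,
    "weaponization": 0.95,
    "delivery": 0.6,
    "exploit": 0.5,
    "install": 0.7,
    "command_and_control": 0.8,
    "mission_execution": 0.6,
}

def attack_success_probability(stage_probs):
    """The attacker must succeed at every stage, so probabilities multiply."""
    p = 1.0
    for prob in stage_probs.values():
        p *= prob
    return p

baseline = attack_success_probability(stages)
print(round(baseline, 3))  # roughly 0.086 with the numbers above

# Blocking even one stage (e.g. DLP stopping mission execution)
# drives the overall probability to zero.
blocked = dict(stages, mission_execution=0.0)
print(attack_success_probability(blocked))  # 0.0
```

Even with generous odds at each stage, the compound probability is small, and a single reliable block anywhere in the chain is decisive.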

Even if all the RSA internal security controls failed (which apparently they did...), using DLP at the perimeter to prevent leakage of SecureID assets could have foiled the final step of mission execution.

As a data security practitioner who has implemented DLP at large organizations, I can assure you that there are a number of DLP technologies that can handle the above mission.

Since RSA may or may not eat their own DLP dog food, it's hard to know if they even attempted such a line of perimeter defense.

What is certain is that, as I pointed out in my article, the decision to deploy network DLP is a function of asset valuation.

If RSA had valued their SecureID keys highly enough and performed a threat analysis that calculated Value at Risk, they might have concluded that network DLP was a cost-effective countermeasure.

Then again, since SecureID is two-factor authentication and, by definition, customer keys were not breached, perhaps RSA's calculation of VaR led them to the opposite conclusion, namely that the damage of a breach was not high enough to justify deploying more advanced data security countermeasures at the perimeter that could have prevented it.

In either case - understanding asset value (and not people vulnerabilities) is key to deploying the correct data security countermeasures.

Danny
Franc Schiphorst :) learning something every day.

But the conclusion then has to be they do not value their reputation? ;)

On the steps required, you can probably scratch (most of) reconnaissance, since outside sources like LinkedIn, Facebook and Twitter will be unmonitored.
With that info it probably does not even take a fake call to the secretary.

And if you scramble the assets enough, will they still be recognised as SecureID assets? Or just one of the many encrypted payloads sent to the customer?

We face the same problem the police has. Everything gets encrypted so what's inside?

But that's probably my lack of experience in DLP tech.
Danny Lieberman Franc

I don't have insight into how RSA values their reputation, but any sized business should be doing threat-focused risk reduction.

Risk = Asset value * Vulnerability * (opportunity, intent, capability)

Any 6th grader will tell you that you cannot calculate X*Y*Z without knowing the values of all 3 variables, yet strangely enough the entire security industry seems to be focused on one variable, spending most of its security budget on reducing vulnerability with firewalls, access control, IPS etc., without looking at the other two.
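The point can be made concrete with a toy calculation. All numbers below are invented for illustration; the only claim is the arithmetic:

```python
# Hedged sketch: scales and numbers are invented for illustration only.
def risk(asset_value, vulnerability, threat):
    """Risk = asset value * vulnerability * threat,
    where threat bundles opportunity, intent and capability."""
    opportunity, intent, capability = threat
    return asset_value * vulnerability * (opportunity * intent * capability)

# Halving vulnerability (firewalls, IPS, ...) halves risk...
print(round(risk(1_000_000, 0.2, (0.5, 0.8, 0.9)), 2))   # 72000.0
print(round(risk(1_000_000, 0.1, (0.5, 0.8, 0.9)), 2))   # 36000.0
# ...but halving attacker opportunity (e.g. closing channels) achieves
# exactly the same reduction, often at a different cost:
print(round(risk(1_000_000, 0.2, (0.25, 0.8, 0.9)), 2))  # 36000.0
```

A budget spent entirely on the vulnerability factor leaves two equally effective levers untouched.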

To mitigate APT attacks (or any data security threat, for that matter) you need to work on all 3 variables:

1) Understand asset value (if it's reputation, perhaps RSA is secure in theirs and figures that notoriety is a form of cheap PR).

2) Erode capability, understand intent and increase the opportunity cost for the attacker.

No security countermeasure is a silver bullet, but with hashes, filenames, detection of strong encryption and authorized channels it's not too hard to build a DLP policy that detects a breach of assets like SecureID keys. This is just ONE way of raising opportunity cost.
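A minimal sketch of two of those detection ideas, known-asset hashes plus a crude strong-encryption heuristic. The asset data, threshold and function names are all invented, and real DLP products are far more sophisticated:

```python
import hashlib
import math

# Hypothetical known-asset fingerprints: SHA-256 digests of files
# that must never leave the network. The sample bytes are invented.
KNOWN_ASSET_HASHES = {
    hashlib.sha256(b"example secret seed material").hexdigest(),
}

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def flag_outbound(payload: bytes, entropy_threshold: float = 7.5) -> bool:
    """Flag a payload that matches a known asset hash or looks encrypted."""
    if hashlib.sha256(payload).hexdigest() in KNOWN_ASSET_HASHES:
        return True
    return shannon_entropy(payload) > entropy_threshold

print(flag_outbound(b"example secret seed material"))  # True: exact-match hit
print(flag_outbound(b"quarterly weather statistics"))  # False: low entropy
```

Adding a whitelist of authorized channels on top of checks like these is one way an outbound filter raises the attacker's opportunity cost.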

Without knowing more about the RSA network - that's about what I can say.....

D
Danny Lieberman Regarding reconnaissance of social media like FB and LI.

Search engine tools in social media can be used by both defenders and attackers.

Suppose we hypothesize that an attacker has the intent to steal something from us. Search for your employees on Twitter and FB and build up a social graph. If you see that your HR employees are very visible on FB, couple that with your knowledge of attacker intent and you have just translated the social graph into a threat surface.

If the threat surface is big enough, it's time to talk to your HR employees, tighten up their network and change some passwords.
Don Turnblade So we come back to the view that InfoSec, to do its job, has to have a map of the flows of sensitive data. It has to know where the electronic money is. It has to know where the intellectual property is.

Start with a simplistic model: value of the data times the threat vectors against that data.

Taking it to the next level: cost of a damaging attack times damaging attacks per year.

So then we face the vacuum: where do we get that data? We also face a chicken-and-egg problem: we can collect data on risks we know about, but if we do not know something is a risk, we have no sensors to measure the justification to buy sensors.

So, all models are wrong, but some are useful.
Start here.

Professionals blunder at a rate of about 0.33% per year. 95% of the time they blunder less often than 0.54%. Their blunder rates are never zero.

Let's use the following model of attack rate:
Attacks/staff per year
Low = 0%
High = 0.54%
Typical = 0.33%

Human interview data tends to underestimate statistics. The following rule is useful:

Average = 0.185 * High + 0.63 * Typical + 0.185 * Low
Sigma = sqrt(0.185 * (High - Average)^2 + 0.63 * (Typical - Average)^2 + 0.185 * (Low - Average)^2)
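Plugging the blunder-rate figures above into these formulas is a quick sanity check (rates expressed as fractions):

```python
import math

# Three-point weighting from the comment:
# Low = 0%, Typical = 0.33%, High = 0.54% attacks/staff per year
def three_point(low, typical, high):
    avg = 0.185 * high + 0.63 * typical + 0.185 * low
    sigma = math.sqrt(
        0.185 * (high - avg) ** 2
        + 0.63 * (typical - avg) ** 2
        + 0.185 * (low - avg) ** 2
    )
    return avg, sigma

avg, sigma = three_point(low=0.0, typical=0.0033, high=0.0054)
print(f"average blunder rate ~ {avg:.4%}, sigma ~ {sigma:.4%}")
# average comes out just below the typical rate, ~0.31% per staffer per year
```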

But, we might say, this means interviewing experts in the company to get high, low and typical estimates of revenue, revenue/record, revenue/computer, revenue/database, GB_Data/database, new records/yr, revenue/GB_online_database_info... yada, yada, yada.

In the end, a back of the envelope model provides the first cost justification for the sensor and first cut protection by computing a rough Return On Security Investment.

Then, when the sensors go in, better numbers on attack frequency come in.

Now we start talking to Anti-Fraud, Internal Audit (whoa... there is a reason these people exist?!) and Legal. From these, we can start getting actual breach impact numbers: high, low and typical.


Then we really can have numbers like:
Dollars of damaging attack impact * damaging attacks per year = dollars/year of expected damage.

Then we can put in uncertainty.... ask your local Six Sigma Blackbelt if the statistics is a problem.... or just solve the math...

Risk Sigma = (Impact +/- Si) * (Frequency +/- Sf) - Impact * Frequency
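One way to make that expression precise is standard first-order error propagation, assuming impact and frequency are independent; the comment does not spell out the exact method, so that is an assumption here, and the figures are invented:

```python
import math

def expected_damage(impact, s_impact, frequency, s_frequency):
    """Expected damage = Impact * Frequency, with first-order
    propagation of the two uncertainties (independence assumed)."""
    damage = impact * frequency
    sigma = math.sqrt((frequency * s_impact) ** 2 + (impact * s_frequency) ** 2)
    return damage, sigma

# Hypothetical figures: $250k +/- $80k per damaging attack,
# 1.2 +/- 0.5 damaging attacks per year
damage, sigma = expected_damage(250_000, 80_000, 1.2, 0.5)
print(f"expected damage: ${damage:,.0f}/yr +/- ${sigma:,.0f}")
```

The sigma band is exactly the "natural fuzz band" referred to below: it tells you how much of the exposure is irreducible estimation noise.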

Then, we can compute the average attack and natural fuzz band uncertainty around it. We can use statistics to estimate the amount of corporate insurance one might want to buy. Whether it is cost effective to control the risk in house,... yada, yada, yada...

To be simple,
How many Staff do you have?
How many Revenue Producing Records do you Have?
How much Profit do you make per year?

From these, cost effective numbers of Dollars at Risk per year can get computed.

Pretending that it is not InfoSec's job to do Business Impact Analysis computations, even back-of-the-envelope prototypes for the business, is a royal waste.

Just turn in your CISSP if you refuse to do Domain 3, Business Impact Analysis, because you do not believe that value to the business is the driver that justifies technical security.

Or take some notes. Put some of this in a spreadsheet and interview people outside the walls of InfoSec. Really, they are worth talking to.

By now you are likely so steamed with me that you would never bother to think of me as a resource. But I have been drilling on the dollars and cents of InfoSec good sense for about five years now.

I can save you years of time if you wanted the help. For now, the best I can do is give you a swift kick in the .... yada... yada and yada...

Don Turnblade Think of Business Impact Analysis as real InfoSec. When we do it, we compute dollars/year in risk exposure and cents of security per business transaction, the kind of stuff that gets Web application security a better budget than the free coffee at work.

When InfoSec has an ROI of 254%, as shown in a Ponemon study, the odds are good that your return on investment will be huge compared to your competing business units.

Saying that we have 10 ways for every staffer to bankrupt the company, but that encrypting a hard drive cuts that in half, was not so hard. You can do it.
Danny Lieberman Don

I agree. Let me share our experience with Practical Threat Analysis over the past 5 years. I am a big proponent of BIA and quantifying risk with threats and I think we have had some great success stories. Not all of which can be told.

1) Every line of business manager we have ever met understands the notion of threats, assets, vulnerabilities and countermeasures immediately.
2) Most IT people reject the notion of estimating asset value.
3) Every CFO will argue the asset valuation question but peg a number in a few minutes.
4) Good CIOs realize that a threat model is an approximation, and that a good model will have 70% accuracy with only 5-10% of the work involved in detailed measurements.

You are correct in your iterative approach - using practical threat modeling to get a first cut that justifies 2-3 countermeasures and using your inbound/outbound data security events to calibrate and improve the model.

Regarding Ponemon: I would discount most of their numbers, since their research methods are dubious and their studies are paid for by vendors.
The views expressed in this post are the opinions of the Infosec Island member that posted this content. Infosec Island is not responsible for the content or messaging of this post.
