Better Security Through Sacrificing Maidens

Wednesday, September 15, 2010

Pete Herzog


I began this as an answer to some questions, but then I realized I will never successfully explain the OSSTMM 3, its security metrics (the ravs), and its trust metrics if I only answer the questions asked. I need to address this properly by explaining the background as well, because the OSSTMM 3 is apparently very different from what most people expect out of a professional security model, or even from what they think security is.

I think the problem people have with the OSSTMM 3 is that they expect that some things are required or necessary in security and they just don't find it there. They think estimating attack frequency, attack types, and vulnerability impact are all needed to properly and successfully defend themselves. But those things aren't used in the OSSTMM (except in very special cases of physical and wireless security verification testing) to build "good enough" security. This leads people to think it's missing or wrong.

Now we all see people who say that security is about the process and we see them fighting a losing battle. We see them just do more of what they're being told to do by the compliance requirements, books, and blogs and it's not working or it's not scaling. The problem is we are being taught to build defenses like consumers and it isn't working.

That's why we took a different direction with the OSSTMM 3. If we keep doing what we know doesn't work even "good enough", why keep doing it? It wasn't until we accepted that there are things we can never reliably know that we knew we had better find the limits of what we did know. So then at least we'd have that going for us. For example, we know that we can't reliably determine the impact of a particular vulnerability for everyone in some big database of vulnerabilities, because it will always depend on the means of interaction and the functioning controls of the target being attacked. But we do know how a particular vulnerability works and where. Which means we needed a way to categorize and rate vulnerabilities not on some arbitrary weight of potential impact but rather on what they do. Then, for anyone whose operations match where the vulnerability lives and who is missing the controls that would contain or stop it, we would truly know the impact would be greater than zero for them. Therefore, by focusing on operations, we can devise tactics to respond to them.

Next we realized we had to look for the security particle. What can we use to make security? Where is the security equivalent of materials science? How can we reliably build a strong defense if we don't know what it even means (or, more interestingly, how the hell are we selling it if we don't know what it means?). So we needed to do some serious fact finding. We needed ground rules that we know we can use as a solid foundation. For example, we know that there are only 10 types of operational controls which can be applied, 5 which protect through interaction with the threat and 5 which don't. We know that authentication will ALWAYS fail if either authorization or identification is stolen or misappropriated. We know that there are only two ways to take a physical asset: you either take it or you have it given to you. We know that operations require interactions with something, and that something can be malicious. So we designed a way to reliably verify what we know and organize the information into intelligence.
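As a sketch of that ground rule, the 10 operational controls split into two classes of five. The control names below follow my reading of the OSSTMM 3 draft's classification; treat them as a paraphrase to illustrate the structure, not a normative excerpt from the standard:

```python
# Sketch of the "10 operational controls" ground rule.
# Class A controls protect through interaction with the threat;
# Class B controls protect without interacting with it.
# Names paraphrase the OSSTMM 3 draft -- verify against the standard itself.

INTERACTIVE_CONTROLS = [   # Class A: protect through interaction with the threat
    "authentication",
    "indemnification",
    "resilience",
    "subjugation",
    "continuity",
]

PROCESS_CONTROLS = [       # Class B: protect without interacting with the threat
    "non-repudiation",
    "confidentiality",
    "privacy",
    "integrity",
    "alarm",
]

def classify(control: str) -> str:
    """Return which of the two control classes a named control belongs to."""
    if control in INTERACTIVE_CONTROLS:
        return "interactive"
    if control in PROCESS_CONTROLS:
        return "process"
    raise ValueError(f"not one of the 10 operational controls: {control}")

# The ground rule: exactly 10 controls, split 5 and 5.
assert len(INTERACTIVE_CONTROLS) == len(PROCESS_CONTROLS) == 5
```

The point of pinning the list down as data is exactly the article's argument: any defense you deploy is an instance of one of these ten, so you can verify coverage instead of guessing at threats.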

Now that we were fact-finding, we found that much of what had been assumed to be fact, and turned out to be false, came from opinions handed down by authority. As a matter of fact, did you know that there's a huge, common body of security knowledge out there built mostly on anecdotal evidence and authoritative opinions passed around via transitive trust (X trusts Y and I trust X, so I can trust Y) that is used as if it's all true? I know, I am shocked as well! All this led to a general hack and slash through OSSTMM 2, leaving it as hollow as a pun at a funeral. We needed to start over using only the facts.

As we built the new OSSTMM as version 3, we began presenting and teaching these facts. I won't lie to you and tell you it was as pretty as a royal wedding in June. There was, ummm, "resistance". The consensus was that you can't deny the fact that some attacks are more persistent, more threatening, and more damaging than others. We didn't. Instead, the security industry wants you guessing how criminals are going to attack, which is often a psychological exercise of "thinking like a criminal" accomplished by people with nice homes, nice jobs, and a good night's sleep last night. Did you know you can even be certified that you can think like a hacker because you use the same tools as them? I know, I am shocked as well! They like to tell you that criminals follow a pattern, but they really don't (see the Hacker Profiling Project for evidence of that). What we were seeing is the inherently unqualified opinion inside Risk being marketed as fact within the security industry. Risk is a real thing. It exists. However, the results of determining Risk are often made up.

Insurance companies use mountains of historical data to reduce risk. Wall Street uses mountains of trends current to the most recent second to reduce risk. Casinos use predetermined probabilities to reduce risk. As it turns out, the security industry uses quick response to reduce risk. Whether it comes from attacking our own software in vulnerability research, the use of AntiVirus to show us what's infecting us, or any of the hundreds of types of ways we have to show us we've been hit, security is an industry that uses current losses to protect future investments. Not only is that pretty dangerous but it's a horrible case of tunnel vision because it leads to defenses against specific attacks which had already happened. So the typical enterprise security today is one that is properly prepared to sacrifice something to an attacker now so they will be 100% prepared against it later.

For this backwards method we have to thank all those who think they should use Risk in the security industry. They don't realize it can't work here like it does in the other industries. For example, in security, the types and areas of attack change with technology, so the use of historical data the way Insurance companies use it is just not relevant. Unlike Wall St., we can't watch all the current trends with enough insight to know for certain what the next attack will be, or with enough speed to react before it hits. That doesn't stop security from making it look like we can: researchers quietly tell software companies about holes, the companies release patches, the researchers predict those patches will get reverse-engineered and turned into exploits, and then they are lauded for their predictive prowess by their followers when it inevitably happens (anywhere, on any scale). And you maybe wondered why some security researchers get ticked off when you do full disclosure: because you're SOC-blocking their moves!

Another valid point is that Wall St. races other people to jump on and off trends whereas we need to race packets which travel at the speed of light. This also makes me wonder if the people who bought into "real time network monitoring" heard the fable of the tortoise and hare so often as children that they took it literally (or never turned on a light switch?). Finally, we also don't have the luxury of allowing some big losses like casinos where we can fix the odds and just hope to survive through the heavy hits because we'll win in the long run (although it looks like some government departments are actually trying this).

Now some of the Risk analysts within the security industry tell us that the problem isn't that we can't predict it, but that there are too many data points right now to reliably guess the future. Basically, we need to get better at guessing. They say we need better models because then we can better forecast the problems. I see this approach in the other industries, and I don't need to tell you how poorly it prevents financial meltdowns on Wall St., how exclusionary the guidelines are for getting pay-outs from Insurance companies, or how many lives are destroyed through gambling addictions at casinos. The truth is that in all other industries using Risk there has to be a loser. And the loser, unfortunately, isn't the attacker. It's one of us. It's one of the ones we should be defending. It's like the story where the king feeds a maiden to the dragon every full moon to protect the rest. The dragon isn't losing. Sacrificing some of a town's denizens so the others can survive is no way to keep it safe. What happens when another dragon shows up? And werewolves? And then the people turn into zombies? Threats change and come from unexpected places. The worst way to handle threats is to try to estimate them out of existence with Risk, because it allows you to ignore some of the impact as inconsequential to the greater, or more selfish, number of beneficiaries. If you remember the story, the king didn't like it too well when it was his daughter that was fed to the dragon.

When we look at why we need Compliance, it's because of selfishness. Businesses put their profits above defending their customers and business partners. Interestingly, the Compliance rules themselves are written to the greater good, which means that some companies won't be able to afford the required products and therefore can't do business their way online. So the rules need to be lax enough that only an acceptable number of companies can't afford them. Still, some of those who can't afford them will try to circumvent the rules to stay in business. But the Risk estimates will have considered this and ensured that only an acceptable number of people get hurt by those companies. What you have here is the use of Risk to further manage Risk, and it's not working. We're just feeding the dragon.

At ISECOM we saw that what we needed was a way to create security so that the only loser would be the attacker. Which meant we had to do it without regard to the type of attacker, their motives, or the probability that they will only want to eat a maiden during the full moon. That's how we learned that you don't even need to know what the threats are or might be to defend against them reliably. See, that's the funny thing, because you are protecting against the unknown anyway. So if you don't need to know that, then you don't need to know the impact of a particular threat or the result of a particular vulnerability either. You just need to know what limits your controls have and which operations are interactive with which parties. Now this isn't us saying that Risk goes away, no, not at all, but what we are not doing is looking for acceptable or "good enough" security at the expense of our own. So we do not use Risk to build our security. Instead we suggest you use the facts we know about security and the facts that give us reason to trust.

To build and verify security without using Risk, you need to learn the three main tools in the OSSTMM 3 which help you do this. Without them, you won't be able to do it successfully. Just as importantly, you won't have to rebuild from scratch to do it. You just need to verify and categorize what you have and how it works.

The three tools are operational security metrics, trust metrics, and an OSSTMM 3 test. All three come from the same research, but each provides different intelligence. This leads people to get confused and find the whole thing overly complicated; apparently it's worse than guessing, which is easy to do but nearly impossible to do consistently right. (Then again, in security these days being right isn't as important as showing your work, because failing through the status quo is also success in this screwed-up security culture, with its acceptable CYA phrases like "If an attacker wants in they'll get in no matter what" and "There's no such thing as perfect security.")

The OSSTMM 3 test provides the following intelligence:

 1. What the scope is and which were the targets tested,
 2. What the test type and vector are,
 3. Classification and enumeration of interactive points (operations),
 4. Classification and enumeration of operational controls,
 5. What types of tests were NOT performed on the scope,
 6. What are the limitations of any of the controls,
 7. Which operations do not work as expected (these usually provide additional, unwanted, or unknown interactive points).

That is what you need to know in order to calculate the Attack Surface for which we use the ravs, a measurement like mass, which shows the balance between controls, limitations, and operations. The really good thing about ravs is that they are not weighted. Therefore the values of certain vulnerabilities do not come from someone's assumptions of impact but rather from which interactions you allow and which controls you have in place to assuage damage. This flexibility means that ravs can be compared regardless of target types or scope.
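To make the unweighted idea concrete, here is a deliberately simplified sketch. This is NOT the actual rav formula from the OSSTMM 3 (which has its own scaling); it only illustrates the principle the paragraph describes: the score comes from counts of operations, controls, and limitations, never from anyone's assumed impact weights:

```python
def toy_attack_surface(operations: int, controls: int, limitations: int) -> int:
    """Toy, unweighted balance of operations, controls, and limitations.

    NOT the OSSTMM 3 rav calculation -- just an illustration of the
    principle: every interactive point (operation) adds exposure, every
    functioning control removes some of it (a control can't cover more
    operations than exist), and every limitation (a hole in operations or
    controls) adds exposure back. No per-vulnerability impact weights.
    """
    if operations < 0 or controls < 0 or limitations < 0:
        raise ValueError("counts must be non-negative")
    exposure = operations + limitations - min(controls, operations)
    return max(exposure, 0)

# More controls shrink the surface; more limitations grow it.
bare     = toy_attack_surface(operations=10, controls=0, limitations=0)  # 10
hardened = toy_attack_surface(operations=10, controls=8, limitations=0)  # 2
leaky    = toy_attack_surface(operations=10, controls=8, limitations=3)  # 5
assert hardened < bare and leaky > hardened
```

Because the inputs are verified counts rather than weighted guesses, two such scores are comparable across targets of different types, which is the comparability property the article claims for ravs.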

The OSSTMM test results are designed to provide a lot of different information clearly for the analyst. What it won't do is tell you which kinds of attacks are coming, how often, from where, and what the financial loss will be of that attack. But you do now have much more exact information to calculate those things if you want to because you know exactly how vulnerable each system is alone and collectively, the points of failure, the only places where an attack can be made, the lack of controls, and the redundant, useless controls. You know what wasn't verified and is therefore unknown.

You might not know if an exploit will happen, but if it can, you'll know the paths it can take, which servers or services will succumb to which type of attack (and therefore the only types of attacks you can expect to get through), and which will not, because they have the right controls.

Now, to better organize that information, we have the STAR (Security Test Audit Report) and we have the rav calc sheet in open doc format and in XLS format.

The STAR allows you to give a new type of overview to your client which shows exactly what is deficient, where and why. It shows which tests were not done and why. It allows for future comparisons with other tests from other consultants. It allows for continuous internal verification and measurement of change or improvements. It allows a business to manage security based on need instead of speculation. Therefore, a business could address Compliance by having a particular percentage rav instead of particular products. It would turn an enterprise's security from being a reactive, consumer culture to a preventative, resourceful one.

The rav calculation sheet is how the Analyst organizes the information from the OSSTMM 3 test. A security test may require multiple rav calc sheets, as a new one is suggested for each change in vector, channel (physical, wireless, data networks, etc.), or type of test (black box, gray box, reversal, etc.). All these can later be combined in aggregate for a "big picture", but for analysis purposes it is easier to keep them separated. This sheet will let you see easily what needs controls, which controls are redundant, and which services should be closed. One of the more interesting things you'll see when you use it is how narrow the controls are in the modern "secure" network. Sure, it's defense in depth, but that doesn't help you when you're protected by the same type of control all the way to the core. Bypass one type and you bypass them all. Almost all modern security is focused on Authentication, which is interesting because the identification process everywhere is pretty bad, and on the Internet it's downright awful. Next, you'd see some Confidentiality because of all the encryption being built into protocols by default. However, it's Alarm that is the most prevalent control, because modern network security is reactive. It's all about waiting for the dragon to show up and feed on one of your maidens before alerting the rest of the town that the dragon came back.

The rav calc sheet can be as granular as you want, such as in the SCARE project, which shows how to use it with source code, or with the companies who use it to measure web app attack surfaces by the interactive points in the web app itself. So you get the info you need at your fingertips to make bigger, better decisions. One of the handy things about this is placing monetary values on the server, service, app, or whatever based on business process requirements. These provide you with historical business data from which to make business forecasts to compare against what that server, service, or app cost to make and what it costs (perhaps annually) to keep it running and controlled. This sheet can be your sandbox. Right on the sheet you can play your war games: closing services, adding the results of products you haven't bought yet, seeing what happens when a particular service is compromised or denied, and so on, to see how much it changes the attack surface before you physically change a single thing on your servers. That rav delta can then be assigned a value based on operating costs and income from the business processes it is a part of, to see whether the new product gives enough bang for the buck.

Now trust metrics are almost a different beast. They relate to the OSSTMM in that the factual information you get from verification can be used in the trust rules you generate to make a decision. The trust metrics fill the gap in the OSSTMM by helping you understand what cannot be verified or known, by having you examine what your reasons to trust something are. You apply trust metrics when you need to know how to approach the unknown. In that way they are similar to Risk, but the similarity stops there. They let you compare what you have and know against degrees of what you don't know in an even fashion. By only looking at what reasons you have to trust something new, you avoid false speculation, something human beings are notoriously bad at.

For example, you would use trust metrics to determine if a new partner network should be connected to your own. Or how much access you should give to visiting consultants. Or whether you can depend on that new cloud provider. You could get rav scores from each network, but that won't help you if they are secure against the world but malicious to you. So you use the trust metrics to determine how much reason you have to trust them and why. The properties you measure them against can be found here. It has you evaluate reasons to trust against 10 non-fallacious rules and shows you which reasons to trust are strongest and which are weakest. Therefore, hopeless romantics beware: it may cause uncomfortable flashes of reality.
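A minimal sketch of the mechanics, under assumptions: the rule names below are placeholders standing in for the OSSTMM 3 trust properties (the actual ten are defined in the standard), and the flat 0.0-1.0 scoring is my simplification. What it shows is the shape of the method: rate each reason to trust against each rule and let the weakest areas surface:

```python
# Hypothetical sketch of evaluating "reasons to trust" against rules.
# Rule names here are PLACEHOLDERS -- the real ten trust properties are
# defined in the OSSTMM 3 itself; the 0.0-1.0 scoring is a simplification.

TRUST_RULES = ["size", "symmetry", "transparency", "consistency", "integrity"]

def trust_profile(scores: dict) -> tuple:
    """Return (average score, strongest rule, weakest rule) for one party."""
    for rule, value in scores.items():
        if rule not in TRUST_RULES:
            raise ValueError(f"unknown rule: {rule}")
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"score out of range for {rule}: {value}")
    average = sum(scores.values()) / len(scores)
    strongest = max(scores, key=scores.get)
    weakest = min(scores, key=scores.get)
    return average, strongest, weakest

# A visiting consultant: very transparent about who they are, but you have
# almost no history (consistency) with them yet.
avg, strong, weak = trust_profile({
    "size": 0.4, "symmetry": 0.5, "transparency": 0.9,
    "consistency": 0.2, "integrity": 0.6,
})
assert strong == "transparency" and weak == "consistency"
```

Surfacing the weakest rule is what makes the later point about framework contracts work: you know exactly which trust area to shore up with controls or contract terms before saying yes.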

The end effect of trust metrics is that if you did this for each partner, you could create a framework contract that specifically highlights the weak trust areas to create greater assurance. Or you could say no and show them what you need before you say yes. Or you could make the financial rewards more substantial for yourself or the penalties higher for them. Or you can just give them less access with greater controls if you want to be politically correct about the whole thing. With trust metrics you act and protect to an acceptable level of interaction rather than an acceptable level of loss. What you definitely don't need to do is take a chance based on an estimate of the acceptable number of systems which could fall to the malicious attacker. Because that again would be just feeding the dragon.

Hopefully I've explained here clearly why we did what we did with OSSTMM 3. Combining the OSSTMM 3 verification results with ravs and trust metrics lets you build stronger infrastructures by looking at where you are strong against everything you have no reason to trust.

Now, whether or not you agree with what is said here, and some may have fundamental problems with our reasons for taking the OSSTMM 3 in the direction we have, you cannot dispute the value of the information provided by an OSSTMM 3 test. Some of you may be wondering what the Risk would be to give up on Risk and try such a strange, new method. You can only answer that for yourself. Only you know if your Risk method of security will scale indefinitely with you, if the costs of speculation and response products and processes are greater than your actual losses, and if you have enough maidens in your organization to feed all the dragons who show up during the full moons.

John Verry Pete,

I'm not always sure I fully understand your thought processes and I am sure that I don't always agree with you -- but I do know that you always make me think about infosec in ways I would not normally, and I am better for that.

Many thanks --

Edward Fjellskål I find your approach a more "natural" one, and closer to my own way of thinking. Though you're fed with "other correct ways of thinking security" and I feel you when you say it has not been pretty and there has been some "resistance".

Wish you luck with OSSTMM 3 :) and thanks for your perspectives!
Edward Fjellskål BTW, you will still lose maidens, but this time you will not sacrifice them. But I fail to see how the dragons are not winning still? Did I miss something?
Pete Herzog John, Mike, thanks for your feedback! Unfortunately I was trying more to enlighten you than challenge you :/ Maybe I need to work more on my delivery ;)
Pete Herzog Edward, the point is not to lose anyone. My biggest complaint is relying on a model that sacrifices some to save more; if you build correctly you shouldn't have to lose anyone. The risk that something in your network will get attacked is real. It's there. That risk won't go away. I'm not suggesting you ignore it. What I'm saying is that you don't build a reactive security model that requires waiting for an attack before building a defense specifically for it. Over time, you'll only have many very specific defenses that you will still need to manage on top of your detection for new attacks. Since you're using specific controls (often the blacklisting kind), you will need more of them. More controls will also increase your attack surface. So you're just making things worse for yourself in the long run and will still be open whenever the threat adapts or changes.

You need to build controls in width at all points of interaction, but since that can get restrictive and also expensive, you should do so only according to trust levels. The fact that you have reason to trust assures that the safeguards are in place for those you are then "trusting". This will create a defense that isn't patchwork and doesn't require periodic updates or nearly as much maintenance. This allows you to focus on re-evaluating trusts so that you can use new technologies which make life easier or improve business.

To start, I would make all Compliance directives require a particular rav level instead of products so that the focus returns to operations. I would also require that all policies outline trust levels according to reasons to trust and show how they calculate that. It would not only simplify things for the auditor, it would make life safer for the rest of us.
Jeremy Wilde Compliance directives do not demand products; one of the unfortunate things about some compliance directives (like SOX) is that people look at products as a way to automate an unpalatable resource hit in getting and staying compliant.
Rules don't work; they encourage dumb conformance, and the legislators have mostly acknowledged this and now seek outcomes-based compliance with the spirit and purpose of legislation. Companies can now take, yes, a risk-based approach to compliance and build operations that show that outcomes are being achieved. They can do this however they want: manually, bespoke, or with third-party tools, and they could choose to do it by using the rav, a simple algebraic expression of some nice security linguistics, or the Trust metrics.
We see this in the requirements of SOX 2010 and a lot in the recent UK/Europe legislation - Solvency II and the RDR.
My article at refers.
Pete Herzog Jeremy, you're right that not all Compliance directives are product focused. Some focus only on policy, which is even worse ;) But yes, you're right about SOX, then again it's also why I usually concede most Compliance points to you anyway. However, when writing that, I was not thinking SOX or BASEL II, I was thinking PCI-DSS, which does specifically mention AntiVirus and Firewalls, two products which many networks are better off without. I'm sure there are other industry compliance directives that do the same. I just need to look up which ones first :)
Danny Lieberman Pete

Some comments.

First of all - a big thank you for investing so much effort in promoting the notion of metrics.

Unfortunately - it's a pretty long-winded and rambling article and I personally had trouble following your train of thought and understanding what you want from my life.

It seems that you are trying to promote your security testing methodology, but since the links provided in the article are all password-protected files, I would tone down the "Open source peer review" market-speak until you actually do provide the source documents freely to your peers for review, or find a different marketing mantra like "metrics methodologies on demand".

Now to the parts I did sort of understand:

1. I understand that you are proposing a testing methodology to security consultants on a subscription basis. Reading the article, I could not understand how you provide more value than testing to ISO 2700x which is probably the most comprehensive set of vendor neutral standards for security with enough eyeballs and certified customers and analysts to give it considerable traction across thousands of consultants and end user organizations.

2. Regarding risk models. I think you missed the key connection between metrics and risk models.

There are lots of models in this world - financial risk, credit risk, market risk, earthquake risk based on Levy distributions, operational risk models based on EVT and then there is the risk to data and information systems - which most of the readers of this web site deal with. Data and system security are relatively new disciplines and most security professionals lack formal training in mathematical / systematic models of risk based on empirical data. Note usage of the words "systematic and empirical".

This makes it difficult for the readers of this site to separate the wheat from the chaff.

Having said that - I think we can all agree that a cubic spline approximation to a time series of data breach incidents in a particular industry is not going to be particularly useful in forecasting a future event - even though it might be of utility for calibrating digital thermometers and predicting deviation from calibration.

What is useful (as you have pointed out) is collecting metrics - sharing them with business partners and using the metrics to improve your operations and products.

Metrics are a first step towards collection of empirical data in the security industry - and I definitely don't mean fluffy vendor driven marcom stats of the sort provided by the Ponemons, Symantecs and Mcafee of this world.

Once we have empirical data - we can start talking about what physicists call "physical world" models that EXPLAIN the data based on interaction of attackers, vulnerabilities, entry points, countermeasures and asset value.

Once you can EXPLAIN how attacks happen - then you are on the road to testing the model against the empirical data.

And ... in this world... that means holding up your model to public (not password-protected) scrutiny.

Best regards and keep those metrics coming

Pete Herzog Danny, I guess you and I both should take a class on improving our long-winded, rambling communications :) I really do wish I were a more eloquent writer, I do. I took classes on it and I practice, but when I get passionate about something I end up writing like I speak, which is mostly just rambling. Thanks for sending me your feedback anyway.

I disagree that you need to explain how attacks happen to have good defenses. We have enough knowledge about weaknesses to know that points of interactions need controls. Waiting to see how they happen is just the same as letting them attack you so you can learn. It's unnecessary. Of course that kind of application of security measures would ruin many Hollywood movies (absolutely ruined The Killers for me- well that and the acting).

Other than that, you're spot on with the "physical world" model which is precisely what we made (remember the bit about looking for the security particle). So yes, we made that model and here I am, explaining how we need to change what we are doing so as to make better security based on those facts.

I also checked all the links you say are password protected and can't find anything out of reach from the general public except the full version of the OSSTMM 3 draft. Now everyone who worked on it or did peer reviews of it has read it (there was a public call for reviewers but you must have missed all 50 of them). Incidentally, ISO 27000 series folks do have copies of the draft, yes, all of them from all countries, because they want to review it for inclusion in their next version of their standard. You like ISO 27001 so I assume you're also involved in it so you probably know that they weren't completely happy with it but they do like what OSSTMM does so they want to study how they can add it.

Anyway, so as not to be long winded, let me conclude by informing you that the subscription model is for the supporters who can't contribute to writing or reviewing but still want to read it. Often because they make money from using it. I mean it's really only fair that those who help work on it or support it get it first.

I look forward to reading your peer review of OSSTMM 3 when it's ready to be released for full, free, public peer-review.

Danny Lieberman Pete

Password protected

Understanding the physics of attack is essential to:

a. threat modelling and..
b. threat mitigation

You wouldn't use an anti-virus to mitigate a vulnerability in an ASMX file

Pete Herzog Danny, it's protected against change because of all the silly people who overwrite math fields by accident, not to keep you from peeking at the fields. That's why we posted the actual formula here:

We can prove threat modeling to be a waste of time. Threat mitigation is what the whole article was about. And you shouldn't use AntiVirus for anything if you actually want to protect something.

Again, we did break down the physics of all attacks as far as they could go and found, among many things, that they all are covered by 10 operational controls. So you don't need to study new threats to build your defenses because 1. it will always fit in the 10 op controls model, and 2. it will lead to patching against 1 particular attack. Every control you add increases the attack surface. You don't want that.

Jeremy Wilde Heuristics is mainly for security awareness, and threat modelling is often not necessary; it's more a focus on protecting the mission statement and increasing security maturity, i.e. getting more bang for your buck.
Dj Spydr Good Article!!! Unfortunately, putting it into practice depends on the stakeholder culture of the organization. I find it easier to determine my controls based on attack surface and overlapping controls; some folks can't comprehend that and look at service point/risk control models because that is what their experience is driven from. As a result, implementation is an educational hurdle.
Pete Herzog Thanks Dj! And you inspired the theme for my next article: the culture of easy insecurity. I blame the design of the modern human brain, but then again who am I to question the plans of the Overlords? ;)
Danny Lieberman Pete
a. I couldn't even open the file
b. How is your 10-control framework better than ISO 27001?
c. How do you propose to determine the right security countermeasures without performing threat analysis?
d. I'd love to see your proof of why you don't have to do threat analysis/modeling

Pete Herzog Hi Danny,

a. Sorry you can't open it. We try to stay within the PDF/ZIP/ODF formats, so if you tell me specifically which one you can't open, I'll help you.

b. ISO has been working with us because the next version of ISO 27001 will likely contain the OSSTMM 3 framework to cover operational security. The problem with ISO 27001 is that it's a great strategic document, and I've even seen tactical checklists made from it, but it doesn't touch operations the way the OSSTMM does. Perhaps ISO will choose not to integrate the OSSTMM there and instead integrate it somewhere else, or make it its own standard. I don't know. I do know that the discussion is ongoing, and you can contact your country's ISO representatives to find out more.

c. Threat analysis is tunnel vision. You assume you can guess all the threats and all the vectors. Instead we propose you verify that your interactive areas have Defense in Width: multiple different controls at those points, with more controls in less trusted areas and fewer controls in more trusted areas. And by trust, I don't mean your gut instincts; I mean via OSSTMM 3 Trust Metrics. Then you aren't protecting against only specific threats and vectors but against all new threats as well, even the dreaded "Black Swan".

d. Try it. You can do it with an old Linux box, or better yet an old Windows XP SP1 box, so you can add a wide array of controls to it via programs rather than manual configuration. Don't apply the "typical" products: AV, IDS, firewall, and patches. Instead, lock down the registry and directories against change or unauthorized execution with a Winpooch-style whitelist (I deny everything first with ASK, reboot, then walk through the apps I want to use and allow them specifically). Configure your online apps to run over HTTPS wherever possible. Limit app privileges à la Polaris, and control online interactions à la Sandboxie - typically for the browser, e-mail, MS Media Player, Acrobat, Java, and anything else that interacts with the outside world. Connect to the Internet via NAT. You will then end up with a working XP that NEVER needs threat analysis, patching, or automatic blacklist updates again. That's Defense in Width.
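The Defense in Width rule described above - more distinct controls at less trusted interaction points - can be sketched as a simple coverage check. Everything here is invented for illustration (the point names, trust levels, and the required-control rule); real OSSTMM 3 trust metrics and rav calculations are considerably more involved:

```python
# Hypothetical sketch of a Defense in Width check: each interactive point
# should carry at least as many distinct controls as its lack of trust demands.
# Trust levels: 0 = untrusted (Internet-facing) ... 2 = highly trusted (internal).
points = {
    "browser":  {"trust": 0, "controls": {"sandbox", "whitelist", "NAT"}},
    "e-mail":   {"trust": 0, "controls": {"sandbox", "whitelist"}},
    "intranet": {"trust": 2, "controls": {"whitelist"}},
}

def required_controls(trust: int) -> int:
    """Invented rule for the example: 3 controls at trust 0, 2 at 1, 1 at 2."""
    return 3 - trust

# Deficit per point: how many more distinct controls it still needs.
gaps = {name: required_controls(p["trust"]) - len(p["controls"])
        for name, p in points.items()}

for name, deficit in gaps.items():
    print(name, "ok" if deficit <= 0 else f"needs {deficit} more control(s)")
```

Note that the check asks only "what is covered and by how many distinct controls?", never "which threats do we expect?" - which is the shift the comment is arguing for.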
Danny Lieberman Pete

I will try and open the file again.

No argument on the notion of operating secure systems but I think you and I have different concepts of what threat modeling/analysis mean and scope.

If you don't run any vulnerable services on a machine, then you don't need a firewall, etc. That's obvious - and I don't need a fancy methodology to tell me that.

Threat analyzing a single machine is not effective.

But -

if you have a complex system of people and software and data, then you should be prepared to perform a threat analysis starting with a clean sheet of paper and nothing but the brains of the people involved, instead of locking yourself into some checklist that a committee wrote and that was outdated the day it was publicly released.

In my experience, even in monoculture Windows corporate IT environments there is benefit in spending a few days once a year modeling and analyzing threats. The thought process is valuable.

And in a complex sensitive environment of embedded software development such as medical devices - it is a required part of the development process.

Also - I want you to consider that your "control" approach may not always be the best way of mitigating threats - often an offensive tack may be the best approach.

For example - no quantity of operational controls will mitigate the threat of working with an incompetent software house - when that happens you need to go on the offensive and sack the vendor.

Pete Herzog Danny, I used one machine as an example. My point, though, is that all your services can be "vulnerable" from the traditional patching perspective and you can still be secure through the right controls. However, the ravs work great beyond one system as well.

The ravs scale however you need. We use them for large organizations, even airports. They serve in ways that threat analysis and modeling with people can't. Often, people are so close to a system that they are blinded by what they think they know. The ravs aren't. So they let you see the interactions clearly, especially in places like Human/Machine, Human/Human, and Machine/System, which threat analysts often miss. That's probably the biggest benefit. So instead of thinking about what can happen, you look at what is covered. It keeps people from imagining things or restricting things based on their own biases.

As for your offensive tactic - that's in there. Operational security is a separation of asset and threat, of which elimination of the threat is an option. However, instead of waiting to see whether your software house is incompetent, the ravs would have you assume all software houses are incompetent by default (0-days?) and you would control the environment around the software. Trust analysis would then show whether you can lighten up environmental restrictions as the software house improves, and financial analysis will show whether environmental controls exceed budget restrictions because of said software house. And none of this requires a group of people sitting around imagining threats for days. Accuracy and efficiency are increased. So it's time to drop the requirement of threat analysis, because it's based on human bias and the limited imaginations of a few. The fact that medical devices are still in the news with vulnerabilities should be proof enough.
Danny Lieberman Interesting.

You should definitely check out PTA - Practical threat analysis - at

They have created and implemented a practical approach using a model of assets, threats, vulnerabilities, and countermeasures.

What is of particular interest (and utility, in my opinion) is the ability of PTA to prioritize the control plan by considering control costs and asset values.

To the best of my knowledge, they launched the methodology about 5 years ago and have over 40,000 downloads and thousands of active users world-wide.

I'm sure they'd be interested in you developing a PTA library for your OSSTMM method, and from a biz dev perspective it would give you exposure to the large number of practitioners already using the PTA tool

Regarding your disparaging comment on "people sitting around imagining threats" - any system analysis is only as good as the data collected.

When we do a TA for a client we collect data from as many sources as we can - logs, people, social engineering....

Good luck

Pete Herzog Danny, thanks for the info. I've been aware of PTA since its first tool release. We tried to find a place to work together but time and resource issues have prevented that so far.

It's possible you are not aware of the OSSTMM because of your focus on risk analysis, or because you are in the USA, where we have almost no penetration. The OSSTMM has been around since 2001 and has had nearly 4.5 million downloads since then. We have trained thousands of students in the methodology, many of them in the US by special invitation of the NSA, FBI, and the military. So it's not really small, and many people do find it the more acceptable method for them. The only thing I'm doing differently now is blogging and holding seminars on what I've learned, which is what is new. Previously I kept my comments to myself because I felt it was too early to start talking about it. But now is the time for change, and I hope that by introducing the possibilities to people from all industries, all vectors, all walks of life, they can use our research to improve themselves, the safety of their families, and their jobs. That's why we've been broadening things with other projects like the Bad People Project and the Home Security Methodology.

I didn't mean to be disparaging; however, I have actually seen this be part of "war room" efforts at many companies. I shouldn't have implied that you do that. Sorry. However, the point of the OSSTMM from its inception is thoroughness: to collect the most unbiased data possible from many sources, as efficiently as one can, for the best analysis. Perhaps you'll find that part of the OSSTMM helpful to your efforts.
