The current security model is crazy. And the equally crazy testing methods actually make it look like it's not.
I think that's why so many people fail to see how broken the current consumer-ready security model is. Look at the current attacks and how security companies, even HUGE ones with security measures and countermeasures built on this model, are leaving people hanging.
But I'm jumping too far in already. Let me step back a moment and thinkcast. Who tests operational security better than the hacker? And by hacking, I mean the concept of knowing intimately and deeply how something acts in its environment in order to influence it the way you want to.
(You might like this definition of a hacker, but nowadays most of the world sees "hacker" as a synonym for "Internet criminal". Of course that's wrong, and probably induced and perpetuated by the media, but I think it's a reality we have to take into account. So in the following explanation of what a hacker is and does, perhaps I first need to make explicit that I'm not referring to just any "Internet criminal", although many of them are hackers according to this concept. Sadly, though, an Internet criminal usually doesn't need to intimately understand the operations of its target. For many such Internet criminals it's often sufficient to find a working exploit that was discovered and created by someone else.)
"Hackers" are the ones who interrogate RFCs and dwell on source code looking for even the smallest logic errors or the most abstract combination of timing and happenstance to make the seemingly impossible possible (you can imagine the developers saying, "Yeah, but all those things will never happen at once"). Hackers do this not by being sneakier or more evil. They don't particularly know the security best practices better than the policy writers. They aren't even always crack programmers, or even secure programmers. What they are is more deeply knowledgeable about how a certain thing really interacts and co-depends across a variety of environments. They teach themselves the clockwork of something very complex, like interacting protocols and operating systems, and they keep teaching themselves as these things change. Over time they learn that no matter what changes in OS, protocol, or programming language, all these complex systems have certain elements or conditions that, if designed a certain way or just missing, will lead to a security problem. So hackers learn to look for these things and dig in where they find them.
Now the investigative role in hacking has mostly been taken over by vulnerability researchers today. That's great because it creates a huge body of knowledge to extend the capabilities of professional hackers like penetration testers and ethical hackers. This means better test coverage of a larger scope, including even more products and services. This leaves the professionals free to focus on getting deep into the cogs and springs of operations to find those places to dig in, and then apply the vulnerability research to verify whether the problems really exist.
Except now the application of vulnerabilities has been taken over by scanners. I'm not saying they do it better. I'm saying the market has, sadly, accepted scanners that find and report vulnerabilities in standing infrastructure as a cheaper, good-enough alternative. Find a hole and throw a patch in its face. We've even built gilded industries around it, like vulnerability management and patch management.
So all that's left of the hacking for the professional tester is the big-picture part: analyzing the interactions of the whole operation. They can investigate how things work in their various environments, how they interact, the resources they trust, share, or squander, and how all this combines with the vulnerability reports, the regional and company culture, the chains of trust, and the assets. Except, sadly, nobody usually hires hackers to do this. They hire MBAs and risk managers to look at the vulnerability reports and compare them to an industry baseline and the various required compliance objectives to make threat trees, risk scenarios, and assorted matrices.
(When these MBAs analyze results, they sadly tend to focus on something from the past, and something kind of general. Sometimes the threat they analyze and will try to protect against has no chance anyway in your controlled operational environment, precisely because it's a controlled operational environment. In plain speak, they match a banner to a CVE and apply a CVSS score without thinking about the controls which affect how that threat operates within a controlled environment, simply because they don't have that intimate knowledge of how their own systems operate in their environment. But a hacker does. This difference is what distinguishes hackers from risk managers, leaders from followers, the specific from the general, the future from the past, success from failure, and us from them.)
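To make that contrast concrete, here's a toy sketch of the difference between taking a banner-matched CVE's base score at face value and adjusting it for the compensating controls actually present in the environment. This is illustrative only: the control names and weights are made up for this sketch, and this is not the official CVSS environmental metric formula.

```python
# Illustrative only: a toy severity adjustment, NOT the official CVSS
# environmental formula. Control names and weights are hypothetical.

def adjusted_severity(base_score: float, controls: list[str]) -> float:
    """Scale a base CVSS-style score down for each compensating
    operational control that limits how the threat can operate."""
    # Hypothetical weights: each present control removes part of the exposure.
    weights = {
        "network_segmentation": 0.30,  # threat can't reach the service
        "egress_filtering": 0.20,      # payload can't call home
        "process_monitoring": 0.15,    # exploitation would be noticed
    }
    remaining = 1.0
    for control in controls:
        remaining *= 1.0 - weights.get(control, 0.0)
    return round(base_score * remaining, 1)

# A banner-matched CVE with base score 9.3 looks critical on paper...
print(adjusted_severity(9.3, []))  # 9.3
# ...but far less urgent once the controlled environment is considered.
print(adjusted_severity(9.3, ["network_segmentation", "egress_filtering"]))  # 5.2
```

The point isn't the arithmetic; it's that the second number only exists if someone actually knows which controls are operating and what they do, which is exactly the intimate operational knowledge the hacker has and the report-reader doesn't.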
So who verifies security operations? Who tells you the big picture of what you have, how it interacts, and what it needs, based on how it works and how it should work? Not the penetration tester. Not the ethical hacker. Not anymore. Sadly, they've been marginalized to running scanners and weeding out false positives and negatives. They have had their scopes restricted to testing only specific infrastructure components, only in certain ways, as required by compliance objectives. They are used to shock, scare, or leverage upper management or the Board into a bigger security budget to pay for more vendor licenses. They have been made the spokesmodels for looking hardcore while proving a negative to protect corporate interests. And that's really the one trick expected of them. It's the only one anyone's really buying. Yes, in the modern, commercial, corporate security world, professional hackers have been reduced to a one-trick pony.
The professional hackers who do what they do best, in ways that are absolutely critical to organizations, have been marginalized into near extinction, with small pockets surviving in niche work like crime and espionage. The professional hacker somebody could have hired would have told them how to balance their trust in their vendors with specific operational controls, and to beware of contracts that use phrases like "best efforts" and "timely notification".
So I wonder how many pen testers with clients using RSA authentication caught this problem in advance, and how many advised their clients of the compensating controls missing from that trust? How many calculated and quantified it to show there was a serious imbalance? How many pen testers were actively testing for cross-channel trusts?
We wrote the OSSTMM 3 to address these things. We knew that penetration testing, marginalized the way it has been, would eventually hurt security. Yes, the OSSTMM isn't practical for some because it doesn't match the commercial industry security of today. But that's because the security model today is crazy! And you don't test crazy with tests designed to prove crazy. So any penetration testing standard, baseline, framework, or methodology that focuses on finding and exploiting vulnerabilities is only perpetuating the one-trick-pony problem. Furthermore, it perpetuates security through patchity, a process so labor-intensive to keep in homeostasis that nobody could maintain it indefinitely, which is the exact definition of losing the cat-and-mouse game. So you can be sure it also doesn't scale at all with complexity or size.
You see we at ISECOM knew that many penetration testers were still those same hackers who could work with operations and bring real security value. They were also the people in the best position to bring change to the security industry because they could consistently poke holes in the vendor-driven security model of authentication and encryption and still build better operational security despite it.
So we realized that if we could show penetration testers there's more to this birthday party than the one trick expected of them, they'd get it. So we did the research. We got great minds together and continuously ran real-world tests. We made the OSSTMM 3 because we don't want penetration testers and ethical hackers to be just the face of compliant corporate security. We want them to really be hackers again. We need operational security done right, for the sake of security in this interconnected world where the butterfly effect is much more than a theoretical description of how we are all required to keep some of our eggs in someone else's basket. We NEED them to be hackers again.
We NEED them to be the authority for how operations and security work together, from the CPUs to the personnel. We NEED them to add a scientific method to their testing to assure the validity of their efforts. We NEED them to be able to categorize and quantify their results to provide an unbiased foundation that risk managers can use as real data now, rather than relying so much on historical data or hypothetical baseline data. We NEED them to quantify specifically how much trust is not enough and exactly which security is too much, so they can create and manage controls based on valid trust scenarios. We NEED them to hack through all this crazy security BULLfrak that too many security people are pretending not to notice. Because if we can't make penetration testers into hackers again, we'll see a lot more companies and governments getting their asses handed to them.
How to do this is in the OSSTMM 3. It breaks down all those certain elements and certain conditions which hackers learn to look for that point to flaws. It tells you how to see when something designed a certain way or missing certain things will leave a door open to a specific type of threat. So if you haven't read it, do so. If you have read it and didn't "get it" then take another look and think about it in the terms outlined here. It's designed to make you a better hacker. It's designed to remove you from the crazy and help you get a whole new sense of security.