Conferences are a great place to have conversations over dinner or just standing around in a hallway between tracks. This is a topic I'm a firm believer in, and most recently I heard my friend Chris Nickerson of Lares talk about it in much the same way.
In a nutshell: if you (in information security) haven't broken things in your organization, you're likely terribly unprepared for when things go wrong, and thus you're doing it wrong. Now, before you come all unhinged, read the rest of this post.
- How certain are you of your backups, assuming you keep backups?
- Have you ever tried to restore a system after a catastrophic failure only to realize your restore options were unreliable, corrupt, or otherwise compromised?
- Have you ever caused a catastrophic failure on purpose?
What in the world does this have to do with Information Security, you ask? Everything.
As we discussed over dinner with our small group, the vast majority of organizations that have what they think are fantastic information security programs are terribly under-prepared for when things go really badly. The core of the issue comes down to knowing that the steps you've taken to prepare your organization for response and continuity actually work.
Having any reasonable assurance of things actually working requires you to test. Unfortunately, in order for many organizations to test their security, really test their security, things would have to go terribly wrong. The result is simple, and predictable - they only test the things they're comfortable with.
You see organizations do this all the time! You'll see a company write a scope of work to specifically exclude weak systems during a penetration test. They do this because they already acknowledge that those systems will fall prey to easy attack and there is no reason to 'bring them down since they're already known weak'... Interesting thinking.
I've actually witnessed this first-hand; in fact, about 7 years ago I almost got fired from a Fortune 100 company for performing an active test against a system we knew had severe security deficiencies.
In the case I described above, an organization knows there are weak systems out there and simply chooses to ignore them and take those systems, applications, or networks out of scope from necessary security testing. The issue may go as far as not patching, not performing any type of automated scanning, or reporting on segments or systems that are 'known weak' because that may further expose the issue.
In fact, weak systems exist everywhere, and ignoring them only makes the issue worse. We all need to acknowledge that we live with things we can likely do little about, but which weaken our overall security posture.
Having those critically weak systems show up consistently on a post-testing report as easily breakable will underscore not only their value to the organization but also their value as an attacker's asset - which may ultimately lead to their security posture being remediated or at least the installation of some compensating or mitigating controls.
How effective is your IPS system? What about that expensive DLP system you put in? What about that SIEM? You're never going to know until you run drills and exercises that test those systems and stress their utility.
I'm not telling you to unleash some nasty worm on your network to see if your defenses are up to the task - but I am highly suggesting that you do that in a controlled environment that closely simulates your real network because then you at least have a high degree of probability that the observable outcome is what will happen on your real network. If the HIPS and other defenses do their job and quell the onslaught... fantastic. But how are you going to know until you actually try this?
How's that incident response policy? Collecting dust since it was written a few years ago? Maybe you've updated it and feel rather proud that it's now up-to-date with this year's standards and methods. While this is fantastic... until you test that incident response policy you have absolutely no idea of its value. Netflix are geniuses when it comes to this. If you've never heard of the "chaos monkey" you need to do some light reading.
They've actually implemented a series of scripts that randomly destroy processes, machines, and systems. That's right, they have a piece of software (or rather, many of them) on their network that tests the resiliency of their network every minute of every hour of every day. It's no small feat that you can still stream your movies while that chaos monkey is on the job!
By the way, this suggestion should scare you a lot. I know if I'm the CISO and I am told that the only way to effectively test my incident response policy in case of a catastrophic intrusion is by actually paying someone to create a severe intrusion in the middle of the busiest day we have and then sending my response team into action (no, not in a 'lab')... I'm nervous as he**. If that policy fails, we as a team fail. If that policy succeeds, we survive. Here's the catch - either way you learn something.
If you're savvy enough to convince your senior management that you need to test your incident response policy and other security defenses you're a grade-A CISO in my book. Now, what I'm not talking about here is just run-of-the-mill penetration testing.
That's generally heavily scoped, run off-hours, and tightly controlled to keep from causing an issue... which virtually guarantees the organization being tested gets little value - unless this is the first time you've ever been tested in this manner. Still, I think it's foolish and a waste of time.
As I'm suggesting, in order to be effective as a security organization you need to break things. You need to break things not just because they're in scope but because they're there. You need to break things that no one's prepared to fix ... because only then will there be a wake-up call to fix them.
Don't forget to do your due diligence and "CYA" to make sure you're not going to get fired when you do this... but carry the case for validating your protective strategies, technologies, and processes into the boardroom. Your organization expects that your IT department will run "DR drills" for cases where data centers fail... why not run IR drills for when security incidents happen?
We already know security incidents will happen... how sure are you that the policies, procedures and technologies you've put in place will hold?
Cross-posted from Following the White Rabbit