Logical Fallacies and the SCADA Security Problem

Friday, October 14, 2011

Craig S Wright


The arguments around extreme events are interesting, and one has to wonder about the motivations behind them. The argument that a “STUXPOCALYPSE” will not occur, and hence that we need not worry about the security of critical systems, is astounding.

Straw man

This first logical flaw is an argument based on a misrepresentation of an opponent's position.

The argument is not one of apocalypse, it is one of widespread damage. Here, 100 deaths and a few million dollars is considered widespread damage. This is quantitatively different from the end of the world.

The point is not whether an attack against SCADA will result in the end of civilisation as we know it. Even World War II, with all the damage it caused, did not manage that, and a SCADA cascade will not do anything so dire.

So one has to wonder what the motivations and imperatives are for those who attempt to downplay the security concerns surrounding unsecured control systems.

Using rhetorical tricks in order to mask the concerns around the security of control systems and to downplay the nature of these threats smells a little fishy.

False dichotomy or the fallacy of bifurcation

This brings us to the next flaw in the arguments: the supposition that two alternative statements are the only feasible options. This, of course, is not the truth, and examples abound. There are many more possibilities than those who seem to want to hide the security flaws in SCADA systems will allow.

The attack and compromise of a PLC using fine control is presented as the only issue. Further, the only attack vector promoted is that of sites easily found through a Google dork search.

First, those sites that are both online and discoverable using a simple search engine query are a small minority. For each site that is poorly configured enough to have been indexed through a search engine, many hundreds exist online that have not been indexed.

In fact, none of the systems I have written about in recent weeks is accessible through a simple web search. That does not mean they are not accessible through the Internet.

NAT and similarly simple technologies leave these systems obscured but online. What we see here is reliance on obscurity, and obscurity is a poor security control. It may help alleviate simple scanning-worm attacks to some extent, but the reality is that it only helps to some extent.

Most attacks against Internet-connected hosts are not targeted at the sites you can find on a Google dork list. More and more, they target internal systems, with a compromised client system leveraged to attack them.

An external attacker with a Flash-based exploit, a re-pinning attack against the client's JRE, or for that matter any number of malware and crimeware based exploits can bypass simple firewall and NAT controls. ATM networks associated with St George that were supposedly offline were impacted through a worm infestation. Rail Services Australia managed to have a scanning worm inside its secure network a few years ago, and just recently we have seen the US Army's drone network compromised by a password-sniffing trojan.

Just being behind a firewall or a NAT device does not make you offline. It does stop some of the simple Google dork searches, but these have only ever been the tip of the proverbial iceberg.
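The mechanism is worth spelling out: NAT and stateful firewalls block unsolicited inbound connections, but they permit outbound ones, and an outbound TCP connection is fully bidirectional once established. A compromised client inside the perimeter simply connects out, and the attacker issues commands over that link. The sketch below simulates this on localhost; the "attacker" listener and "implant" client are hypothetical stand-ins for illustration only.

```python
import socket
import threading

def attacker_listener(server_sock, results):
    """Outside the NAT: wait for the implant to phone home, then issue a command."""
    conn, _ = server_sock.accept()
    conn.sendall(b"READ coil 20")      # commands flow IN over the outbound link
    results.append(conn.recv(1024))    # ... and results flow back out
    conn.close()

# "Attacker" server on localhost; in reality this sits on the public Internet.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
results = []
t = threading.Thread(target=attacker_listener, args=(server, results))
t.start()

# "Implant" behind NAT: the single OUTBOUND connection is all the firewall sees.
implant = socket.socket()
implant.connect(server.getsockname())
command = implant.recv(1024)
implant.sendall(b"coil 20 = ON")       # response to the attacker's command
implant.close()
t.join()
server.close()

print(command, results[0])
```

Nothing here requires the protected host to be indexed by any search engine, which is why "you cannot Google it" says almost nothing about exposure.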

Argumentum ad ignorantiam

Next, we have the oft-cited claim that is assumed to be true (or false) simply because it has not been proven false (or true). In some cases, these are claims that cannot be proven false (or, in the converse, true).

I face this one in court from time to time as well, where it is taken to the extreme. In one instance, the barrister for the party opposing the one I was acting for as an expert witness could not attack the results I had obtained (the opposing expert having stated the same in a published paper that he neglected to mention in court), so he attacked my beliefs instead. I have a degree in theology (as well as in law, various sciences, mathematics, management and more) and I am a trustee and, from time to time, a lay pastor.

I was told in court that I cannot be a good scientist as I believe in imaginary beings (I believe in God). Basically, we have here an argumentum ad ignorantiam: an argument that can be neither proven nor disproven through science. That does not stop it from being deployed as an argument.

At the same time, we see this time and time again in calls to leave things as they are, to let sleeping dogs lie and to remain with obscurity, our heads in the sand, safe in the knowledge that what we cannot see (or foresee) will not hurt us.

But for SCADA systems, we have “I do not see how, therefore it cannot be”. In this, we look at the effects of attacking PLCs and the differences in these systems and simply forget that most of them are controlled from Windows-based systems, with LynxOS, Windows CE and others acting as agents.

Again, we assume this needs to be a nation-state effort such as Stuxnet and forget that Stuxnet was a system designed for fine control and not simply chaos. Chaos is far easier to achieve than fine control. It takes a lot of effort, skilled people and technical knowledge to create a system that can be automated and left to run remotely.

Breaking a system… that is far simpler.

Red herring

One of my old favourites, so often used, is the attempt to distract one's readership (or listeners, if live) by going off topic. Here, a separate argument that the author believes will be simpler to address is introduced as a means of running from the topic at hand.

There is a qualitative difference between cyber-terror and kinetic terror events.

Yet we see responses such as “For that matter, one could just get some C-4 and get a job at the facility long enough to plant a bomb”. Well, yes, we could, and having completed a degree in organic chemistry specialising in fuel sciences (over a decade ago now), I also know just how likely you are to remove several fingers in the attempt to make it.

Yes, it is possible (although not as simple as the movies would make out) to obtain C4, Semtex and other forms of explosives containing RDX (cyclotrimethylene trinitramine) and PETN. But nothing is said of how these are peppered with 2,3-dimethyl-2,3-dinitrobutane (DMDNB), both to trace the source and to act as a detective control.

Unfortunately for Bruce Willis, it is not actually as simple as it seems to sneak large quantities of C4 into Federal buildings unannounced anymore.

Fertiliser-based explosives are easier, but even then you can expect to be investigated from time to time, and there is a level of risk with any kinetic engagement these days. This is why, for all the people out there wanting to blow things up in the US, it remains a rare event. It is not easy, and not all terrorists want to blow themselves up in order to achieve an objective.

This is why cyber-terror is qualitatively different.

You can access an online system from anywhere in the world. The independent hackers (cough, FSB sponsored) in Russia who attacked Estonia and Georgia never suffered any repercussions. In fact, organising a large-scale kinetic attack is not as simple as people think; it requires a high degree of co-ordination and effort.

On the other hand, hackers have managed to obtain access to critical systems by accident. And here we are not even considering the efforts of a former and disgruntled employee in attacking a water treatment plant (who, of course, also got caught, as he was stupid).

Then, even the Large Hadron Collider and US Drone control stations have been compromised without any real repercussions for the lead perpetrators.

That is what is really different here. To blow up a facility, you need to spend a lot of time, effort and money learning systems, building reputation and more, and you most likely have only one attempt (one which, as history shows us, fails more often than it succeeds, even if we remember the successes and forget the failed attempts).

To engage in a cyber-terror exercise on a vulnerable system requires skills that also allow an attacker to engage in cybercrime and hence fund activities (and lifestyle) whilst remaining relatively anonymous. More, you can be seated comfortably anywhere in the world and, as one detractor showed, you can even simply do a Google dork search for these systems and choose what you feel like opposing AFTER you have selected a target to attack.

Ad hominem

Staying with the red herring, we have a very special form of it: the ad hominem attack, where we attack the person so as to avoid facing the actual argument.

Here, we see comments such as “Please go back to writing entry level forensics books”. Not that writing guides for people starting in a field should be seen as a negative, and it ignores the fact that doing so does not preclude high-end academic research. But acknowledging that would not suit the argument, nor would it allow the attack to seem as belittling.

This also comes in the form of an appeal to ridicule, where statements such as “For the apocalypse of stupid that will be happening thanks to the likes of CNN and the book of Langer and Wright.” are used as an argument and the attempt is made to present one's opponent's argument as ridiculous. It is not actually a valid argument; it is just a form of petty attack.

I guess this brings us back to the straw man that has been supposed. In arguing widespread damage, it seems that this must either be a Revelation-level event or nothing we should be concerned with. I wonder whatever happened to the middle ground?

Appeal to motive is next. Here the premise is dismissed by calling into question the motives of its proposer; the basis is to say that this is all about money or similar. There are a number of flaws with this argument, not least of which is that I donate most of my SCADA time, and in creating more work in this area I simply make life more difficult for myself. Basically, I do this as it helps the people I care for. Then again, motive was never a valid argument in any event.

I am still awaiting many of the other Ad hominem attacks such as:

  • Poisoning the well: Here adverse information is stated in order to discredit one's opponent. It can be true or not; it does not, of course, relate to the argument at hand. I stated one example above: saying I believe in God (as a bad thing) as an example of why I cannot engage in scientific discourse (I also believe in evolution).
  • Appeal to spite. This is a specific type of appeal to emotion. In this fallacy, the argument is made based on an exploitation of the listener’s (reader’s) bitterness or spite towards the other (opposing) party and/or that party’s beliefs, position etc.

Argumentum ad nauseam

This is an argument such as “We have discussed the security issues around SCADA for years, and nobody cares to discuss it anymore”.

Well, SOME people do not want to discuss this anymore. Then again, nothing is making them do so. In fact, by actually engaging in the argument, they disprove it through their own actions.

Onus probandi

This is the logical fallacy based on the premise that the other party need not prove their claim and that we must prove it false. Not as a hypothesis or any such thing, but as a matter of fact.

They cannot of course and hence we see this again and again.

Argumentum ad antiquitam

Here again we come to a conclusion whose sole support lies in history. That is, it must be true as it has long been held to be true.

The argument goes along the lines of: we have not seen many SCADA attacks, thus there cannot be any SCADA attacks.

Well, the fact that we have not seen an event does not make it improbable. In fact, the class of events in the 1990s was distinct from those in this decade. We are more connected, and more systems are vulnerable.

Fallacy of the heap

How about we improperly reject the claim that SCADA systems are at risk simply due to imprecision? That is, because we cannot state which systems will be attacked, and we cannot state exactly when this may occur, it is claimed that it can never occur.

Ummm… It seems that there is a consistent flaw in all this.

I can add many more fallacies…

Ignoratio elenchi

This is the constant use of irrelevant conclusions that miss the point. In some cases, the argument is valid in itself. However, it does not actually address the issue in question. SCADA systems are running insecurely and the compromise of these systems can lead to a loss of life.

One such example would be the compromise of rail signalling systems. This could lead to a peak-hour collision of two oncoming commuter trains.

  • Is this the end of society as we know it? No.
  • Is this a tragedy? My God yes!

That is the point. Extending the loss of life to an argument where it is only valid if the entirety of society collapses is ludicrous at best.

Kettle logic

Here we see the use of multiple inconsistent arguments to defend a position.

EMPs, Man-Made & Solar… Now There’s Your Apocalypse

Well… How about FUD?

Let us ignore the fact that making any real device that has a large scale effect is both difficult and expensive (and range limited) and jump to something that is truly FUD.

Economics 101

We have systems that are not difficult to secure. We say they are, but the reality is that people are the impediment, not the technology. In some cases, securing these systems will create a positive ROI from day 1.

More, we have a situation where small investments can avert large losses.

The argument is not that civilisation will end, but that small incremental improvements, some of which do not actually cost money or even time, can make us much safer.
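The "small investment versus large loss" point can be made concrete with the standard annualised loss expectancy arithmetic used in risk management. All the figures below are purely hypothetical, chosen only to illustrate the calculation, not drawn from any real operator.

```python
# Hypothetical example: Return on Security Investment (ROSI) for hardening
# a control system. Every figure here is an illustrative assumption.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualised Loss Expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

sle = 2_000_000          # assumed cost of one compromise (clean-up, downtime, fines)
aro_before = 0.10        # assumed: one incident expected every ten years
aro_after = 0.01         # assumed residual rate after basic hardening
control_cost = 50_000    # assumed annual cost of the security controls

risk_reduction = ale(sle, aro_before) - ale(sle, aro_after)
rosi = (risk_reduction - control_cost) / control_cost

print(f"ALE before: ${ale(sle, aro_before):,.0f}")   # $200,000
print(f"ALE after:  ${ale(sle, aro_after):,.0f}")    # $20,000
print(f"ROSI:       {rosi:.0%}")                     # 260%
```

A positive ROSI like this is what the "positive ROI from day 1" claim amounts to. The argument stands or falls on the assumed loss and occurrence figures, not on the arithmetic.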

Economics is all about incentives. It is about creating systems where people and groups do the right thing. Right now, we are creating externalities and not allowing those who have failed systems to be responsible for their failures.

The reason for this is that it costs money to implement a secure online system. If you can get away with not securing a system AND not have to face the consequences of a failure (a when, not an if), you have an economic advantage over another party who secures to a level that any reasonable group would expect.

I for one have to wonder at the vitriol that some individuals hold for society if they can treat the loss of life and property as inconsequential simply because it has not resulted in the complete collapse of society.

Incentives

Right now we incentivise poor security practices. Those firms and organisations involved with SCADA systems who actually care to secure their systems are penalised. When we create negative incentives, bailing SCADA operators out of the trouble they have caused by running insecure systems, yet fail to offer any positive incentives to those groups who actually act in a manner consistent with giving a damn, we create less secure systems.

So, SCADA systems are online. We seem to have agreement that you can even find these (and this is the tip of the iceberg again) with a simple search. These are systems that have large-scale effects.

Yes, it may be true that damaging a nuclear reactor in a manner that results in a meltdown is beyond anything less than a nation state, but so what?

Loss of power to a city for a few days will result in lost lives (and I happen to care about the extremely young, the old, the infirm and others who seem to be overlooked in the opposing argument).

Again, WHY are some people trying to defend poor practice and NOT take SCADA operators who are ILLEGALLY running systems online to task?

Why do some people want to continue to incentivise poor security?

Where does this leave us?

World War II was a global and catastrophic event, but the earth still stands. So, do I think the Earth and civilisation will come to an end due to SCADA flaws (or FUD such as EMP/HEMP devices)?

No!

What is at stake is the loss of life and property that will result from compromised SCADA systems. Not just PLCs, as the opponents of this position like to presuppose, but Windows XP and other systems that act as controllers. A trojan on a Windows host allows an attacker to control the PLC without actually writing specialised malware such as Stuxnet.
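The reason a trojan on a Windows engineering workstation suffices is that the common control protocols carry no authentication at all. As a minimal sketch (the transaction id, unit id and coil number below are hypothetical placeholders), this is the entire Modbus/TCP frame needed to flip an output coil; any process that can reach TCP port 502 on the controller can send it:

```python
import struct

def modbus_write_coil_frame(transaction_id, unit_id, coil_address, on):
    """Build a Modbus/TCP 'Write Single Coil' (function code 0x05) request.

    MBAP header: transaction id, protocol id (0), remaining length, unit id.
    PDU: function code, coil address, 0xFF00 (on) or 0x0000 (off).
    Note there is no authentication field anywhere in the protocol.
    """
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_address, value)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_coil_frame(transaction_id=1, unit_id=1, coil_address=20, on=True)
print(frame.hex())  # twelve bytes in total
```

Twelve unauthenticated bytes over a plain TCP socket is all a reachable Modbus device requires; the "specialised malware" bar is far lower than the Stuxnet comparison suggests.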

You think this does not occur? Well, there you are wrong. The dumping of sewage in Queensland (here in AU) cost millions to clean up; it cost businesses revenue, it cost jobs, and it also meant that many people in the area were unable to enjoy their properties in safety.

Well, I am the Australian in this “debate” so I am wondering why it is the other side who is making the “don’t worry she’ll be right mate” assertions?

About the Author:

Craig Wright is the VP of GICSR in Australia. He holds the GSE, GSE-Malware and GSE-Compliance certifications from GIAC. He is a perpetual student with numerous postgraduate degrees, including an LLM specialising in international commercial law and e-commerce law and a Masters degree in mathematical statistics from Newcastle, and is working on his 4th IT-focused Masters degree (Masters in System Development) from Charles Sturt University, where he lectures subjects in a Masters degree in digital forensics. He is writing his second doctorate, a PhD on the quantification of information system risk, at CSU.

Cross-posted from GSE Compliance

Gabriel Bassett May I suggest we've passed a fundamental point in SCADA (or, to generalize, industrial control system) security? The point at which we understand it. We understand how it works. We understand that if (when) exploited, it won't end the world but it could have significant consequences. We understand that, while the systems are very vulnerable, securing them is pretty straightforward.

All that's left is to start doing the actual work, which, as you point out, will primarily be based on the business/economic decisions of the organizations involved. As security professionals, our job is now to step past risk assessment (i.e. likelihood and impact) and provide those we serve with security solutions for their industrial control systems that they can live with.
Craig S Wright @Gabriel. Exactly.
Kenneth G. Hartman As an automation engineer turned security guy, I see the increased focus on the security of SCADA and industrial control systems to be a good thing. Controls Engineers love to learn best practices and we do understand risk management. After all, we did invent the hard-wired E-stop circuit. So please keep the automation security awareness topics coming and I know that if you do, you will gain an additional audience segment here on InfoSec Island.