Cyber Warfare Decomposition... Fail

Wednesday, March 23, 2011

J. Oquendo


Hopefully you have read the introduction to the “Art of Cyberwarfare” series, so you understand where this is coming from and where it is headed.

With that out of the way, let me dive into “Cyber Warfare Decomposition... Fail.” The Intelligence Analysis book [1] defines decomposition as “breaking a thought or activity into basic elements to discern meaning or facilitate a more complete understanding.”

Decomposition of a cyberattack is a bloated, misunderstood and fantasy-filled science, usually with one outcome: wasted resources. To understand where I am coming from, I will write this rambling from a hacker's perspective. From an attacker's point of view, I can be whomever I choose to be, whenever I choose to attack you.

In a conventional war, an opponent is usually going to be visible at some point in time: visible to the extent that intelligence analysts will know who he or she is. The analyst will know an attacker's weaponry, locations, capabilities and so on.

Someone will have poured a lot of resources into digging up that information. Whether via HUMINT, SIGINT, COMINT, IMINT, ELINT, MASINT, ACINT or whatever other INT I missed, there is some form of tangible, visible data on the opponent.

In the realm of the Internet (the cyber realm), you will fail miserably if you think you can pinpoint an opponent via an IP address, or even a collection of addresses, a signature, a comment in an application and so forth.

To prove the first point of that statement, I recall a discussion years ago on the North American Network Operators Group (NANOG) mailing list. The thread discussed how the European Union "made IP personal."

As reported in the news: “IP addresses, a string of numbers that identifies a computer, should generally be regarded as personal information, the head of the European Union's group of data privacy regulators said Monday.” [2] This started a long discussion about what an absurd statement, theory, thought, notion, etc., the European Union had just blurted out.

From the NANOG thread:

“Well, let me ask you who you think 171.70.120.60 is. I'll give you a hint; at this instant, there are 72 of us.

Here's another question. Whom would you suspect 171.71.241.89 is? At this point in time, I am in Barcelona; if I were home, that would be my address as you would see it, but my address as I would see it would be in 10.32.244.216/29. There might be several hundred people you would see using 171.71.241.89;” [3]

Imagine for a moment that I compromised a machine on the subnet mentioned in that thread. Whom would you (after being attacked from that address) investigate or retaliate against if you were in a cyberwar and had to launch an offensive? This inability to pinpoint an attacker is, and will continue to be, the problem: attribution. Whom do you place the blame on?
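
To see how ordinary the setup in that thread is, consider a minimal NAT sketch. Assuming a Linux gateway, and with the subnet and interface name below chosen purely for illustration, a single rule is all it takes for hundreds of internal hosts to show up to the outside world as one address:

# Hypothetical masquerade rule (subnet and interface are assumptions, not from the thread).
# Every host in 10.32.244.0/24 leaves this gateway wearing the same public IP address,
# which is exactly why an IP address does not identify a person.
iptables -t nat -A POSTROUTING -s 10.32.244.0/24 -o eth0 -j MASQUERADE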

Furthermore, at the deception level, it makes all the more sense for me as an attacker to use the core functions of the Internet (IP) as a means of hiding. “Catch me if you can.” With millions of vulnerable machines worldwide, an attacker can launch an attack from anywhere with almost no attribution. This makes most analysis pretty much useless: wasted resources.

When security professionals disclose information about "command and control" botnets, they almost always have information regarding a) what the botnet does, b) what it is targeting, c) how it is affecting an infrastructure, and d) where data goes and where it comes from. Sometimes an analyst or team of analysts will get a through d, or just enough to help them understand what the SOFTWARE was created to do, not what the intentions of the author were.

Perhaps the analyst discovered a rogue program in the wild and reverse engineered the bits of software to figure out what was being done; maybe someone simply stumbled upon the software. Either way, at the end of the day, what has the analyst actually found?

An analyst needs to remember that proper programming of malicious code leaves an analyst with one of two results: false flags or nothing at all. Remember, I wrote “proper programming,” meaning that I took the time to make myself invisible. I took the time to zero out any identifiable information. Or perhaps I picked an ally of yours and inserted false flags that point directly to that ally. What then?
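
What “zeroing out identifiable information” can look like in practice is mundane build hygiene. A hedged sketch, with hypothetical file names, of the kind of pass an author might make before anything ever reaches an analyst:

# Hypothetical pre-release scrub; file names are illustrative only.
gcc -s -o agent agent.c      # -s links the binary without a symbol table
strip --strip-all agent      # remove any remaining symbols and debug information
strings agent | less         # eyeball what is left for usernames, paths or locale hints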

When you think about the reality of cyber analysis and the issues surrounding the data in that analysis, even from the reverse engineering perspective, the answer speaks for itself: wasted resources.

Think about this for a moment: with “so many analysts working on reversing this botnet,” one would believe that by now at least “one” developer of a command and control would have been perp walked, their identity disclosed, game over. Have you ever wondered why this is not the case?

If you have, then you likely answered it much as I did in my previous article, “Cyberwarfare Analysis - You're Doing It Wrong”: the attackers are anonymous. Fail - you're doing it wrong. NOTE: When I use the term anonymous in my writings, I am not talking about the “hacktivist” group.

From the attacker's perspective, I say to the analyst, and I repeat: “Catch me if you can.” The likelihood of an analyst coming close to identifying me is about the same as the likelihood that I will win the lottery in multiple states simultaneously. This is, of course, based on my having a direct plan of attack and an exit plan.

Yet, to be quite honest, an exit plan is not even necessary thanks to the millions around the world who, in their rush to remain online 24/7, make finding a random connection for me to abuse child's play.

Connectivity (hacker's definition): an abundant resource, especially in cities, from which one can fire off anonymous attacks against any target of choice without worry or repercussion. Businesses and individuals are quick to throw up all forms of wireless connections, Internet cafes are abundant, and people are outright mindless when it comes to connectivity. You can't fix stupidity, and it will always be yet another anonymous enemy attacking you. Fact is, stupidity might be your second biggest enemy.

As an attacker, I could, say, head to Bryant Park in NYC, change my MAC address [4] and begin karmetasploiting [5] anyone in the vicinity. I could steal credentials from someone near me and begin hopping in and out of other networks, launching attacks with the credentials I stole.
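
Changing the MAC address is a one-liner with the macchanger tool referenced above [4]. A minimal sketch, assuming a Linux laptop whose wireless interface happens to be named wlan0:

# Hedged sketch: randomize the hardware address before associating with anything.
# The interface name wlan0 is an assumption; adjust for the actual hardware.
ifconfig wlan0 down
macchanger -r wlan0      # -r assigns a completely random MAC address
ifconfig wlan0 up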

Perhaps I would log directly into the computer of the victim whose credentials I stole and launch the attacks from there. When all is said and done and my attack is over, I simply shut down my laptop, pop the copy of BackTrack [6] out of my DVD drive, snap it into pieces and dump it piecemeal across the city. So much for evidence.

The fix for these types of problems (rampant and redundant connectivity) is not an easy one; in fact, there likely is no fix for the foreseeable future. Besides, you can never fix a social problem with technology. How do you fix stupid?

Most wireless networking equipment can be configured with stronger encryption and authentication mechanisms (WPA2, TKIP, etc.); however, the likelihood of getting someone to fix their network is low. One can state, “We can pass laws to force them to fix this,” but how would you enforce them?
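
The configuration itself is not the hard part. For anyone running a Linux-based access point with hostapd, a minimal sketch of the relevant hostapd.conf stanza might look like the following; the passphrase is obviously a placeholder, and the rest simply enforces WPA2-PSK with CCMP:

# Hedged hostapd.conf snippet: enforce WPA2-PSK with CCMP (AES).
# The passphrase is a placeholder; use a long, random one.
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=replace-with-a-long-random-passphrase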

Perhaps create the Department of Open Wireless Network Enforcement? “DOWNE” is actually a cool-looking acronym, but I do not want my tax dollars spent funding it. I go back and state: “You can't fix stupid.”

From the analytical perspective now, what information have I gathered from an attack? I see an IP, I see an attack. I have nothing more than that. I may infer some form of motive behind the attack. For example: as an engineer analyzing network traffic in a financial institution, I see an IP address performing Cross Site Scripting attack queries against the institution's servers.

There is a high likelihood that the attacker is after something specific, with the underlying goal usually leading to money. But any percentage attributed to that statement would be outright false.

It would not merit any objective number from any scientific study: “the percent of attackers...” Fail. It would be a waste of time. How realistic would a “percent” statement be at the end of the day? What concrete evidence do you have?

Let us take an alternative point of view now, from the systems slash network administration and engineering perspective. How many of you reading this have ever fat-fingered an address? If you haven't, then you likely have not been an engineer or an admin very long.

Imagine that as an engineer you have to diagnose an address of 10.10.10.5 on your network. You quickly type the address into a command-line utility, as this is an e-commerce server that needed to be online yesterday:

-bash2-2.05b$ ping 10.10.1.5
PING 10.10.1.5 (10.10.1.5): 56 data bytes
^C
--- 10.10.1.5 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss


That is definitely not a typo from the writing perspective, but it is a typo from the engineering perspective. At no point in time did I mean to type 10.10.1.5; however, I did type just that. Was I attacking this machine? Absolutely not. Now imagine if I had tried to SSH into this machine:

# ssh 10.10.1.5
The authenticity of host '10.10.1.5 (10.10.1.5)' can't be established.
RSA key fingerprint is 14:be:bc:ca:ed:1b:64:3d:86:ba:4e:61:44:cd:d2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.1.5' (RSA) to the list of known hosts.

innocentmistake@10.10.1.5's password:
Permission denied, please try again.

innocentmistake@10.10.1.5's password:
Permission denied, please try again.

innocentmistake@10.10.1.5's password:
Permission denied (publickey,gssapi-with-mic,password).


After the denials, I realize the mistake and correct it. From the endpoint perspective, the admins at the address I mistyped have no inkling that what occurred was human error and not an attack against their systems. Those admins, if on a Linux machine, will see:

-bash-3.00# tail -n 4 /var/log/secure
Mar 10 09:15:32 bankingmachine sshd[23089]: Failed password for innocentmistake from ::ffff:10.10.10.13 port 58374 ssh2
Mar 10 09:15:35 bankingmachine sshd[23097]: Failed password for innocentmistake from ::ffff:10.10.10.13 port 58559 ssh2
Mar 10 09:15:38 bankingmachine sshd[23105]: Failed password for innocentmistake from ::ffff:10.10.10.13 port 58740 ssh2
Mar 10 09:15:41 bankingmachine sshd[23110]: Failed password for innocentmistake from ::ffff:10.10.10.13 port 58919 ssh2

Quite easy for alarms to ring, wouldn't you think? However, any investigation into this occurrence would be a waste of time. The solution would be for the bank to block this and everything else from reaching SSH, and solely allow what needs to be allowed in. The kicker to this bit of common sense is that many networks keep doing it wrong.
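
For illustration only, a sketch of what “solely allow what needs to be allowed in” could look like on that Linux machine; the management subnet 10.10.10.0/28 is an assumption, not something taken from the logs above:

# Hedged sketch: permit SSH only from an assumed management subnet, drop everything else.
iptables -A INPUT -p tcp --dport 22 -s 10.10.10.0/28 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP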

With so much documentation on sites like NIST, NSA, SANS and countless other security websites, it is amazing to see how organizations keep failing. Administrators and engineers can and should take an altogether different approach to security.

We as engineers and administrators may never be able to stop random attackers from knocking on our door; we can, however, stop answering the door. This is another failure that runs from the security management level right on down to the engineering level.

Imagine the following topology:

Company A
Network 10.20.30.0/24
E-Commerce Server 10.20.30.5
E-mail Server 10.20.30.6

A simple and small network with one E-Commerce Server and an E-Mail Server. What is the purpose of the E-Commerce Server? To serve traffic to anyone looking at the webserver, perhaps someone who wants to check their account or sign up for a new one. This is all the machine was configured to do.

Should someone from the outside world be connecting to, say, SSH? No. Can we stop them from trying to connect to SSH? No. Stop for a moment and think about that statement. Sure, you can create a rule to drop or reject them, but at no point in time will the connection attempts stop coming in. Don't fool yourself. Can we block them with a myriad of firewall rules? Sure, but that would be impractical. How do we defend this machine, you ask? How about you stop worrying about who the attacker is and what it is they are doing, and worry about your server doing what it was deployed to do.

We know that no one should be connecting via services we are not offering (services meaning applications), yet it would be impractical to create hundreds of thousands of rules. Simplicity is the answer here. Block your own server from ever connecting “to” anyone else on services you are not offering. For example, on a Linux box I could create one all-inclusive rule that blocks my server from connecting out to anyone else on SSH (and every other privileged port, for good measure).

iptables -A OUTPUT -s 10.20.30.5 -p tcp --dport 1:1024 -j DROP


Simple. “Hey firewall, my address is 10.20.30.5, and any time I try to initiate something outbound to one of those important ports, you need to drop it. Don't allow it to happen.” Imagine that. One rule versus thousands. This rule sits alongside the inbound block rule. The approach here is altogether different.

Many on the enterprise scale would cry foul, and someone could come back and argue about tunneling through HTTP; however, when things are configured properly, nothing is stopping me as an engineer and/or security professional from taking a look at what MY MACHINES are doing.

My goal is to block my machine from initiating outbound connections on what I have labeled “important services.” This portion is always under my control; the connections of others trying to come in the door are not, and they will always keep trying.
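
Taking that single rule one small step further, a hedged variant (same assumed address, and rule order matters) logs the attempt before dropping it, so what I review is what my own machine tried to initiate:

# Hedged sketch: log, then drop, any outbound attempt from the server to a privileged port.
# The LOG rule must sit before the DROP rule or nothing will ever reach the log.
iptables -A OUTPUT -s 10.20.30.5 -p tcp --dport 1:1024 -j LOG --log-prefix "EGRESS-BLOCK: "
iptables -A OUTPUT -s 10.20.30.5 -p tcp --dport 1:1024 -j DROP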

In ending this little rambling: Richard Bejtlich has written an excellent book on the entire subject, “Extrusion Detection.” [7] So go back and ask yourself why you are worried about who is knocking on your door if you cannot see them; the whole scenario is out of your control. Furthermore, why even waste your resources answering that door if by now you know there is no one there?

Besides, even if you did see something, the reality is you will never know who they are anyway. The effort and resources are better spent investigating WHY your machine is initiating connections to the island of Footopia at 3AM. Don't worry too much about what is coming to your door; worry more about what is exiting your house. This is not to say never wonder who is knocking on your door, it is merely to say: “don't stress it that much,” you have more important things to worry about.
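
A quick way to start asking that question on any Linux host, sketched with standard tools and no assumptions beyond their being installed: list the established outbound TCP connections and the processes that own them.

# Hedged sketch: what is THIS machine talking to right now, and which process owns the socket?
netstat -ntp | grep ESTABLISHED
# or, on newer systems:
ss -ntp state established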


[1] http://www.amazon.com/Intelligence-Analysis-Environments-Security-International/dp/0313382654
[2] http://www.msnbc.msn.com/id/22770682/ns/technology_and_science-security/
[3] http://www.merit.edu/mail.archives/nanog/2008-01/msg00728.html
[4] http://www.alobbs.com/macchanger/
[5] http://karmetasploit.com/
[6] http://www.backtrack-linux.org/
[7] http://www.amazon.com/Extrusion-Detection-Security-Monitoring-Intrusions/dp/0321349962

Cross-posted from Infiltrated

Jimi Thompson: Several things are in play. One, having it on and open 24/7 means stuff works without a lot of thought. Two, management is rarely happy with anything IT-related that is broken. Three, most companies want the Walmart approach - we don't care how crappy it is as long as it's super cheap. Securing things properly requires skill and knowledge most are unwilling to pay for. Four, few if any are willing to listen until they are the ones who have been hacked and something drastic happens, like the CEO's golden parachute agreement getting mailed to staffers facing layoffs.

J. Oquendo: Jimi, I agree with most of your points. I'm hoping to promote sort of a "think different" approach to security via information/electronic warfare from the attacker's perspective. My hope is to let others in the industry see where the failures are, in hopes that we can collectively fix them.

To your points now:

1) Always on is not wrong; however, always on and misconfigured is outright dangerous. We need to stop allowing bad checks and balances. The old CYA (cover your a..) approach will always fail with security. From an engineering security perspective, I would rather keep it "always on, as long as it's been audited no differently than an attacker would give it a go" than keep it on for the sake of visibility.

2) As engineers and "underlings," I think managers need to start listening to their staff more. Here is an example: while going through an SOW for a client, I noted what tools, techniques and tactics I would use in an attempt to compromise them via a penetration test. An initial comment was: "no damn way... that *might* break something," which is common. I had to explain to them, in "deer in headlights" fashion, "at what point do you think an attacker would care about whether he disrupted your service because of the attack tool or method used?" I had to get them to understand that "staged" vanilla vulnerability scanning is nothing more than shoveling money into an open hole. I also had to establish trust, making them aware that I would do my best to act responsibly but would like to perform a real-world test. Dancing went on for a little while (dancing with words, that is) and they got it. However, I had to speak up loud and clear.

3) Cheaper is not always better. Cheap is a band-aid applied to a wound that really needs stitches. The issue with costs is that it is ALWAYS going to be cheaper to implement security controls than to pay for even *one* compromise.

4) I blame this on security managers who are too caught up in voodoo metrics that make sense on the CYA papers (here is your pie-chart risk analysis). You cannot tangibly measure an attack, nor the cost of one. You can throw up numbers; eventually one will stick.