WikiLeaks Lessons for IT Security

Monday, January 03, 2011

Eli Talmor



Much has been written about the WikiLeaks incident. This blog looks at the lessons relevant to IT security.

I will start by quoting Bruce Schneier, a US IT security expert:

1. Encryption isn’t the issue here. Of course the cables were encrypted, for transmission. Then they were received and decrypted, and — so it seems — put into an archive on SIPRNet, where lots of people had access to them in their unencrypted form.

2. Secrets are only as secure as the least trusted person who knows them. The more people who know a secret, the more likely it is to be made public.

3. I’m not surprised these cables were available to so many people. We know access control is hard, and it’s impossible to know beforehand what information people will need to do their jobs.

I would like to add that:

4. The issue of network complexity is paramount. In the perimeterized networks of the past it was possible to seal off access and enforce security policies; in today’s environment it is very difficult to police every endpoint.

5. Another problem is classifying information. At today’s technology level, we still need humans to determine what is classified and what is not.

Surprisingly, points 1 to 5 have a common denominator: although we are looking to protect sensitive information, the way to tackle it is not to safeguard every network element but to manage the people creating and using this information.

Let’s look at the flow of information that led to the WikiLeaks “flop”, as it has been reported.

US diplomats around the globe sent diplomatic cables, which were archived; people had access to them on a need-to-know basis determined by the archive administrators.

One of these people, who had legitimate credentials to access the archive, downloaded a huge volume of documents to a rewritable DVD using a loophole in his endpoint device-management policy.

The purpose of this blog is to propose a way to prevent this “Black Swan event” of IT security from happening again, or at least to reduce the potential damage.

Let’s modify the information flow that led to the “flop”. This modification does not require any changes to existing networks:

Any diplomat using a Government computer on a Government network is doing so as part of his or her job description. So any document he or she generates should be classified by definition.

Any classified document should be protected from the moment it is created, any time it is in motion on any network, and any time it is stored anywhere in the world.

Immediate issue: documents are protected by encryption, so the keys must be stored securely and must be distributed in real time to whoever is allowed to decrypt these documents.
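
To make this concrete, here is a minimal sketch of protecting a document at the moment of creation, assuming the Python `cryptography` package and a hypothetical `key_store` that stands in for a secure key service (an HSM or key vault in practice). The point is that the ciphertext can travel and be archived anywhere, while the key is held and released separately.

```python
# A minimal sketch, not a product design: key_store is a hypothetical
# stand-in for a secure, centrally managed key service.
from cryptography.fernet import Fernet

key_store = {}   # doc_id -> key; in a real deployment this lives in a key service

def protect_at_creation(doc_id: str, plaintext: bytes) -> bytes:
    """Encrypt the document the moment it is generated."""
    key = Fernet.generate_key()
    key_store[doc_id] = key                  # the key never travels with the document
    return Fernet(key).encrypt(plaintext)    # this ciphertext is what gets archived or sent

def read_document(doc_id: str, ciphertext: bytes) -> bytes:
    """Decrypt only after the key service releases the key in real time."""
    return Fernet(key_store[doc_id]).decrypt(ciphertext)
```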

Another problem is who reads a document: the “need to know” basis. Any person generating classified information may share it with a group of people that need to know. Usually this group will be small, and its membership is determined by the person generating the document.

Immediate issue: user authentication for group members must be done in real time, in the most secure manner available. Another immediate issue is cross-domain availability: encryption-key management and user authentication should work across the globe, irrespective of infrastructure.
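
A rough sketch of how key release could be tied to real-time authentication and a small need-to-know group follows. The class structure is illustrative, and `verify_credential` is a placeholder for whatever strong authentication the deployment uses (hardware token, biometrics, and so on); neither is an existing product API.

```python
# Illustrative sketch of need-to-know key release; all names are assumptions.
from cryptography.fernet import Fernet

def verify_credential(user: str, credential: str) -> bool:
    # Placeholder: a real system would verify a strong, centrally issued credential here.
    return bool(user) and bool(credential)

class NeedToKnowKeyService:
    def __init__(self):
        self._keys = {}      # doc_id -> document encryption key
        self._acl = {}       # doc_id -> set of user ids cleared for the document

    def create(self, doc_id: str, owner: str, group: set) -> bytes:
        """The author decides the small 'need to know' group at creation time."""
        self._keys[doc_id] = Fernet.generate_key()
        self._acl[doc_id] = {owner} | set(group)
        return self._keys[doc_id]

    def fetch(self, doc_id: str, user: str, credential: str) -> bytes:
        """Release the key only to an authenticated member of the group."""
        if not verify_credential(user, credential):
            raise PermissionError("authentication failed")
        if user not in self._acl.get(doc_id, set()):
            raise PermissionError(f"{user} has no need to know {doc_id}")
        return self._keys[doc_id]
```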

Obviously, the type of document cannot be restricted, since any type of information can be classified.

An audit trail of the information accessed by anyone should be preserved, and time expiration / information revocation should be built in.
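
The audit and expiry requirement could look something like the following sketch; the data structures and method names are assumptions for illustration only.

```python
# Sketch of an append-only audit trail with built-in expiry and revocation.
import time

class AccessController:
    def __init__(self):
        self.audit_log = []        # append-only: (timestamp, user, doc_id, allowed)
        self._expiry = {}          # doc_id -> unix time after which access stops
        self._revoked = set()      # doc_ids whose access has been withdrawn

    def set_lifetime(self, doc_id: str, seconds: int) -> None:
        self._expiry[doc_id] = time.time() + seconds

    def revoke(self, doc_id: str) -> None:
        self._revoked.add(doc_id)

    def check_and_log(self, user: str, doc_id: str) -> bool:
        """Every access attempt is recorded, whether it is granted or not."""
        allowed = (doc_id not in self._revoked
                   and time.time() < self._expiry.get(doc_id, float("inf")))
        self.audit_log.append((time.time(), user, doc_id, allowed))
        return allowed
```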

If information needs to be shared with people beyond the original group, for example to liaise with another “group”, this may be done by re-distributing it, again on a need-to-know basis.

The classified information may still be stored in a central archive, but only in encrypted form, so that the need-to-know limitation is preserved.

Obviously, Data Loss Prevention (DLP) policies need to be implemented on endpoint workstations across the globe: every document needs to be classified (i.e. encrypted) at generation. The encrypted document should also be “fingerprinted” to prevent distribution in unencrypted form.
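
As a rough illustration of the fingerprinting idea, an endpoint agent could keep hashes of classified plaintext and refuse to let matching content leave unencrypted. Real DLP products also match partial and transformed content; the exact-hash check below is a simplifying assumption.

```python
# Sketch of an endpoint DLP check: classified plaintext is fingerprinted at
# generation, and any outbound file matching a fingerprint is blocked.
import hashlib

classified_fingerprints = set()   # populated whenever a classified document is created

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_classified(plaintext: bytes) -> None:
    classified_fingerprints.add(fingerprint(plaintext))

def allow_outbound(file_bytes: bytes) -> bool:
    """Endpoint policy: never let classified plaintext leave in unencrypted form."""
    return fingerprint(file_bytes) not in classified_fingerprints
```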

The following chart demonstrates SentryCom Granular Authorization for real-time need-to-know file access enforced by DLP policies:


Does this prevent the “singularity” of individuals who need access to vast amounts of classified information, such as analysts? This information may still “channel” to them anyway.

Since this is a very distinctive group of people, their security requirements need to be handled in a very elevated fashion. But this group may be very limited in geography, so this is not so complicated to accomplish.


To summarize:

1. In a centralized data depository, every endpoint is created equal. Failure at one endpoint (the WikiLeaks “flop”) may have catastrophic consequences.

2. In a SentryCom need-to-know, peer-to-peer data sharing scheme, failure at one endpoint does not cause much damage.

3. Singular groups of people receiving vast amounts of information on a need-to-know basis need to be handled with much greater care than they are today.
