One of the perennial problems with symmetric-key encryption has been establishing a secure channel over which to pass the secret key.

It has always been much easier to compromise the transmission of the keys than to try to find some weakness in the encryption algorithm.

Military and government organisations have long had elaborate procedures for passing secret keys: they handle secrets routinely, so using the same channels to pass an encryption key is no great leap.

However, as everyone has become more connected, and especially with the commercialisation of the Internet, encryption has become a requirement for the vast majority of networked users.

Thus, the traditional methods of passing secret keys are impractical, if only because you might not even know in advance whom you want to communicate with in an encrypted fashion.

It was realised in the 1970s that encryption needed to be more accessible, so a great deal of work went into algorithms that could ensure a key passed over relatively insecure channels was not compromised.

Leaders in the field were Whitfield Diffie and Martin Hellman. In the course of this work, it was realised that there was another way of tackling the problem.

In 1976 Diffie and Hellman published their seminal paper, “New Directions in Cryptography”, and public-key encryption was born.

When the early public-key encryption methods were implemented and compared with symmetric-key encryption, they were found to be significantly slower; communication using public-key encryption alone would, at the very least, require far more processing power than would otherwise have been the case.

But wait! The original problem being studied was secure key transmission, so why not use public-key encryption to transmit the symmetric key securely, and then switch to the faster, more efficient symmetric-key algorithms for the data itself? In essence, that is how most so-called public-key encryption works today.
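This hybrid pattern can be sketched with a toy example in Python. Everything here is an illustrative stand-in, not anything secure: the RSA modulus is tiny (real ones are hundreds of digits), and the XOR keystream is a placeholder for a real symmetric cipher such as AES.

```python
import random

# Toy RSA key pair built from the textbook primes 61 and 53.
# e*d = 1 (mod (p-1)*(q-1)) = 1 (mod 3120), so d = 2753.
p, q = 61, 53
n, e, d = p * q, 17, 2753

def xor_stream(key, data):
    """Toy symmetric cipher: XOR the data with a keystream seeded by the key.
    Applying it twice with the same key recovers the original bytes."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

# 1. Public-key step: transport a random symmetric session key.
session_key = random.randrange(2, n)
wrapped = pow(session_key, e, n)       # sender encrypts with the public key (n, e)
unwrapped = pow(wrapped, d, n)         # recipient recovers it with the private d
assert unwrapped == session_key

# 2. Symmetric step: the bulk data uses the faster symmetric algorithm.
ciphertext = xor_stream(session_key, b"the bulk message")
plaintext = xor_stream(unwrapped, ciphertext)
print(plaintext)  # b'the bulk message'
```

Only the short session key pays the cost of the slow public-key operation; the message itself, however long, is handled by the cheap symmetric cipher.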

The majority of public-key encryption algorithms rely upon a mathematical subtlety. One of the earliest was based upon prime numbers: if a number N is the product of two prime numbers, and N is very large, it is practically impossible to work out which two primes were multiplied together (a problem known as factorisation).

It’s not that you can’t calculate the two prime numbers; there are many algorithms for doing so. It is that running such an algorithm to completion takes so long (sometimes thousands of years) that by the time you finish, the information you can recover is worthless. Even with the huge budgets available to governments, it is possible to reduce these timescales only marginally.
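To get a feel for the scale of the problem, here is a naive Python sketch of factorisation by trial division; the small semiprime is illustrative, since a real RSA modulus has hundreds of digits:

```python
def smallest_factor(N):
    """Return the smallest prime factor of N by trial division.
    The loop runs up to sqrt(N) times: fine for a small N, but for a
    600-digit RSA modulus that is roughly 10**300 iterations."""
    f = 2
    while f * f <= N:
        if N % f == 0:
            return f
        f += 1
    return N  # N itself is prime

# A small semiprime: the product of the primes 104723 and 104729.
N = 104723 * 104729
print(smallest_factor(N))  # 104723
```

Doubling the number of digits of N roughly squares the work this loop must do, which is why factorisation times are quoted in centuries rather than seconds.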

Public-key algorithms are therefore able to publish N itself as part of the public key without fearing that its two prime factors might be discovered. It’s a fact of life that even with the massive increases in computing power over recent years, the slowness of traditional algorithms for factorising these large numbers means public-key encryption is still quite secure.

Of course, “secure” is a relative word, and there are many ways of recovering the private element used to derive N. These do not involve some fiendishly clever mathematical method (although a small army of mathematicians is seeking one) but rather simpler means, such as accessing the PC that holds the private element!

As ever, the weakest link determines the strength of the chain. Having said that, there is now an emerging threat to public-key encryption which does not rely upon such trivial methods: quantum computing, a field first proposed by Richard Feynman in 1982. To understand why quantum computing poses a threat, one first needs to understand a little about how it works.

In the computers we are familiar with, processing is done using bits. A bit has two possible values: 0 and 1. A quantum computer uses qubits. A qubit can take the values 0 and 1 too, but it can also be in any combination of the two simultaneously. In quantum physics this is called superposition.

So, if you have 2 qubits you can have 4 possible states, and 3 qubits give 8 possible states; and all exist simultaneously. In a bit-based computer you have the same number of possible states, but only one exists at any one time. It is the fact that these states can coexist in a quantum computer which is both counter-intuitive and extraordinarily powerful.
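The difference can be sketched crudely in Python: a classical n-bit register is a single value out of 2**n possibilities, while describing an n-qubit register takes 2**n complex amplitudes at once (a minimal illustration, not a simulator):

```python
n = 3

# A classical 3-bit register holds exactly one of the 2**3 = 8 states.
classical_state = 0b101

# A 3-qubit register in an equal superposition needs one amplitude
# per basis state; each squared amplitude is that outcome's probability.
amplitudes = [1 / (2 ** n) ** 0.5] * (2 ** n)

print(len(amplitudes))                            # 8 numbers to describe 3 qubits
print(round(sum(a * a for a in amplitudes), 10))  # probabilities sum to 1.0
```

The bookkeeping doubles with every extra qubit, which is exactly why simulating even a modest quantum computer classically becomes infeasible.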

These qubits are manipulated using quantum logic gates, in the same way that conventional computation is done by manipulating bits using logic gates. The detailed maths is not important here. Suffice to say that you can operate on multiple simultaneous states, thereby increasing the amount of computation you can undertake compared with a conventional computer.

So, if you had a 2-qubit quantum computer you could, in theory, compute at four times the speed of a 2-bit conventional computer. There are a few problems, though, in translating theory into practice.

First, the algorithms developed over many years for conventional computers have been optimised for their architecture, and different algorithms are needed to run a quantum computer. Hence, trying to compare speeds of conventional and quantum computers is like comparing apples with bananas. However, one of the earliest algorithms developed for quantum computing was by Peter Shor.

Shor’s 1994 algorithm was specifically designed to factorise numbers into their prime components. It runs on a quantum computer in polynomial time: for a number N, the running time grows only as a polynomial in the number of digits of N (roughly (log N)³), rather than exponentially. Even for the largest numbers used in public-key encryption today, that makes it perfectly feasible to factorise N in a meaningful timescale.
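The number-theoretic core of Shor’s algorithm can even be run classically on a toy scale in Python. Only the order-finding step in the middle, done here by brute force, is what the quantum computer accelerates:

```python
import math
import random

def factor_via_order(N):
    """Sketch of the number theory behind Shor's algorithm, for a semiprime N:
    find the multiplicative order r of a random a mod N, then derive a factor
    from gcd(a**(r//2) - 1, N). Classically the order-finding loop takes
    exponential time in the digits of N; on a quantum computer it is the one
    step that runs in polynomial time."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g            # lucky draw: a already shares a factor with N
        # Brute-force order finding: smallest r with a**r = 1 (mod N).
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:
                g = math.gcd(y - 1, N)
                if 1 < g < N:
                    return g    # a nontrivial factor of N
        # Otherwise this a was unlucky; try another.

print(factor_via_order(15))     # prints 3 or 5
```

Swapping the brute-force loop for quantum period finding is precisely the exponential-to-polynomial jump that threatens factorisation-based public keys.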

Since Peter Shor developed his quantum computing algorithm many others have followed, and it is worth noting that a significant number of them are aimed at breaking the mathematics underpinning more recent public-key encryption. As public-key methods based on problems other than prime factorisation have been developed, quantum algorithms targeting them have followed very quickly.

Second, there is the much quoted, but less well understood, Heisenberg Uncertainty Principle and, more generally, the problem of quantum measurement. The bottom line is that if you observe a quantum system, you affect it.

Hence, measuring a qubit forces it into one definite state, but that state might not be the result of the computation; it could have been altered by the act of observation.

So, the moment you try to measure the answer calculated by your superfast quantum computer it loses its ability to give you the correct answer.

That would seem to render quantum computing rather pointless. But, there is another quantum effect that can be employed: quantum entanglement.

This is where, if two objects have previously interacted, they stay “connected” such that by studying the state of one object you can determine the state of the other. Hence, you can determine the state of the qubit holding your answer by studying another object with which it is entangled.

Again, this is counter-intuitive, but has been proven, so all one really needs to know is that there is a way of getting your answer out of your quantum computer.

Lastly, there is the small matter of implementation. Until recently this was a show-stopper.

Typically quantum computers are based on light, as photons are relatively easy to work with and measure.

However, anyone who has seen an optical bench will know that they are enormous: they require great mass to damp even the slightest vibration, and they are prone to all manner of environmental factors. Hardly the small “chips” we are used to! Also, these implementations are typically designed to run a single algorithm: the algorithm is built into the hardware.

Having said that, it wasn’t that long ago that conventional computers with far less processing power than the average PC required air-conditioned luxury and a small army of attendants to keep them functioning. It did not take very long for the size to shrink dramatically and for the environmental requirements to be quite relaxed.

Not surprising, then, that a company in Canada (D-Wave Systems Inc.) already offers access to quantum-based computing. It’s expensive and the size of a room, but we all know that won’t be the case for long.

2011 brought about some major developments which might well make 2012 the year quantum computing comes of age.

Most significant of these was the work of a team at Bristol University, who developed a small chip housing a quantum computer.

It had only 2 qubits but it was of a size that could be used for the mass market, and, crucially, it was programmable.

We are now entering a new era where we have programmable, relatively inexpensive, relatively small, quantum computers visible on the horizon, and we know that such computers have the potential to undermine the mathematics upon which current public-key encryption depends.

Given all of that, maybe it’s time to reconsider the role of public-key encryption and where it might be more sensible to rely wholly on symmetric-key encryption.

*Cross-posted from Professor Alan Woodward*