Quantum Key Exchange and the BB84 protocol
Q.M. through examining position and momentum.
There are two kinds of physics (sort of).
Newton described the movements of the planets and other large objects so incredibly well that we now have Newton's laws of motion. For an idea to become a law in physics, it has to pass every practical and theoretical test, make successful predictions and, basically, be a rock-solid observation which works in a general way. While generality in mathematics means 'always', in physics it means almost always. For example, a law of physics might have a realm of validity, outside of which it may be sidelined or augmented. Laws are almost always simple and possess or describe symmetry. A law notes that something happens; a theory explains why and how it happens.
Physics based on Newton's laws and other large-scale observations will, in the context of this article, be referred to as classical systems.
Of course, there is only one overall physics, but the tiny little things in this world don't obey classical physics. For these, we need to extend the classical laws and invoke quantum mechanics, quantum electrodynamics, and all the mind-numbing weird stuff that you, that table, this cup, and that chewing gum are really made from. It just so happens that when things get a little physically bigger, these quantum effects become insignificant. There is no hard boundary at which this happens. Quantum mechanics describes uncertainty and random behavior, but as more objects are collected in a system, and more measurements taken, the average of the things they do starts to line up with classical equations with less error. A single photon or electron is only a quantum mechanical object, but when you analyze what a bunch of them do, the average of the fuzzy predictions of quantum mechanics slides gracefully and smoothly into the macroscopic world as the numbers in the system increase.
We will talk of wave-particle duality, but do so with caution because these are descriptions which belong to the realm of validity of macroscopic experience and have little place in quantum mechanics. We effectively need a new language for quantum mechanics and should ignore talk of particles, waves, orbits and point particles. This would, in fact, make the subject much less confusing. The only language we have which removes confusion is mathematical, but luckily it is simple compared with the complex dynamical classical systems that we are capable of analyzing.
Traditional cryptography
Traditional cryptography uses classical physics and objects of classical size to obfuscate information. For a deeper discussion of some fundamentals of security, please see my discussion of CIA.
In traditional security, there is a simple way to encrypt data: find a reliable way to produce a random stream of data, and mix it with your own data. Unfortunately, there are two issues with this.
 It is difficult to produce a high quality set of random data at speed.
 This random data (a key) must be distributed among the parties allowed to decrypt the data. Perfect security demands the key is as long as the message data. If you have a secure channel to distribute the key, then you may as well use it for the message.
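A minimal sketch of this mixing in Python (the helper name `xor_mix` is mine): XOR the message with a truly random key of the same length, and the same operation decrypts. The distribution of that key is exactly the problem discussed above.

```python
import secrets

def xor_mix(data: bytes, key: bytes) -> bytes:
    """Mix (or unmix) the message with the key stream, byte by byte."""
    assert len(key) >= len(data), "perfect secrecy needs a key as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the hard part: generating and sharing this
ciphertext = xor_mix(message, key)
assert xor_mix(ciphertext, key) == message   # the same key undoes the mixing
```

Because XOR is its own inverse, encryption and decryption are the same operation; everything rests on the key being truly random, as long as the message, and known only to the two parties.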
The problem of key distribution has been mostly solved since the mid-1970s, when two independent research efforts invented asymmetric encryption. The first to be publicly published was by Whitfield Diffie and Martin Hellman in 1976, but the technique was first invented within GCHQ, by Malcolm J. Williamson, who was forbidden by the Official Secrets Act to make it public. It was finally made public in 2002. Both techniques create two keys, one of which is kept private while the other is made public. Once someone encrypts a message with a person's public key, only the owner of the corresponding private key can decrypt it. If someone encrypts a message with their private key, then anyone can use the corresponding public key to decrypt it. This proves that the only person who could have sent the message is the person who made the public key available. This non-repudiation relies on a strong authenticated binding between the public key and the person who owns the private key. The fact that one key is used to encrypt while the other key is required to decrypt is what makes the encryption asymmetric.
Asymmetric encryption is a complex system when fully implemented as a Public Key Infrastructure because the authenticated binding between the public key and the person is difficult to implement. It does work well, but today it enjoys a relatively limited deployment; its complexity and cost have hampered wide-scale implementation. While financial transactions should use full PKI, many administrators use a pre-shared key for point-to-point requirements, like remote access into a firewall administration account, or a business link.
The pre-shared key suffers from the key distribution problem. If anyone copies that key while it is in transit from A to B, then subsequent transactions are at risk. Furthermore, because the key is a classical construct, the adversary M can copy it without leaving any evidence.
It is the ability to secretly copy a key which is thwarted by quantum key distribution.
A, B and M
In cryptography discussions, we often talk about getting a message from A to B. To make this more fun, it's common to refer to A as Alice, and to B as Bob.
Therefore, Alice desires to send a message in secret to Bob. But someone trying to intercept and even alter that message needs a name too! We shall call him Mallory.
Mallory is the man-in-the-middle. He is a bit of a cross-dresser, apparently. Sometimes he looks like Alice as far as Bob is concerned, and looks like Bob to Alice. When Mallory can impersonate Alice and Bob, he is performing a man-in-the-middle attack.
Bose-Einstein condensate
Quantum properties.
There are many itty-bitty sized things in physics which are subject to quantum effects. The most commonly discussed seem to be photons, electrons and mesons. However, some macro phenomena are also observed, for example Bose-Einstein condensation. We will concentrate on the photon because this is what is used in a fiber-optic communication medium.
A photon is a disturbance in an electromagnetic field. If you study just one of them, it behaves like a wave, but also like a particle. As a particle, it has some interesting measurable properties. It has something called spin, with a spin value of 1. It has a wavelength, called the de Broglie wavelength, and it has no rest mass, but it is influenced by gravity because of its relativistic mass. It travels at only one speed (c). We have machines that can detect a single photon. Polarizing filters and beam splitters can find polarization, and when that measurement is made, the photon's state is changed in a probabilistic way.
We can do quantum encryption using several kinds of quantum properties, but in this article, the photon's polarization measurement is used. I will try to avoid saying that a photon 'has' a particular polarization, because when you measure polarization, the photon can be blocked, passed unchanged or messed up depending on how it is measured. However you decide to measure it, that measurement alters the outcome of a following measurement.
Individual photons must always travel at c. This is well known for free space, but even in a medium each photon travels at c; however, a wave analysis shows they suffer a phase shift when they interact with atoms. This changes their wavelength and momentum, but not their speed. Sometimes a photon is absorbed, and a new one is emitted.
The wavelength of light is related to its energy, and different wavelengths take different paths through a medium. This causes dispersion. There is a fiber-optic medium called 'single mode'. This is what is used for quantum key distribution, as opposed to multimode fiber, because it causes less dispersion. A pure quantum state needs to be preserved to send a bit of information from A to B. Dispersion upsets this state (causes decoherence) and introduces errors in the result.
Some notation.
In texts involving photons, and other quantum particles, you will see some strange symbols. It's worth describing the Ket because many texts use the symbol when describing quantum systems.
The Dirac bra-ket notation is a concise and convenient way to describe quantum states. Think of a stylized arrow as in the figure, but it's not a vector in the traditional sense because it really lives in a vector space. The arrow represents a quantum state. But you need to understand that this arrow of unit length has n degrees of freedom. Its tail is pinned to the origin, but the point is free to move. For n=3, the Ket lives in a sphere and the basis states are along the x, y and z axes. We see it as a snapshot above, but you need to have in the back of your mind that it represents (all at once) any construction that can be made from the basis states. In two dimensions, the arrow is free to move in a plane, like on a piece of paper, in which case it would touch all points of a unit circle. The bases in two dimensions are like the familiar x-y graph axes at right angles to each other. In three dimensions, the bases are x, y and z, where z is perpendicular to the x-y plane, like three joining edges of a cube. Now imagine that along the bases are three beads, one on each. No matter where the arrowhead lies, you can drop (project) three lines from the tip to each basis.
Our minds are conditioned in a 3D world. We can easily imagine and make physical models of three-dimensional constructions. But a 4D object defies visualization. Quantum mechanics and many other disciplines need more than three dimensions, and these are modeled with row vectors (the so-called bra), column vectors (the ket), and observables as matrices. The bra and ket are convenient ways to manipulate matrices. For our purpose, we don't need to describe the bra, but you can think of it as a reflected Ket because it lives in the space of complex conjugates to the ket. Together, these are pronounced "bracket" notation.
For quantum key distribution, we can use just two degrees of freedom. Then the basis is in x-y coordinates, and those beads can each slide to any point along the basis as long as the arrow still touches the inside of the sphere in 3D or the circle in 2D. The beads' positions are nicely described in 2D by a complex number which looks like this:
a + ib
Symbolically, in print, a Ket arrow is drawn using a bar and something like a greater-than sign...
|>
But to avoid a little confusion with the meaning "greater than", it has a shallower angle:
| ⟩
We could apply a value, an amplitude, to this arrow like this:
2| ⟩
and/or rotate it.
But that is inconvenient. So we may as well just use a complex modifier (see below).
The Ket vector can describe the state of a quantum thing, and we name the particular quantum thing by inserting a symbol between the bar and the arrowhead. It's a way of labeling the arrow.
f|α⟩
You would pronounce this as "f Ket alpha".
For the record, the notation ⟨a|b⟩ implies a row vector a, a column vector b, and results in a single value after taking the inner product of a and b.
In general, we need to weight the Ket in a flexible way and use a complex number to describe those projections on to each basis for any particular state within all the possible states the Ket allows.
c|α⟩
c is a complex number, it weights the Ket and we call it a complex scalar.
In the complex number c, as in the example a + ib, the a and ib parts are at right angles to each other (orthogonal), and the vector you get by adding two components at right angles like this is the hypotenuse of a triangle. From Pythagoras, we get:
h^{2}=a^{2}+b^{2}
and h^{2} for our systems under description is a statistical measure of probability. This is the famous "square of the amplitude" that Richard Feynman talked of in his popular lectures on quantum electrodynamics.
What we have described is a Hilbert space. It's the place where the Ket lives.
In quantum mechanics, where energy is quantized, we are dealing with two outcomes: either it exists or it does not. So we must make h^{2} equal to 1 or zero. There are no intermediate outcomes to any experiment, but there are an infinite number of ways to prepare (orient) the initial experiment. This is very different from any classical experiment.
Now the system that we are studying is what is called orthonormal.
The reason we do this is because of all the orientations of a Ket in Hilbert space, each one has a certain probability to be in that orientation after a measurement, and all these probabilities when added together must equal 1.
Incidentally, if we needed to work in an N-dimensional space, it would be impossible to visualize, but the mathematics is easily extended by using bigger vectors. Here is the equivalent to a diagonal in 4D and 8D:
h^{2} = a^{2} + b^{2} + c^{2} + d^{2}
h^{2} = a^{2} + b^{2} + c^{2} + d^{2} + e^{2} + f^{2} + g^{2} + h^{2}
The following is also acceptable as you can use a symbol or a word to label the ket:
|thing⟩
and you would say "thing-ket", and you can also use letters, arrows, squiggles etc. to label any particular quantum thing.
For quantum key exchange, we are interested in describing binary digits encoded on a quantum property, at different times, over two different bases: a rectilinear basis ⊕ and a diagonal basis ⊗.
Sometimes, we wish to show an ordered measurement. Let's say there are two possible states like U and D. Then we can write
|U1,D2⟩
and we could perform some operation with this on another that looks like this:
|D1,U2⟩
The subscripts tie each state to a particle, so we can read it as: "if particle 1 is found in state U, then particle 2 is found in state D". This is useful when describing something called entangled pairs. (More later.)
Mapping quantum states in different bases onto a boolean value.
Basis and bits.
The horizontal-vertical (rectilinear) basis may be represented like this: ⊕
A basis at the ±45° angle may be represented like this: ⊗
The figure to the right gives a few examples of encoding quantum states as binary. Here is an explanation for each of the examples:
1. Rectilinear basis: a horizontally polarized state maps onto boolean 0
2. Rectilinear basis: a vertically polarized state maps onto boolean 1
3. Diagonal basis: a 135° polarized state maps onto boolean 1
4. Diagonal basis: a 45° polarized state maps onto boolean 0
5. Since magnitude does not matter for a ket, adding these leaves it unchanged.
6. As 5. above.
7. This is a state of superposition. It is not a function because the outcome is either a 1 or a 0, not both, with a 50:50 probability of one or the other.
8. As 7. above.
A zero and a one bit in either basis is represented thus:
|0⟩
|1⟩
We can therefore use both kinds of polarization and one and zero to get four possible states.
|⊕:0⟩
|⊕:1⟩
|⊗:0⟩
|⊗:1⟩
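These four states can be tabulated. Here is a minimal sketch in Python; the dictionary name and the angle-to-bit pairing follow the mapping described above, which is a convention agreed between the two parties, not something fixed by physics:

```python
# The four BB84 states as (basis, bit) -> polarization angle in degrees.
# "+" stands for the rectilinear basis, "x" for the diagonal basis.
BB84_STATES = {
    ("+", 0): 0,      # horizontal
    ("+", 1): 90,     # vertical
    ("x", 0): 45,     # diagonal
    ("x", 1): 135,    # anti-diagonal
}
```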
Qubits
A qubit is a unit vector in a two-dimensional complex vector space with a fixed basis.
An orthonormal basis |0⟩ and |1⟩ may correspond to vertical polarization |↑⟩ and horizontal polarization |→⟩.
The basis states |0⟩ and |1⟩ are used to encode the binary values 0 and 1.
A fundamental property used by quantum key exchange is that a single pure quantum state will end up in a state of superposition if it is measured using any other basis.
|1⟩ is a 'pure' state. If it was prepared along ↑ and you measure it with the rectilinear basis oriented along ↑, then the state is unchanged. But measuring it with the rectilinear basis → gives 0. You can intuitively see that, somehow, the probability of measuring a 1 ranges from certainty down to zero as the orientation of the measurement basis, compared to the basis at creation, rotates from 0° to 90°. Exactly in the middle, at 45°, as you would expect, the probability of measuring 1 or 0 is exactly 1/2. What happens at other angles is just trigonometry, but we don't need to explore that to describe the BB84 protocol.
If |1⟩ prepared in the rectilinear basis is measured with a diagonal basis, then the output is (|0⟩ + |1⟩)/√2. This is now a superposition. The expression says that there is a 50:50 chance that you will measure a binary 0 compared to a binary 1, because the ratio of the amplitudes of |0⟩ and |1⟩ is unity (each has amplitude 1/√2).
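A toy simulation makes the 50:50 outcome concrete. This is a classical sketch of the measurement statistics only, not real quantum mechanics, and `measure` is a hypothetical helper of my own:

```python
import random

def measure(bit: int, prep_basis: str, meas_basis: str) -> int:
    """Outcome of measuring a photon prepared as `bit` in `prep_basis`.
    Matching bases give a certain outcome; mismatched bases give a 50:50
    coin flip, mirroring the equal 1/sqrt(2) amplitudes on |0> and |1>."""
    if prep_basis == meas_basis:
        return bit
    return random.randint(0, 1)

random.seed(1)
trials = 10_000
ones = sum(measure(1, "+", "x") for _ in range(trials))
print(ones / trials)   # hovers around 0.5 for mismatched bases
```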
When we do quantum key exchange, the original encoding basis is unknown to the recipient. So the recipient, B, has to guess. This means that, on average, half of the measurements of the key's bits are inconclusive. Furthermore, those measurements produce a quantum state that carries no historical information about its pre-measured state. This means that an interceptor cannot measure all the bits and reconstruct the original; this is the "no cloning theorem". It would seem that this property also bars B from learning what A actually sent, but we can use an unencrypted but authenticated side channel to get around that.
Quantum states cannot be reliably copied.
It is impossible to copy a quantum state reliably and leave the original in the same state. This is called the No Cloning Theorem (Wootters, Zurek and Dieks, 1982). If such copying were possible, then some solid physics would come crashing down:
 You could send signals faster than light. See my discussion on this for more information.
 Heisenberg's uncertainty principle would be violated.
These are very good reasons to trust the No Cloning Theorem.
We have a practical issue to resolve.
No communication channel is free of noise, and a channel used for quantum communication is no exception. The presence of noise limits the speed and accuracy that information can be moved. Additionally, quantum states have to be pure when they are created, and pure when they are received. If any physical interaction disturbs the pure state, this ruins the chances of detecting the original state at the target end. This is called decoherence.
In everyday classical communication, we get around the problems that noise introduces by sacrificing some of the bandwidth for extra information that would be redundant in a clean channel, but is useful for making corrections in a noisy one. The use of error correction prevents costly retransmissions. In general, a very noisy channel benefits from strong error correction, but in a clean channel this is a waste of bandwidth, so, as is common, engineers strike a compromise. The definitive reference on this topic is A Mathematical Theory of Communication by C. E. Shannon.
Classical error correction is not possible for a quantum exchange of information because of the No Cloning Theorem. This put the brakes on the development of QKD for a long time, until a method for quantum error correction was invented (Shor and Steane, 1995). So you could say that QKD is provably secure because of the No Cloning Theorem, but made practical by the invention of quantum error correction.
We shall see later that practical considerations suggest caution regarding the viability of implementing QKD securely. One hint is: although the No Cloning Theorem prohibits perfect copying of quantum states, imperfect copying is possible.
Something special happens when photons pass through a polarizing filter.
If you pass scattered light through a polarizing filter, some rays are blocked, and some are passed. Here is a nice interactive applet on the internet which you can play with. You can also do this experiment with a pair of polarizing sunglasses and an LCD display. If the sunglasses and the LCD display are both polarized, then there is an orientation where the LCD readout cannot be seen. This is because the light from the LCD readout is polarized into one plane, and when the sunglasses' polarization is at 90° to the LCD display's, no light is passed.
It is less often appreciated, and rather surprising, that you can take this arrangement where all the light is blocked by two filters, put yet another polarizing filter in between, and light will pass. In this setup, a quarter of the light that passed the first filter makes it all the way through the three filters when the middle filter is oriented at 45°!
|⊕,0⟩
through a vertically polarized filter will be blocked.
|⊕,1⟩
through a vertically polarized filter will be passed unchanged.
Let's use ↑ to represent a vertically polarizing filter, → for a horizontally polarizing filter, and ⁄ for a 45° oriented polarizing filter. Here are the outcomes of various experiments (assume ordinary light enters the first filter from the left):
1. → |⊕,0⟩ → |0⟩ = 0
2. ↑ |⊕,1⟩ ↑ |1⟩ = 1
3. ↑ |⊕,1⟩ → = null
4. → |⊕,0⟩ ↑ = null
5. → |⊕,0⟩ ⁄ (1/√2)|0⟩ + (1/√2)|1⟩ = ??
1. reads: Ordinary light enters a horizontal polarizing filter, produces Ket 0 in the rectilinear basis, is filtered again by a horizontal polarizing filter and passes unchanged, interpreted as a binary 0.
2. reads: Ordinary light enters a vertical polarizing filter, produces Ket 1 in the rectilinear basis, is filtered again by a vertical polarizing filter and passes unchanged, interpreted as a binary 1.
3. reads: Ordinary light enters a vertical polarizing filter, produces Ket 1 in the rectilinear basis, is filtered again by a horizontal filter and the result is null  that is the photons are not passed.
4. is analogous to 3.
5. It says, Ordinary light enters a horizontal polarizing filter, produces Ket 0 in the rectilinear basis, is then passed through a diagonal filter, and produces (in the rectilinear basis) Ket {0,1} with amplitude 1/√2. The outcome is useless as a binary encoding because sometimes it's zero, and sometimes it's one at random.
In classical physics this would be a bit like throwing a football at a chicken-wire fence and finding that it might come out the other side.
But in the quantum world this sort of thing really happens, and (1/√2)|0⟩ + (1/√2)|1⟩ implies that 0 and 1 are equally likely, as it is the ratio of the weightings on each ket which assigns their relative probabilities. There is a 50:50 chance of getting one or the other and no way to predict a priori what will happen.
Where does the 1/√2 come from ?
We can do this intuitively. You don't get something for nothing (conservation of energy). If there is a 50:50 chance of getting |⊕,0⟩ or |⊕,1⟩, then the energy must be shared between them, and their combined value must be 1.
1 = √( (1/√2)^{2} +(1/√2)^{2} )
This is for a right-angle triangle with an angle of 45° (the rotational angle between the rectilinear and diagonal bases).
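We can check this arithmetic, and the 1/2 weightings of the three-filter experiment that follows, in a couple of lines of Python:

```python
import math

amp = 1 / math.sqrt(2)
# Two outcomes, each with amplitude 1/sqrt(2): the hypotenuse has unit length,
# so the squared amplitudes (the probabilities) sum to 1.
assert math.isclose(math.sqrt(amp**2 + amp**2), 1.0)

# With weightings of 1/2 on each ket, the same right-triangle sum gives a
# hypotenuse of 1/sqrt(2), which appears when a third, diagonal filter is inserted.
assert math.isclose(math.sqrt(0.5**2 + 0.5**2), amp)
```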
Now some magic.
Recall
↑ |⊕,1⟩ → = null
If we place a third filter / in between:
↑ |⊕,1⟩ / → = (1/2)|0⟩ + (1/2)|1⟩
Putting another filter into a system that already blocked the photons suddenly allows photons through again! Once more, the outcome is indeterminate as far as assigning a binary value.
The 1/2 comes from the intermediate stage in the same way as before but this time the hypotenuse is 1/√2 :
1/√2 = √( (1/2)^{2} +(1/2)^{2} )
It's the ratio of the two weightings on the kets which is important. Since they are the same, the ratio is 1:1, which means each outcome has the same probability.
Perfect authentication?
The Wegman-Carter (type) authentication.
Quantum Key Distribution relies on an authenticated classical side channel. An eavesdropper can read the data, but the channel must be authenticated: Alice must be sure that Bob is reading and writing the channel, and vice versa. With any authentication scheme, it is possible for the eavesdropper to pick a random token and hope that it matches the one used by Alice and Bob. Wegman-Carter authentication makes the probability of this arbitrarily small.
NIST high speed quantum key distribution
HAR 2009: How we eavesdropped 100% of a quantum cryptographic key 1/6
Parity.
Parity is a very crude checksum. In the mathematical sense, it just determines whether a number is even or odd, which for a binary number is equivalent to the state of the least significant bit. But in telecommunications, we run through all the bits to decide parity.
The following XOR truth table is applied:
00 := 0
01 := 1
10 := 1
11 := 0
Note that pairs of like bits yield zero.
The parity of the following
110010101010
may be computed:
11 00 10 10 10 10
0 0 1 1 1 1 → regroup: 00 11 11
0 0 0 → regroup: 00, with 0 left over
0 0 => 0
The parity is 0. Now let's change any single bit:
11 00 10 11 10 10
0 0 1 0 1 1 → regroup: 00 10 11
0 1 0 → regroup: 01, with 0 left over
1 0 => 1
Parity detects any single-bit error and, in fact, the presence of any odd number of errors. It does not apply error correction, and it will not detect two-bit errors, or any even number of errors. Despite these limitations, it is a very quick test and has significant uses.
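As a sketch, parity can be computed with a running XOR over the whole bit string; the examples reuse the bit patterns above:

```python
from functools import reduce

def parity(bits: str) -> int:
    """Running XOR over all bits: 1 if the count of 1s is odd, else 0."""
    return reduce(lambda a, b: a ^ b, (int(b) for b in bits))

assert parity("110010101010") == 0   # six 1s: even, parity 0
assert parity("110010111010") == 1   # one bit flipped: parity changes
assert parity("110011111010") == 0   # two bits flipped: the error goes undetected
```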
The BB84 protocol
At the very start, Alice and Bob need to share a small secret key. This key is initially installed with the system, and of course it must be protected and truly secret. To do this, a trusted courier is used to transport the keys to each site. Since this is a one-off task, and the key size is small, it is feasible to do this. Quantum key exchange grows this key while preserving secrecy. Alice and Bob also need good sources of true random numbers.
Alice and Bob sacrifice and destroy a small part of the initial secret key and are able to replace the lost part with a longer and equally secure string. In this way, the initial secret grows in length.
When Alice sends a string of random qubit-encoded digits to Bob, Bob does not know Alice's choice of basis, rectilinear or diagonal. Bob independently and randomly chooses a basis for measuring each bit. The quantum properties stated earlier ensure that, on average, half the bits transferred are created and then measured with the same basis at both A and B, while the others are thrown away. Obviously, it's tough to send a message using just this method because only half the information gets through. But we don't need to transfer a specific sequence; all we need is to end up with two identical sequences of random bits which exist only at A and B and nowhere else.
During transmission, Mallory could select a small number of qubits and measure them. As long as he stays below the error rate of the quantum channel, Alice and Bob will not become suspicious. At the end of the key transmission, Mallory will, on average, have correctly guessed the basis for only half of the bits he stole. He can, however, watch Alice and Bob discuss which bits were measured with which basis and whether each bit is part of the secret key. This is why Alice and Bob wait until the end of the key transmission before sifting. If they did this for each qubit as it was sent, then Mallory would know the correct basis for all the bits he stole.
The classical channel is used to sift the information and throw away those bits that are measured in the incorrect basis. The throwaway is therefore expected to be close to 50%.
If Mallory tries to intercept this transfer, he has some serious problems. If Alice is sending random digits to Bob and Mallory tries to impersonate Bob, he will initially succeed, since Alice does not know what Bob will choose as the bases. But Mallory has no way to reliably clone the bits that he measures, and he would need to clone as many bits as would typically be correctly transmitted by the imperfect quantum channel. When Bob gets a forgery from Mallory, the measurement-and-resend attempt introduces another layer of random outcomes, so around 25% of the bits that Alice and Bob compare will disagree, where normally nearly all would agree. Alice and Bob can easily detect the interception and take action. They can do this in full view over a public network without loss of security.
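Here is a rough classical simulation of the protocol's statistics (my own sketch, not a reference implementation): with no eavesdropper, the sifted bits agree perfectly, while a full intercept-resend attack pushes the error rate among sifted bits to around 25%.

```python
import random

random.seed(7)
N = 4000  # number of qubits Alice sends

def run_bb84(eavesdrop: bool) -> float:
    """Return the error rate among sifted bits (a classical toy model)."""
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]
    bob_bases   = [random.choice("+x") for _ in range(N)]

    sent = list(zip(alice_bits, alice_bases))
    if eavesdrop:
        # Intercept-resend: Mallory measures in a random basis and re-sends
        # what he saw; the original state is destroyed (no cloning).
        resent = []
        for bit, basis in sent:
            m_basis = random.choice("+x")
            m_bit = bit if m_basis == basis else random.randint(0, 1)
            resent.append((m_bit, m_basis))
        sent = resent

    # Bob measures; a basis mismatch gives a random outcome.
    bob_bits = [bit if basis == b else random.randint(0, 1)
                for (bit, basis), b in zip(sent, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases agree.
    kept = [i for i in range(N) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return errors / len(kept)

print(run_bb84(False))  # quiet channel: 0.0
print(run_bb84(True))   # intercept-resend: roughly 0.25
```

The 25% figure follows from two halvings: Mallory guesses the wrong basis half the time, and each wrong guess randomizes the bit Bob receives.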
Practical equipment has to cope with quantum noise and a certain amount of acceptable error. The quantum channel is not assumed to be perfect, and Bob may not even receive some bits. Bob may have incorrectly measured some bits, and some might have lost coherence down the channel. Alice and Bob can sift through the data, agree on which bits should have been measurable by Bob, and find out which were not received. What remains is either correctly measured, or an error in measurement.
Information reconciliation
Once Alice and Bob each have a key, they would like to correct the errors. Every time they compare bits, Mallory could learn more about the secret key. The cascade protocol (BS94) minimizes this exposure.
If you have ever played the numberguessing game, the principle used is very similar. This is how the number guessing games works:
 I am thinking of a number between 1 and 100.
 You ask "is it more or less than 50"?
 I say, "More". (If I say neither, then you know the answer).
 You ask "is it more or less than 75"?
 I say, "More".
 You ask, "Is it more or less than 87?"
 I say, "Less." You now know the number is between 75 and 87...
 You ask, "Is it more than 81?"
 I say, "Less".
In only four queries, the search space has reduced to six. This is much faster than starting at 1 and comparing each number, and it exposes less information to an eavesdropper.
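The speed-up is logarithmic in the size of the search space; a minimal sketch:

```python
import math

def queries_needed(lo: int, hi: int) -> int:
    """How many more/less questions pin down one value in [lo, hi]."""
    return math.ceil(math.log2(hi - lo + 1))

print(queries_needed(1, 100))   # 7: halving beats checking all 100 numbers one by one
```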
Alice and Bob can do this with a binary key. Alice and Bob publicly agree on a randomization of their copies of the key. This is done to distribute errors that might have been clumped. The keys are grouped into blocks. Alice and Bob compute the parity for each block. If the parities match, they move on to the next block. When the parities differ, they perform the binary search illustrated above to home in on, and correct, the error, just by Alice or Bob (but not both) flipping the bit that is wrong. They then discard one of the bits in the block; it does not matter which one, but typically they decide on the last bit of the block. By discarding one of the bits, any information that Mallory has gained by seeing the parity is now lost. They record which blocks had errors and repeat the process until no errors are found. This terminates one round.
Alice and Bob apply random reordering to their keys such that both keys end up in the same permutation. This redistributes any remaining errors to maximize the chance that they are found as single bit errors in each block.
Because the previous round removed errors, fewer remain, so Alice and Bob can increase the block size. This is acceptable because, with fewer errors, most blocks will pass the parity check, leaving only a few blocks to search. There is little point in using a small block size when there are few errors.
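A sketch of the error-location step of one round (simplified; the real cascade protocol also randomizes the key, discards a bit per corrected block and revisits earlier blocks). Only sub-block parities, the values Alice and Bob would actually reveal over the public channel, are compared:

```python
def locate_error(alice: list, bob: list) -> int:
    """Binary-search for the index of a single-bit error inside a block
    whose overall parities disagree, revealing only sub-block parities."""
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Compare the parity of the left half; descend into the half that differs.
        if sum(alice[lo:mid]) % 2 != sum(bob[lo:mid]) % 2:
            hi = mid
        else:
            lo = mid
    return lo

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1                 # a single-bit error in Bob's copy
i = locate_error(alice, bob)
bob[i] ^= 1                 # Bob flips the offending bit
assert bob == alice
```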
When the algorithm terminates:
 Alice and Bob know statistically how likely it is that their keys match. This means they have done enough rounds to be very sure that there are no more errors.
 Alice and Bob know how much information has been revealed to Mallory.
Privacy Amplification: a simplified example
Information reconciliation minimizes the errors between Alice's and Bob's digits, so they are highly confident that the key is identical at each end; but this process could have given Mallory some extra information. Some of the errors might have been caused by Mallory sampling some of the bits, and Mallory gains a statistical advantage by noting the parities of blocks used during reconciliation. Other errors will be due to natural decoherence. No matter the source of error, Alice and Bob can calculate how much information Mallory might have gained.
Privacy amplification is a way to distill a smaller but more secure key from the existing key that Alice and Bob have. The tools to do this vary, but we will start with a simple example and then describe a more robust method.
Let's assume that Alice and Bob think Mallory could know 3 bits, and might know two more from statistical analysis of the parity checking. He would need to guess the rest.
(A,B) = {0101110010010100110101111010}
(M) = {1101110111001111001110101011}
In the above, Mallory knows the first three bold bits, and statistically guessed the last two bold bits. The rest are pure guesses and, provided the original key comes from a good random source, his best guesses will reliably be no better than 50:50.
The following method will shrink the key but also shrink Mallory's knowledge of the key:
Alice and Bob might decide to group their random bits into blocks and XOR (add, modulo 2) each block down into a one-bit register, like this:
(A,B) = {0101 1100 1001 0100 1101 0111 1010}
(A,B)'= {0 0 0 1 1 1 0 }
Mallory watches this crude hashing and applies the same:
(M) = {1101 1101 1100 1111 0011 1010 1011}
(M)'= {1 1 0 0 0 0 1 }
In this case, all but one bit of Mallory's version of the key is now incorrect. On average his key would be no better than a random guess.
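The compression above is easy to model. This Python sketch (illustrative names, not part of any standard library) XORs each four-bit block down to a single parity bit and reproduces the keys from the example:

```python
def amplify(bits, block=4):
    """Compress each block of `block` bits to its XOR (parity).
    One unknown bit in a block makes the whole output bit a
    coin flip for an eavesdropper with partial knowledge."""
    out = []
    for i in range(0, len(bits) - block + 1, block):
        p = 0
        for b in bits[i:i + block]:
            p ^= b
        out.append(p)
    return out

key_ab = [int(c) for c in "0101110010010100110101111010"]
key_m  = [int(c) for c in "1101110111001111001110101011"]
print(amplify(key_ab))  # → [0, 0, 0, 1, 1, 1, 0]
print(amplify(key_m))   # → [1, 1, 0, 0, 0, 0, 1]
```

Comparing the two outputs confirms the claim in the text: six of Mallory's seven compressed bits are wrong.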
This simple hash is insufficient for general use because if Alice and Bob use it repeatedly, there will be some statistical correlation from key to key which could, over time, allow Mallory to make better guesses.
Another evasive action that Alice and Bob can take is to abort the particular key exchange and start a new one. Obviously Mallory could effect a sophisticated denial-of-service attack on Alice and Bob this way, but it would be easier to just cut the cable or put something in the path of a free-space transfer, so QKE's defense against a DoS attack is not particularly different from that of many other systems.
A universal hash function.
Cardinality
Given a set of things, like M:{a,5,t,f,dog,cat,w,3}, the cardinality is how many elements are in the set. For this set M, the cardinality is written |M| = 8. Don't confuse this with the same notation used for absolute value; you need to read it in context.
What is a universal hash function?
Please take a look at another article to find out in detail what a cryptographic hash function is. We can then proceed to define a universal hash function. Any hash function takes the elements of a set M and applies an algorithm h to each element of M to give a new set H.
h(M)→H
The characteristics of h vary strongly depending on the algorithm. A universal hash function does not have the required attributes to be used as a cryptographic hash, but it has certain features that are useful for privacy amplification.
Usually, |M| > |H|, since the main aim is to take a message and map it to a value for use as a database index, or as a checksum or cryptographic fingerprint. In our case, we will use a hash to get a smaller but more secure key.
For any two distinct elements, x_{1} and x_{2}, which are both found in M, if they both give the same hashed value, then this is a collision. An example is a post office that maps two separate and independent people's addresses to the same post office box number.
In cryptography (and the post office!), collisions are bad. They are undesirable because Mallory can use a collision to change the content of a message and fool algorithms which rely on its hash being unique. So we need to make it unreasonably difficult to find collisions.
For privacy amplification, it is useful to use a hash function which has a proven upper limit on how many collisions it can produce. It is important to know this upper bound because then we can use the feature to guarantee an arbitrarily small amount of information leakage.
In a universal hash function, the probability that two distinct inputs collide is at most 1/N, where N is the cardinality of the hashed set. For example, let's say you have a 4-bit register. This is small enough to write out all the values M can take if M is constructed from the 4-bit register:
M={0000,0001,0010,...1111}
In this case the largest N you would need is N = 2^{4} = 16, and for N = 16 there need be no collisions, because N equals the cardinality of M:
|M| = N.
But we need to compress M, and consequently Mallory's knowledge about M. This forces N < |M|, which will of course cause some collisions.
It turns out that there are a great many hash functions which can constrain the collision probability to 1/N.
Luckily, these are easy to find, and it's easy to define a function that takes random parameters and gives a new universal hash function in every case. This means that you have, at your disposal, the ability to grab a new universal hash function every time privacy amplification is performed. By doing this, Mallory cannot make use of the partial information that he might have obtained (by any method).
A universal hash function has the property of collision resistance.
modulo
When you see x mod(y), this describes clock arithmetic. A clock moves from 00:00 hours to 23:00 hours, then back to zero again. Described mathematically, the notation would be h mod(24): h could be any integer, but mod(24) throws away the multiples of 24 and leaves only the remainder. So 24 mod(24) is 0, and 25 mod(24) is 1.
Example collision-resistant hashes
Here is an example of a universal hash function with primitive collision resistance:
Given x : 0011 0110 0001 0110 1100 and a table size m = 97,
let
x_{1} = 0011
x_{2} = 0110
x_{3} = 0001
x_{4} = 0110
x_{5} = 1100
Pick integer values for a_{1}, a_{2}, a_{3}, a_{4}, a_{5} uniformly at random from {0, 1, ..., m−1}.
This function h(x) has the desired characteristics
h(x) = ( a_{1}x_{1} + a_{2}x_{2} + a_{3}x_{3} + a_{4}x_{4} + a_{5}x_{5} ) mod m.
and so does this:
h(k) = ((ak+b)mod p)mod m
Where m is the table size, p is a prime larger than any key k, a and b are chosen at random (with a ≠ 0), and k is the message to be hashed.
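The second family above, h(k) = ((ak + b) mod p) mod m, is simple to sketch in Python. The particular prime and the function names here are my own illustrative choices; drawing fresh random (a, b) each time models picking a brand-new universal hash function for every privacy-amplification round:

```python
import random

P = 2**61 - 1  # a Mersenne prime, larger than any key we will hash

def random_universal_hash(m):
    """Draw one random member of the family h(k) = ((a*k + b) mod p) mod m."""
    a = random.randrange(1, P)  # a must be nonzero
    b = random.randrange(0, P)
    return lambda k: ((a * k + b) % P) % m

h = random_universal_hash(97)
digest = h(0b00110110000101101100)  # the 20-bit example key as an integer
```

Over the random choice of (a, b), any two distinct keys collide with probability at most about 1/m, which is exactly the bound privacy amplification relies on.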
Rényi entropy
How well does privacy amplification work?
Provided that Mallory has only partial information, and Alice and Bob can bound what proportion he holds (they don't need to know the actual bits), they can compress Mallory's knowledge of the key to an arbitrarily small value. Furthermore, Mallory's remaining knowledge shrinks exponentially in the number of key bits Alice and Bob sacrifice, so the effort required of them is modest.
Alice and Bob need to be able to measure entropy. The subject of entropy is fairly extensive. We shall start by making it clear what is meant by entropy in the field of information theory.
Given a random variable, you might know some of it. Say, given 111001, you know the first and last bits only.
This means you know 2 of 6 bits; the remaining 4 bits are uncertain. In fact each bit may be more or less certain than the next, so in general a formula that measures this must treat each bit individually. (In practice, you don't need to use bits; you can use symbols from any given alphabet.) It is possible that Mallory is 90% sure that the first bit is a 1, 60% sure that the second bit is a 0, and so on.
A general measure of entropy is the Rényi entropy,
H_{α}(X) = (1/(1−α)) log_{2}( Σ_{i} p_{i}^{α} ),
where suitable choices of the parameter α give special forms of the formula. One of those forms is the Shannon entropy (the limit α → 1), which is used to calculate data rates in transmission channels. Another is the collision entropy, which is the one we find useful for privacy amplification. Setting α = 2 reduces the formula to
H_{2}(X) = −log_{2} P( X == Y )
which is read: "when alpha = 2, the Rényi entropy is the collision entropy, the negative logarithm of the probability that X equals Y, where Y is an independent copy of X."
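Since the probability that two independent samples of X agree is the sum of the squared probabilities, the collision entropy can be computed directly from a distribution. A minimal sketch (the function name is illustrative):

```python
import math

def collision_entropy(probs):
    """H2(X) = -log2 P(X == Y) for independent X, Y with distribution probs."""
    p_collide = sum(p * p for p in probs)  # P(X == Y) = sum of p_i^2
    return -math.log2(p_collide)

# A fair bit: P(collision) = 0.25 + 0.25 = 0.5, so H2 = 1 bit.
print(collision_entropy([0.5, 0.5]))  # → 1.0
# A biased bit (Mallory is 90% sure of its value) carries less entropy.
print(collision_entropy([0.9, 0.1]))
```

The drop below 1 bit for the biased case quantifies exactly how much of the key Mallory's partial knowledge has eaten.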