This is an approximate transcription of my physics talk from Boskone, titled "Spooky Action at a Distance," in which I attempted to give a reasonable explanation of the Einstein-Podolsky-Rosen ("EPR" hereafter) paper and Bell's Theorem. This was sort of a follow-on from last year's "Weird Quantum Phenomena," meant to highlight a specific class of weird quantum phenomena.
There's some SF relevance to the ideas involved in EPR and Bell's Theorem. A number of authors have name-checked the idea, most notably Charlie Stross citing "entangled particles" as the mechanism for FTL communications in his Singularity Sky and Iron Sunrise.
The central issue here is the way that quantum mechanics allows instantaneous correlations between particles over very long distances. This is an effect that Albert Einstein, along with Boris Podolsky and Nathan Rosen, attempted to use to argue that quantum theory as usually formulated couldn't possibly be right. Some thirty years later, John Bell showed that it was possible to test this experimentally, and almost twenty years after that, Alain Aspect did a series of experiments that showed fairly conclusively that Einstein was wrong, and the world is a whole lot stranger than it looks.
There are a million different systems that people use to explain the EPR paradox and Bell's Theorem, but I'll try to stick with photon polarization throughout this discussion. The key fact that you need to understand about polarized light is that the probability of a beam of light passing through a polarizing filter depends on the angle between the polarization of the light and the axis of the polarizer. If they're aligned (vertical light hitting a vertical polarizer), there's a 100% chance of the light being passed through; if they're at 90 degrees (horizontally polarized light hitting a vertical polarizer), the chance of transmission is zero; and at angles in between, the probability falls somewhere in between (it goes as the square of the cosine of the angle between them). At an angle of 45 degrees, the probability of transmission is 50%.
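If it helps to see that rule as code, here's a minimal Python sketch of the cosine-squared rule (Malus's law). The function names are just made up for illustration:

```python
import math
import random

def transmission_probability(photon_angle_deg, polarizer_angle_deg):
    """Malus's law: the transmission probability is cos^2 of the angle between
    the photon's polarization and the polarizer axis."""
    delta = math.radians(photon_angle_deg - polarizer_angle_deg)
    return math.cos(delta) ** 2

def photon_passes(photon_angle_deg, polarizer_angle_deg):
    """Simulate a single photon: True if it makes it through the polarizer."""
    return random.random() < transmission_probability(photon_angle_deg, polarizer_angle_deg)

for angle in (0, 45, 90):
    # aligned: 1.0, at 45 degrees: 0.5, crossed: ~0.0
    print(angle, transmission_probability(angle, 0))
```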
If we think about everything in terms of horizontal and vertical polarizers, we can think of light at an arbitrary polarization as a combination of vertically and horizontally polarized light. In quantum terms, we can write this as a sum of polarization states:
ψ = x ↑ + y →
(where ↑ represents vertical polarization, → represents horizontal polarization, and x and y are numbers whose squares give the relative probabilities of finding ↑ and →).
Now, you can look at this sort of superposition state as being fundamentally indeterminate. Until the instant that you measure it, the polarization is neither vertical nor horizontal, but some combination of the two. When you measure the polarization, though, you'll find one or the other, and from that point on, the polarization is whatever you found with your measurement.
One of the weird facts about this is that it's fundamentally impossible to determine exactly what the polarization state of a single photon is. That is, there's no way to measure both x and y. If you have a large number of identically prepared photons, you can measure them all, and determine the probability of getting vertical polarization, but for a single photon with arbitrary polarization, there's no way to determine exactly what the superposition state is. With a single measurement, all you know is that you measured either horizontal or vertical polarization, but the actual outcome of the measurement is random, so you don't learn anything about x or y.
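Here's a toy simulation of that point (a sketch of my own, assuming real amplitudes x and y for simplicity): a single photon just spits out one random answer, and only the statistics over many identically prepared photons tell you anything about the superposition.

```python
import random

def measure_vh(x, y):
    """Measure one photon in the state x*up + y*right in the vertical/horizontal
    basis.  Returns 'V' with probability x^2 and 'H' with probability y^2
    (x and y taken to be real, with x^2 + y^2 = 1)."""
    return 'V' if random.random() < x ** 2 else 'H'

x, y = 0.8, 0.6          # an arbitrary superposition: a 64% / 36% split

one_shot = measure_vh(x, y)                    # a single 'V' or 'H' -- tells you almost nothing
many = [measure_vh(x, y) for _ in range(100_000)]
print(one_shot)
print(many.count('V') / len(many))             # close to 0.64, but only because we had many copies
```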
Philosophically, this is a somewhat disturbing state of affairs. Physicists before quantum theory were accustomed to thinking of objects having definite properties, and measuring them at their leisure, but quantum mechanics destroys that certainty about the properties of matter, and replaces it with a fundamental indeterminacy. Einstein was among the physicists who were profoundly bothered by this, and he had a long-running argument with Niels Bohr about the issue. Einstein would construct some ingenious method for determining both the position and the momentum of some particle, counter to the uncertainty principle (seen up in the site banner), and Bohr would be temporarily set back. After a while, though, Bohr always found some counter-argument, showing that the process of measuring the position would destroy information about the momentum, and vice versa.
(Somebody famously remarked that Einstein was much smarter than Bohr, but Bohr had the advantage of being right, and thus always won the arguments...)
After a couple of defeats, Einstein teamed up with Podolsky and Rosen to come up with an ingenious way of attacking the problem. Instead of a single system, they proposed to do experiments with an entangled state of two particles. In polarization terms, the relevant state looks like this:
ψ = ↑_A ↑_B + →_A →_B
The idea here is that you have two experimenters, canonically named Alice and Bob, who each receive one photon of this particular state (designated by the subscripts). The states of these photons are "entangled," in that measuring one of them lets you know the state of the other-- if Alice's photon is vertical, Bob's is also vertical, and if Bob's photon is horizontal, Alice's is also horizontal. Either of them has a 50% chance of measuring vertical polarization, but if one of them measures vertical polarization, they know with 100% certainty that the other will get the same result.
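One (by no means unique) way to simulate those correlations, sticking with the toy-Python approach and treating Alice's measurement as "collapsing" Bob's photon onto her axis (the function name is again made up for illustration):

```python
import math
import random

def measure_entangled_pair(alice_angle_deg, bob_angle_deg):
    """Simulate one photon pair in the state up_A up_B + right_A right_B.
    Returns (alice_result, bob_result): 1 for 'passed the polarizer', 0 for 'blocked'."""
    # Alice's outcome is 50/50 no matter where her polarizer points.
    alice = 1 if random.random() < 0.5 else 0
    # After her measurement, Bob's photon acts as if polarized along (or
    # perpendicular to) Alice's polarizer axis, so Malus's law applies.
    delta = math.radians(bob_angle_deg - alice_angle_deg)
    p_bob_passes = math.cos(delta) ** 2 if alice == 1 else math.sin(delta) ** 2
    bob = 1 if random.random() < p_bob_passes else 0
    return alice, bob

# Same angle: the results always agree, even though each one is individually random.
results = [measure_entangled_pair(0, 0) for _ in range(10_000)]
print(all(a == b for a, b in results))   # True
```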
The EPR argument that quantum theory is incomplete centers around this correlation. After all, nothing was said about the relative positions of Alice and Bob, or the timing of their measurements. Alice and Bob could be a million miles apart, and make their measurements within a nanosecond of one another, and they will always find the same correlation. Furthermore, the correlation does not depend on what angle they use-- if they set their polarizers at 45 degrees, they'll still find their results to be absolutely correlated, because you can re-write the horizontal and vertical polarizations as a sum of two other orthogonal angles-- up and to the left plus up and to the right, say (HTML doesn't provide the appropriate arrows).
Einstein, Podolsky, and Rosen argued that this sort of correlation (they didn't talk about it in terms of polarization, but the argument is the same) points to a gap in orthodox quantum theory. The money quote is:
If, without in any way disturbing a system, we can predict with certainty the value of a physical quantity, then there exists an element of physical reality corresponding to that quantity.
In other words, given that Alice and Bob always get the same answers, no matter how far apart they are, and how quickly they make the measurements, then the state of the photons must be defined in advance. Otherwise, there's no way for Alice and Bob to always get the same answer-- since they can be separated by arbitrary distance, there's no way for a measurement made by Alice on her photon to affect the measurement that Bob makes on his photon. Any signal from one to the other would need to travel at speeds greater than the speed of light, an idea that Einstein derided as "spooky action at a distance," one of the great phrases in the history of physics.
The sort of theory that the EPR paper is pushing is called a Local Hidden Variable theory. "Hidden Variable" because there is some property associated with the photon state that determines the outcome of the measurement, but is unknown to the experimenters. "Local" because the property is assumed to be associated with individual particles, and measurements made on separated particles are assumed to be completely independent.
Locality is also the loophole in the argument, from the perspective of regular quantum theory. The solution, in very rough terms, is to say that quantum mechanics is non-local-- that the measurement on Alice's photon does affect the state of Bob's photon, because the two photons are really one big system, extending over a large distance. The EPR condition of "without in any way disturbing a system" is not met, then, because Alice's measurement disturbs Bob's photon, in some spooky long-distance way. In fact, you can say that Alice's measurement determines the state of Bob's photon, because the state is indeterminate until it's measured.
Bohr made a slightly garbled version of this counter-argument a few months after the EPR paper was published in 1935, and things pretty much sat there for thirty years. People who believed quantum mechanics thought that the theory was non-local, while people who were unhappy with the theory believed in local hidden variables, but nobody saw any way to resolve the debate until 1964.
In 1964, the Northern Irish physicist John Bell came up with a way to determine whether the world could really be described by a local hidden variable theory. The key realization was that there's no way to sort things out with a single measurement, but if you take a combination of different measurements, you find that there are limits on the possible outcomes using a local hidden variable theory.
It's easiest to illustrate this with a toy model. We imagine that Alice and Bob are each sent a photon from an entangled pair, and each of them has a polarization detector that can be set to one of three angles, a, b, or c. They record the results of each measurement as either a 0 or a 1-- for example, if angle a is vertical, detection of a vertically polarized photon would be a 1, while detection of a horizontally polarized photon would be a 0. Alice and Bob randomly vary the angles of their detectors, and repeat the experiment many times, so they eventually explore all of the possible combinations of measurements.
If you set it up with the states correlated so that Alice and Bob get opposite results when they measure the same angle, the question you ask is "What is the probability that Alice and Bob get opposite results when they happen to choose different angles?" It doesn't matter what the angles are-- just what the overall probability is for Alice getting a 1 when Bob gets a 0, or vice versa.
In a local hidden variable theory, each photon must be carrying three numbers, describing the results of a measurement along each of the three angles. This means there are a total of eight possible states for the two-photon system, which we can enumerate in a handy table:
| State | Alice: a | Alice: b | Alice: c | Bob: a | Bob: b | Bob: c |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 0 | 0 | 0 |
| 2 | 1 | 1 | 0 | 0 | 0 | 1 |
| 3 | 1 | 0 | 1 | 0 | 1 | 0 |
| 4 | 1 | 0 | 0 | 0 | 1 | 1 |
| 5 | 0 | 1 | 1 | 1 | 0 | 0 |
| 6 | 0 | 1 | 0 | 1 | 0 | 1 |
| 7 | 0 | 0 | 1 | 1 | 1 | 0 |
| 8 | 0 | 0 | 0 | 1 | 1 | 1 |
If we assume that the eight possible states are equally likely, it turns out that there's always a 50% chance of Alice and Bob getting opposite results. For example, if Alice sets her detector to a, and Bob puts his at b, then states 1 and 2 give Alice a 1 and Bob a 0, while states 7 and 8 give Alice a 0 and Bob a 1. If Alice sets her detector to c and Bob to a, states 1, 3, 6, and 8 give opposite results, and so on. For any combination of two different angles, there are four states that give opposite results.
If you fiddle with the angles and the distribution of states, you can push that down a little bit, but you're constrained by the fact that Alice and Bob need to have a 50% chance of measuring 1 for each choice. The lowest possible probability you can get for Alice and Bob finding opposite results turns out to be 1/3rd, or 33%.
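Here's a short script that checks both of those numbers for the toy model, counting only the runs where Alice and Bob happen to choose different angles (same-angle runs are opposite by construction): the uniform distribution over the eight states gives 50%, and dropping states 1 and 8 gets you down to the 1/3 minimum.

```python
from itertools import product

# The eight local-hidden-variable states from the table: Alice's predetermined
# answers for angles (a, b, c); Bob's answers are the opposite of Alice's.
states = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (1, 0, 0),
          (0, 1, 1), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
angles = range(3)   # 0, 1, 2 stand for a, b, c

def p_opposite(weights):
    """Probability of opposite results, averaged over the weighted states and
    over all setting pairs where Alice and Bob choose *different* angles."""
    total = prob = 0.0
    for (alice_vals, w), (i, j) in product(zip(states, weights), product(angles, angles)):
        if i == j:
            continue                          # same-angle runs: always opposite by construction
        bob_vals = tuple(1 - v for v in alice_vals)
        total += w
        if alice_vals[i] != bob_vals[j]:
            prob += w
    return prob / total

print(p_opposite([1] * 8))                    # uniform distribution: 0.5
print(p_opposite([0, 1, 1, 1, 1, 1, 1, 0]))   # drop states 1 and 8: 0.333...
```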
Quantum theory, on the other hand, gives a different prediction. In the quantum case, the correlations can be much stronger, because (loosely speaking) Alice's measurement determines the state of Bob's photon. For the right choice of angles, the quantum probability can be as low as 1/4th, or 25%. 25% is easily distinguishable from 33%, so Alice and Bob have a way to determine whether it's possible for this local hidden variable theory to describe their photons.
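For comparison, here's a minimal sketch of the quantum prediction for the same game, assuming detector angles of 0, 60, and 120 degrees (one common choice-- the 25% figure depends on that particular choice). For the anti-correlated pair, the quantum probability of opposite results at settings separated by an angle Δ is cos²Δ, which is 1 for the same setting and 1/4 for each different-setting pair here.

```python
import math
from itertools import product

angles_deg = [0, 60, 120]   # an assumed, commonly used choice of the three settings

def p_opposite_quantum(a_deg, b_deg):
    """Quantum prediction for the anti-correlated pair: probability that Alice
    and Bob get opposite results when their polarizers differ by (a - b)."""
    return math.cos(math.radians(a_deg - b_deg)) ** 2

# Same setting: always opposite, as required.
print([p_opposite_quantum(a, a) for a in angles_deg])          # [1.0, 1.0, 1.0]

# Averaged over the setting pairs where Alice and Bob differ: 1/4, below the 1/3 LHV bound.
diff_pairs = [(a, b) for a, b in product(angles_deg, angles_deg) if a != b]
print(sum(p_opposite_quantum(a, b) for a, b in diff_pairs) / len(diff_pairs))   # ~0.25
```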
This is a single concrete example, but Bell's theorem is stronger than that. What Bell did was to show that any conceivable local hidden variable theory is limited in the possible results it can predict for a set of measurements. There can be a wide range of different hidden variable theories, each predicting a different overall probability, but they'll be bounded in some way-- there'll be a minimum or a maximum value of some parameter. These relationships are colloquially known as "Bell Inequalities," and there are lots of different versions for different sets of measurements.
Quantum theory is subject to different limits, due to the stronger correlations possible with non-local states, and for the right choice of angle, you can find a probability that is larger or smaller than the limit for a local hidden variable theory. The game, then, is to find a set of measurements where the quantum prediction is different than the classical prediction, and run the experiment.
Of course, it's pretty difficult to set up the necessary experiments, so almost twenty years passed before anybody was able to do a conclusive test. The first group to get this done was a French group led by Alain Aspect, who earned a spot in the Top Eleven Physics Experiments of All Time as a result. Aspect and his co-workers actually did three experiments, each more sophisticated than the last, but each experiment left a loophole. The loopholes get smaller as you go on, but to the best of my knowledge, they haven't been fully closed.
(Much of the following discussion is shamelessly cribbed from this post back in October, because this has gotten really long, and I'm lazy...)
The first experiment sets up the basic parameters. Aspect and his co-workers used an atomic cascade source that they knew would produce two photons in rapid succession. According to quantum mechanics, when these two photons are headed in opposite directions, their polarizations are correlated in exactly the same way as the EPR states. Each photon could be polarized either horizontally or vertically, but no matter what polarization it has when it's measured, the other will be measured to have the same polarization.
Since the goal here is to measure correlations between polarizations, Aspect set up two detectors, with polarizers in front of each detector, and measured the number of times that the two detectors each recorded a photon for different settings of the polarizers. The particular quantity they were measuring is limited to be between -1 and 0, according to Bell's theorem, and they measured a value of 0.126, with an uncertainty of 0.014. Their results showed that the measured correlation was nine standard deviations outside the limits Bell's theorem sets for LHV theories, which means there's something like one chance in a billion of that happening by accident.
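(Just to show where the "nine standard deviations" figure comes from, it's straight arithmetic on the numbers quoted above:)

```python
measured, uncertainty = 0.126, 0.014
lhv_upper_bound = 0.0          # the Bell's-theorem limit for this quantity in an LHV theory
print((measured - lhv_upper_bound) / uncertainty)   # ~9
```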
So, LHV theories are dead, right? Well, not exactly. There's a loophole in the experiment, because the detectors weren't 100% efficient, and there was some chance that they would miss photons. Since the experiment used only a single detector and a single polarizer for each beam, they could only infer the polarization of some of the photons-- a vertically polarized photon sent at a vertically oriented polarizer produces a count from the detector, while a horizontally polarized photon produces nothing. In some cases, then, the absence of a count was the significant piece of information, and was taken as a signal that the polarization was horizontal when the polarizer was vertical.
This leaves a small hole for the LHV theorist to wiggle through, as it's possible that either through bad luck or the sheer perversity of the universe, some of those not-counts were vertically polarized photons that just failed to register. If you posit enough missed counts, and the right counts being missed, you can make their results consistent with LHV theories.
So, Aspect and his co-workers did a second experiment, to close the detector efficiency loophole. In this experiment, they used four detectors, two for each photon, and polarizing beamsplitters to arrange it so that each photon was definitely measured. A vertically polarized photon would pass straight through the beamsplitter, and fall on one detector, while a horizontally polarized photon would be reflected, and fall on the other detector. Whatever the polarization, and whatever the setting of the polarizer, each photon will be detected somewhere, so there are no more missed counts.
They did this experiment, and again found results that violate the limits set by Bell's Theorem-- they measured a value of 2.697 with an uncertainty of 0.015 for a quantity that Bell's Theorem limits to be less than or equal to 2. That's forty standard deviations, and the probability of that occurring by chance is so small as to be completely ridiculous.
So, LHV theories are dead, right? Well, no, because there's still a loophole. The angles of the polarizers were set in advance, so it's conceivable that some sort of message could be sent from the polarizers to the photon source, to tell the photons what values to have. If you allow communication between the detectors and the source, you can arrange for the photons to have definite values, and still match the quantum prediction. In our toy model above, this amounts to choosing the right distribution of the possible hidden-variable states to allow you to match the quantum prediction, and that's always possible if you have advance knowledge of the angles to be measured.
So, they did a third experiment, again using four detectors. This time, rather than using polarizing beamsplitters, they put fast switches in each of the beams, and sent the photons to one of two detectors. Each detector had a single polarizer in front of it, set to a particular angle. The switches were used to determine which detector each photon would be sent to, which is equivalent to changing the angle of the polarizer. And the key thing is, the switch settings were changed very rapidly, so that the two photons were already in flight before the exact setting was determined. A signal from the detector to the source would need to go back in time in order to assign definite values to the photon polarizations, which isn't allowed for a LHV theory.
This version of the experiment, like the other two, produced a violation of the limits set by Bell's theorem for LHV theories. The violation is smaller-- 0.101 with an uncertainty of 0.02, or only five standard deviations-- because the experiment is ridiculously difficult, but it's still not likely to occur by chance.
So, LHV theories are finally dead, right? Not really, because the experiment only used a single polarizer for each detector. This re-opens the detector efficiency loophole, and lets LHV theories sneak by in the missed counts.
Aspect quit at this point, though, because closing both of these loopholes at once would require eight detectors to go with the fast switches, and, really, who needs the headache? More recent experiments have improved the bounds, but to the best of my knowledge, none have completely closed all the possible loopholes (and there are plenty of them).
What does all this mean? Well, it means that the world is a really weird place. This quantum business of photon states being indeterminate until they're measured is not just some interesting philosophical quirk, it's experimentally tested reality. If you want to describe the universe accurately, you need a non-local theory to do it. Weird as it seems, there is some spooky action at a distance.
(It should be noted that Bell's theorem doesn't rule out all hidden-variable theories, just the local ones. You can have a hidden variable theory, but it needs to include some non-local element to match the Aspect experiments. David Bohm actually constructed a theory of quantum mechanics in which all particle states are perfectly well determined, but there is a non-local "quantum potential" that determines those properties. The whole thing is pretty weird, but it's perfectly consistent.)
So, does this give us a method for sending signals faster than the speed of light? No. The measurements made by Alice and Bob are always perfectly correlated, but correlation does not equal communication. The outcomes of those measurements are random-- each photon will give a "0" or a "1" with 50% probability-- so there's no information transfer. Alice and Bob each end up with a random string of 1's and 0's, and they only know that they have the same string of random numbers when they get back together later on, and compare their results using slower-than-light communications.
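The same sort of toy simulation as before (function name again just for illustration) shows why Alice can't use this to send Bob a message: whatever angle Alice picks, Bob's own string of results is a plain 50/50 coin flip, and the correlation only shows up when the two strings are laid side by side.

```python
import math
import random

def measure_pair(alice_angle_deg, bob_angle_deg):
    """Same toy model as before: Alice's result is random, and Bob's photon then
    follows Malus's law relative to Alice's axis."""
    alice = 1 if random.random() < 0.5 else 0
    delta = math.radians(bob_angle_deg - alice_angle_deg)
    p_bob = math.cos(delta) ** 2 if alice else math.sin(delta) ** 2
    return alice, (1 if random.random() < p_bob else 0)

# Bob always measures at 0 degrees; Alice tries to "send a bit" by choosing her angle.
for alice_angle in (0, 45, 90):
    bob_results = [measure_pair(alice_angle, 0)[1] for _ in range(100_000)]
    print(alice_angle, sum(bob_results) / len(bob_results))   # ~0.5 every time
```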
If there were a way for Alice to decide which result she would get, then this would be a viable FTL communication method. Now that Princeton is closing down their ESP research lab, though, you won't find many people who believe Alice can determine the polarization of a photon just by wishing really hard for a "1." I'm not too bothered by Stross's name-checking of entanglement as a resource for this, though-- I can pretend that the super godlike posthumans of his universe have managed to concoct a deeper theory that allows them to manipulate the outcomes of quantum measurements, and actually send messages.
If you can't use it to send messages, what is this good for? Well, it turns out there is one application where it comes in really handy for Alice and Bob to have identical strings of completely random numbers, and that's cryptography. The process is called "quantum key distribution," and with it you can not only ensure that Alice and Bob each have a random string to use in encrypting and decrypting a message, but also that nobody else can "listen in" and determine the key for themselves.
But this is way too long already, so explaining quantum cryptography will have to wait for another post. Or, possibly, next Boskone...
"A vertically polarized photon would pass stright through the beamsplitter, and fall on one detecotr, while a vertically polarized photon would be reflected, and fall on the other detector."
small typo.
Actually, several typos. Fixed now, thanks.
The Wikipedia article on Loopholes in Bell test experiments looks like it's been through a wringer and isn't in a very good state yet. One section is still marked as needing a complete rewrite, and the referencing is not too good either. I'm not sure the article can be a worthwhile resource for someone who doesn't already know the material. . . .
"The Wikipedia article on Loopholes in Bell test experiments looks like it's been through a wringer and isn't in a very good state yet. One section is still marked as needing a complete rewrite, and the referencing is not too good either."
I think I remember it being a little better back when I first linked that, but I could be wrong. But yeah, it's a mess.
"I'm not sure the article can be a worthwhile resource for someone who doesn't already know the material. . . ."
I find that's true of most physics/ math articles on Wikipedia. They tend to be hyper-technical in a way that's not particularly useful to the layman.
Sandu Popescu came in to our school the other week to give a "popular lecture" on this stuff. What really struck me out of what he said was, "Almost all states are nonlocal." (The exceptions have measure zero.) Nonlocality isn't this weird property that you have to set up a really odd experiment to get. It's the norm.