Quantum mechanics is weird.

To be a physicist you have to accept that the microscopic, underlying reality of nature looks and behaves very differently from the physics we are used to in our everyday lives. Many physicists have struggled to accept this, including Einstein.

Einstein disliked the fact that quantum mechanics, a theory he had helped to lay the foundations for, seemed to show that nature does not obey clear deterministic laws and that only the probabilities of certain events can be predicted, with a quantum system having no definite properties until it is observed. The act of observing disturbs the quantum system, causing it to immediately collapse into one of the possible states. Einstein famously quipped:

God does not play dice with the universe.

In 1935, Einstein, along with his colleagues Podolsky and Rosen, published a paper that Einstein thought would sound the death knell for messy probabilities and restore an orderly, deterministic universe. The paper had the very click-baity title: “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”

The answer, Einstein, Podolsky, and Rosen argued, was “No”.

Their working showed that, for other aspects of quantum mechanics, such as the famous uncertainty principle, to hold true, the following must happen: if you created two particles whose quantum states were dependent on each other, separated them by a large distance, and then observed one, collapsing its state, the other particle would instantly collapse into the corresponding state. This requires the information about the collapsed state of the first particle to be transmitted instantaneously to the other particle, and since Einstein had famously shown that nothing can travel faster than the speed of light, this must be impossible.
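To see how strange this is, here is a tiny sketch (my own illustration, not anything from the EPR paper) of what the statistics look like when both particles of an entangled pair are measured along the same axis: each individual result is a fair coin flip, yet the two results always disagree.

```python
# A minimal sketch, assuming both particles are always measured along
# the same axis. Each outcome on its own is a 50/50 coin flip, yet the
# two outcomes always come out opposite.
import random

def measure_singlet_pair():
    """One entangled pair, both sides measured along the same axis."""
    alice = random.choice([+1, -1])  # individually completely random...
    bob = -alice                     # ...but always opposite to Alice's
    return alice, bob

print([measure_singlet_pair() for _ in range(8)])
# e.g. [(1, -1), (-1, 1), ...] and never (1, 1) or (-1, -1)
```

Note that this same-axis correlation alone could be faked by the particles simply agreeing on their answers at the moment they were created, which is exactly the kind of explanation the hidden-variable idea below proposes, and which Bell found a way to test.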

This came to be known as the Einstein-Podolsky-Rosen (EPR) paradox, and it seemed to show that something was seriously wrong with quantum mechanics, since the theory required information to be transferred faster than the speed of light. Einstein dubbed this impossible transfer of information “spooky action at a distance”, but today, physicists call it quantum entanglement.

The EPR paradox did not vanquish quantum theory; it just suggested that the theory was incomplete. It raised the possibility that “hidden variables”, undiscovered deterministic laws of physics, were at play, deciding the outcome of each collapse and only appearing probabilistic because we didn’t understand them yet.

The quest to uncover these hidden variables began. In 1964, physicist John Stewart Bell came up with an experiment, now called a Bell test, that would be able to tell whether there were hidden variables at play or whether instantaneous spooky action really is how the universe operates. An explanation of the mathematics and physics behind the Bell test would make for a long post on its own, so for now I will leave it to Veritasium to explain, but in short: if any type of hidden variables are at play, the Bell test will give a different result from that predicted by entanglement.
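For a taste of the mathematics I am glossing over: most modern experiments use the CHSH form of Bell’s inequality (my summary here, not something from the original post). Each experimenter picks one of two measurement angles, a or a′ on one side and b or b′ on the other, and the four measured correlations are combined into a single number:

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
```

Any local hidden-variable theory must satisfy |S| ≤ 2, while quantum entanglement predicts values up to 2√2 ≈ 2.83 for well-chosen angles, so measuring |S| > 2 rules such hidden variables out.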

To perform a Bell test, we generate a pair of entangled particles, separate them some distance apart, and then, as close to simultaneously as possible, measure the entangled property of each. We then repeat this process many thousands of times in order to build up a statistically significant set of data.

The key part of the Bell test is that certain measurement parameters have to be chosen randomly. A random number generator is given control over the settings of the equipment performing the measurement. Bell test experiments have been performed in various forms at laboratories around the world, and in every one of them, entanglement has come up trumps.
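To make the procedure concrete, here is a toy simulation, a sketch of my own rather than any real experiment’s code. It runs the CHSH test described above twice: once with pairs that follow the quantum singlet-state statistics, and once with pairs governed by a simple local hidden-variable model in which each pair fixes its answers at the source.

```python
# Toy CHSH Bell-test simulation (illustrative sketch, not real lab code).
import math
import random

def quantum_pair(a, b):
    """Sample one (Alice, Bob) outcome pair reproducing the singlet-state
    correlation E(a, b) = -cos(a - b) predicted by quantum mechanics."""
    alice = random.choice([+1, -1])
    # The outcomes are opposite with probability (1 + cos(a - b)) / 2.
    if random.random() < (1 + math.cos(a - b)) / 2:
        return alice, -alice
    return alice, alice

def lhv_pair(a, b):
    """Sample one outcome pair from a toy local hidden-variable model:
    each pair carries a hidden direction lam, fixed at the source."""
    lam = random.uniform(0, 2 * math.pi)
    alice = 1 if math.cos(a - lam) >= 0 else -1
    bob = -1 if math.cos(b - lam) >= 0 else 1
    return alice, bob

def chsh(pair_source, trials=200_000):
    """Estimate S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), choosing the
    measurement settings at random for every pair (the 'free choice')."""
    a_settings = (0.0, math.pi / 2)              # Alice's two angles
    b_settings = (math.pi / 4, 3 * math.pi / 4)  # Bob's two angles
    sums = [[0.0, 0.0], [0.0, 0.0]]
    counts = [[0, 0], [0, 0]]
    for _ in range(trials):
        i, j = random.randint(0, 1), random.randint(0, 1)
        alice, bob = pair_source(a_settings[i], b_settings[j])
        sums[i][j] += alice * bob
        counts[i][j] += 1
    E = [[sums[i][j] / counts[i][j] for j in (0, 1)] for i in (0, 1)]
    return E[0][0] - E[0][1] + E[1][0] + E[1][1]

print(f"quantum |S| = {abs(chsh(quantum_pair)):.2f}")  # ~2.83, violates the bound
print(f"LHV     |S| = {abs(chsh(lhv_pair)):.2f}")      # ~2.00, at the classical limit
```

The quantum version lands near |S| ≈ 2.83, violating the classical bound of 2, while the hidden-variable model can do no better than 2. That gap is the pattern real Bell tests keep confirming.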

One of the funny things about Einstein: sometimes, even when he was wrong, he was right. In attempting to find a critical flaw in quantum mechanics, he predicted an “impossible” phenomenon, but one that turned out to be very real.

Physicists are always on the lookout for flaws in their experiments, and one possible flaw in the Bell test is the assumption of “freedom of choice”.

If the hidden variables had some way of knowing, before the experiment was performed, what the sequence of random numbers was going to be, it is possible that they could spoof a result that looks like entanglement.

We don’t really have any way around this problem, but we can push the necessary prior knowledge to some pretty improbable extremes.

Recently, one research group used random fluctuations in the light from quasars, the intensely bright cores of galaxies billions of light years away, to generate the random numbers for their Bell test.

The team’s experiment generated pairs of entangled photons and then sent them to two telescopes one kilometre apart. The telescopes were trained on separate quasars billions of light years away, and random fluctuations in the light from the quasars were used to randomly set the equipment measuring the arriving entangled photons.

Figure 1 from the cosmic Bell test paper, showing the setup of the experiment.

The experiment, once again, produced results consistent with entanglement and ruled out hidden variables. It still assumes that we, or rather the random number generators, have freedom of choice, but because the random numbers were generated by objects billions of light years away, any hidden variables carrying prior knowledge of the random sequence must have acted at least 7.8 billion years ago, before the quasar light used to set the equipment had even been emitted.

This is possibly as close as we will ever come to closing the “freedom of choice loophole” in Bell test experiments. We can’t get our random number generators much more separated than this. Either there are no hidden variables and spooky action at a distance is real, or we live in a highly deterministic universe, with no free will, in which all actions and events were pre-set at the moment of the Big Bang.

Now that’s spooky.

