The past year has provided some of the most interesting developments in quantum mechanics to date. The field is more than 100 years old and has been tested to unimaginable precision, yet some of its most striking statements are still being debated.
This is true even of one of the theory’s oldest results, Heisenberg’s uncertainty principle, which states that certain pairs of properties of a quantum system, such as a particle’s position and momentum, can be simultaneously measured only to a finite precision.
This fundamental limit on the knowledge obtainable about a quantum state is due to the inescapable “back-action”, or disturbance, of the measurement device on the system being studied. Put simply, measurement changes a quantum state.
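For reference, the textbook form of this limit for position and momentum (a standard relation, not specific to any experiment discussed here) reads:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the uncertainties (standard deviations) in position and momentum, and \(\hbar\) is the reduced Planck constant.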
Over the past few years, experimental groups in Vienna and Toronto have published results claiming to exploit a loophole in the standard formulation of the uncertainty principle, thereby violating it.
In response, an article published in Physical Review A this year by a collaboration of European researchers has shown that, with a careful analysis of the experiments, a more general form of the principle remains intact.
These kinds of debates are not simply exercises in interpretational hair-splitting; our understanding of this once-obscure theory is maturing. We can now exploit its counterintuitive properties to invent technologies that are classically impossible, and this progress occurs simultaneously on the foundational and technological levels.
For good measure
Take the concept of weak measurements. These attempt to minimise the effect of a measurement on the quantum state of the system being measured, and thus to circumvent the uncertainty principle. But allowing the measurement device to interact only weakly with the quantum system comes with a trade-off: the measurement now succeeds in telling you something about the system only part of the time.
By keeping only those results deemed to have succeeded, one finds that the statistical average of the quantity being measured can far exceed any possible result of a single measurement. In other words, the signal in the measurement device is amplified.
Despite prior claims to the contrary by Oxford physicists, recent research from the University of Southern California and University of California, Riverside showed that, with the added ingredient of quantum entanglement, weak measurements can be more precise than standard ones for “noisy” systems.
But the quantum credentials of weak values have been called into doubt. This year, researchers at the Center for Quantum Information and Control in New Mexico claimed that the kind of measurement amplification described above is not a uniquely quantum phenomenon.
Similar reasoning can be applied to a classical coin toss, with a twist: in analogy with a quantum weak measurement, the outcome (heads or tails) cannot be determined with a 100% success rate, and the coin itself can be turned over in the process. In this case, the probabilities involved can be contrived so that the statistical expectation value of a single flip of the coin comes out as 100 heads.
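The arithmetic behind this can be made concrete with a toy calculation. Everything below is hypothetical: the parameters are chosen purely so that the post-selected average lands on 100, as a sketch in the spirit of the coin-toss analogy rather than the researchers’ actual construction.

```python
# Toy, exactly-solvable version of the classical coin analogy.
# All parameters are hypothetical, chosen so that the post-selected
# average comes out as ~100 "heads" per toss.

eps = 0.005                      # measurement strength: the readout m is only
                                 # weakly correlated with the coin x, via
                                 # P(m | x) = (1 + eps*m*x) / 2, m, x in {+1, -1}
x0 = +1                          # the coin starts as heads (+1); tails is -1

# Back-action: after reporting m, the coin is turned over with a probability
# that depends on m (the "disturbance" caused by the measurement).
p_flip = {+1: 0.1 * 3 * (1 - eps) / (1 + eps), -1: 0.1}

# Post-select on the coin ending up tails, i.e. on it having been flipped.
num = den = 0.0
for m in (+1, -1):
    p_joint = (1 + eps * m * x0) / 2 * p_flip[m]  # P(readout m AND post-selection)
    num += m * p_joint
    den += p_joint

weak_value = (num / den) / eps   # post-selected average of m, rescaled by the
                                 # measurement strength, as is done for weak values
print(weak_value)                # ~100: "a single coin toss worth 100 heads"
```

The amplification comes entirely from post-selection: the rare flipped-coin outcomes are correlated with the readout, so the conditional average, once rescaled by the tiny measurement strength, vastly exceeds any single outcome (±1).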
Nevertheless, Matthew Pusey of the Perimeter Institute in Waterloo, Canada, subsequently showed that, for quantum systems, these strange results can have no classical explanation, and that weak values are intimately related to a property of quantum mechanics known as contextuality.
Contextuality means that measured properties of quantum states depend intrinsically on how the measurement is made. In addition to enabling the weirdness of weak measurements, contextuality also seems to play a fundamental role in quantum computing.
Scientists at the University of Waterloo showed that contextuality, rather than entanglement or other quantum phenomena, supplies the “magic” for a certain quantum computation. This is an enormous breakthrough: although we have ample evidence that a quantum computer would far outperform a classical one on certain problems, we still do not understand exactly which property of quantum mechanics gives it this advantage.
Along with these theoretical developments, experiments published in 2014 have brought us closer than ever to realising an operational quantum computer – the ultimate dream of thousands of scientists working in quantum physics.
By going beyond the Boolean 0s and 1s of classical computing, a device of sufficient size would allow us to solve problems that are beyond the reach of today’s most powerful supercomputers.
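What “beyond 0s and 1s” means can be illustrated with a minimal sketch of a single qubit. This is standard textbook material, not a model of any experiment mentioned here:

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit is a unit vector of two complex
# amplitudes. A Hadamard gate puts the |0> state into an equal superposition
# of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # the Hadamard gate
zero = np.array([1.0, 0.0])            # the state |0>

state = H @ zero                       # equal superposition of |0> and |1>
probs = np.abs(state) ** 2             # measurement probabilities
print(probs)                           # ~[0.5, 0.5]: both outcomes at once

# Describing n qubits takes 2**n amplitudes -- the exponentially large state
# space that classical computers cannot efficiently track.
n = 30
print(2 ** n)                          # over a billion amplitudes for 30 qubits
```

The exponential growth of the state space in the final lines is the usual intuition for why a sufficiently large quantum device outstrips classical simulation.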
Many different physical systems are candidates for a working quantum computer, each with its own advantages and disadvantages, and this year major advances have been made in several of them.
In a diamond-based realisation – which has the advantage of being easily scalable and working at room temperature – a team from Tsinghua University, Beijing, designed and implemented universal geometric logic gates. These are the basic building blocks from which any fault-tolerant quantum algorithm can be built, and are the main feature distinguishing “mainstream” multi-purpose quantum computing from less general quantum devices.
Optical quantum computers – which encode quantum information in light – also overcame an important hurdle this year when, for the first time, a collaboration between the University of Geneva and Stanford University coaxed a pair of single photons into interacting with each other. Until then, this was one of the missing ingredients in an otherwise well-developed and promising quantum technology.
Overall, 2014 has been an astounding year for the field of quantum physics, and the research mentioned above is a tiny fraction of that published in hundreds of papers over the past 12 months.
Contrary to the popular misconception that science is built on a bedrock of immutable laws, even some of the most basic tenets of quantum mechanics remain the subject of contentious scientific debate.
By pushing the boundaries of the theory, we gain a greater understanding of, and hence control over, the phenomena that underpin our reality.