Scientists solve critical challenge of error correction for building large-scale fault-tolerant quantum computers

‘The development of a “quantum computer” is one of the outstanding technological challenges of the 21st century. A quantum computer is a machine that processes information according to the rules of quantum physics, which govern the behaviour of microscopic particles at the scale of atoms and smaller,’ said Dr Chris Ballance, a research fellow at Magdalen College, Oxford. ‘It turns out that this quantum-mechanical way of manipulating information gives quantum computers the ability to solve certain problems far more efficiently than any conceivable conventional computer.

One such problem is related to breaking secure codes, while another is searching large data sets. Quantum computers are naturally well-suited to simulating other quantum systems, which may help, for example, our understanding of complex molecules relevant to chemistry and biology.’ Quantum computers also have many military applications, such as efficient decoding of cryptographic codes like RSA, AI and pattern-recognition tasks such as discriminating between a missile and a decoy, and bioinformatics tasks such as efficient analysis of new bioengineered threats using Markov Chain Monte Carlo (MCMC) methods.

In order to reach their full potential, today’s quantum computer prototypes have to meet specific criteria: First, they have to be made bigger, meaning they need to consist of a considerably higher number of quantum bits. Second, they have to be capable of handling errors. Quantum systems are naturally fragile: they constantly evolve in uncontrolled ways due to unwanted interactions with the environment, leading to errors in the computation.

One of the main difficulties of quantum computation is that decoherence destroys the information in a superposition of states contained in a quantum computer, thus making long computations impossible. Quantum error correction is used to protect quantum information from errors due to decoherence and other quantum noise.

Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements. Demonstrating error correction that actually works is the biggest remaining challenge for building a quantum computer.

A study led by physicists at Swansea University in Wales, carried out by an international team of researchers and published in the journal Physical Review X shows that ion-trap technologies available today are suitable for building large-scale quantum computers. The scientists introduce trapped-ion quantum error correction protocols that detect and correct processing errors.

Researchers have been developing quantum error correction codes that would correct any errors in quantum data while requiring measurement of only a few quantum bits, or qubits, at a time.

Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. In a new study, they have demonstrated that a type of neural network called a Boltzmann machine can be trained to model the errors in a quantum computing protocol and then devise and implement the best method for correcting the errors.

Quantum Error Correction

Classical error correction employs redundancy, for instance by storing the information multiple times and, if these copies are later found to disagree, taking a majority vote. Copying quantum information is not possible due to the no-cloning theorem, but it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits, even hundreds or thousands of them.
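The classical scheme is simple enough to sketch in a few lines of Python (a toy illustration; the function names are made up):

```python
from collections import Counter

def encode(bit, copies=3):
    """Classical repetition code: store the same bit several times."""
    return [bit] * copies

def majority_vote(copies):
    """Recover the stored bit by taking a majority vote over the copies."""
    return Counter(copies).most_common(1)[0][0]

stored = encode(1)
stored[0] = 0                      # noise corrupts one copy
print(majority_vote(stored))       # -> 1, the stored bit survives
```

A single corrupted copy is outvoted by the two intact ones; it is exactly this copying step that the no-cloning theorem forbids for qubits.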

Peter Shor first discovered this method of formulating a quantum error-correcting code, storing the information of one qubit in a highly entangled state of nine qubits. Quantum error-correction codes use these additional qubits to uncover errors without resorting to copying the value of the original qubit. The basis of quantum error correction is measuring parity. The parity is defined to be “0” if both qubits have the same value and “1” if they have different values. Crucially, it can be determined without actually measuring the values of the individual qubits.
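Treating the qubits as classical stand-in bits for a moment, the parity rule is just an XOR; note that a real device extracts this parity via an ancilla qubit, without collapsing the data qubits:

```python
def parity(q1, q2):
    """0 if the two bits agree, 1 if they differ (XOR)."""
    return q1 ^ q2

# Both agreeing configurations give the same outcome, so the
# measurement reveals agreement without revealing the values:
print(parity(0, 0), parity(1, 1))   # -> 0 0
print(parity(0, 1), parity(1, 0))   # -> 1 1
```

Because (0, 0) and (1, 1) are indistinguishable to the parity check, the check can flag an error without destroying the superposition it is protecting.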

Scientists at Yale University showed that it is possible to track quantum errors in real time. The team used an ancilla, a more stable reporter atom, that detected errors in the system without actually disturbing any qubits. During the experiment, the researchers used a superconducting box containing the reporter atom as well as an unknown number of photons, cooled to about negative 459°F, a fraction of a degree above absolute zero. The ancilla reports only the photon parity – whether the number of photons in the box changed from even to odd or from odd to even – and not exact numbers, according to the researchers.

Trapping ions in a maze

Scientists have developed comparable schemes for quantum computers, where quantum information is encoded in several entangled physical quantum bits. “Here we exploit quantum mechanical properties for error detection and correction,” explains Markus Müller from Swansea University, Wales. “If we can keep the noise below a certain threshold, we will be able to build quantum computers that can perform quantum computations of arbitrary complexity by increasing the number of entangled quantum bits accordingly.”

Markus Müller and his colleague Alejandro Bermudez Carballo explain that in order to achieve this goal, the capabilities of the technological platforms have to be optimally exploited. “For beneficial error correction we need quantum circuits that are stable and work reliably under realistic conditions even if additional errors occur during the error correction,” explains Bermudez. They introduced new variants of fault-tolerant protocols and investigated how these can be implemented with currently available operations on quantum computers.

The researchers found that a new generation of segmented ion traps offers ideal conditions for the process: Ions can be shuttled quickly across different segments of the trap array. Precisely timed processes allow parallel operations in different storage and processing regions. By using two different types of ions in a trap, scientists may use one type as carriers of the data qubits while the other one may be used for error measurement, noise suppression and cooling.

Building on the experimental experience of research groups in Innsbruck, Mainz, Zurich and Sydney, the researchers defined criteria that allow scientists to determine whether quantum error correction is beneficial. Using this information, they can guide the development of future ion-trap quantum computers, with the goal of realizing in the near future a logical quantum bit that, owing to error correction, exceeds the properties of a pure physical quantum bit.

Simon Benjamin’s research group at the University of Oxford showed through complex numerical simulations of the new error correction protocols how the hardware of next generation ion-trap quantum computers has to be built to be able to process information fault-tolerantly. “Our numerical results clearly show that state-of-the-art ion-trap technologies are well suited to serve as platforms for constructing large-scale fault-tolerant quantum computers,” explains Benjamin.

Repetitive error correction

Researchers at the University of California, Santa Barbara (UCSB) and Google have demonstrated repetitive error correction in an integrated quantum device that consists of nine superconducting qubits. “Our nine-qubit system can protect itself from bit errors that unavoidably arise from noise and fluctuations from the environment in which the qubits are embedded,” explains team member Julian Kelly.

The researchers repetitively measured the parity between adjacent “data” qubits by making use of “measurement” qubits. “Each cycle, these measurement qubits interact with their surrounding data qubits using quantum logic gates and we can then measure them,” Kelly explains. “When an error occurs, the parity changes accordingly and the measurement qubit reports a different outcome. By tracking these outcomes, we can figure out when and where a bit error has occurred and correct for it.”

The more qubits that are involved in the process, the more information is available to identify and correct for errors, explains team member Austin Fowler. “Errors can occur at any time and in all types of qubits: data qubits, measurement qubits, during gate operation and even during measurements. We found that a five-qubit device is robust to any type of bit error occurring anywhere during an algorithm, but a nine-qubit device is better because it is robust to any combination of two-bit errors.”
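The tracking logic Kelly describes can be mimicked classically for bit-flip errors. The sketch below (a toy with made-up helper names, not the team’s actual software) uses five data qubits interleaved with four parity checks, matching the nine-qubit linear chain: a flip on an interior data qubit toggles the two checks beside it, while a flip on an end qubit toggles only one.

```python
def syndrome(data):
    """Each 'measurement' position reports the parity of its two data-qubit neighbours."""
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

def locate_flip(old_syn, new_syn):
    """Compare successive syndrome readouts to localize a single bit-flip."""
    changed = [i for i in range(len(old_syn)) if old_syn[i] != new_syn[i]]
    if not changed:
        return None                                 # no error this cycle
    if len(changed) == 2:
        return changed[1]                           # interior qubit between the two checks
    return 0 if changed[0] == 0 else len(old_syn)   # edge qubit

# Five data qubits plus four measurement positions: a nine-qubit chain.
data = [0, 0, 0, 0, 0]
syn = syndrome(data)
data[2] ^= 1                        # a bit-flip error strikes data qubit 2
where = locate_flip(syn, syndrome(data))
data[where] ^= 1                    # undo the located flip
print(where, data)                  # -> 2 [0, 0, 0, 0, 0]
```

In the real device the parities are extracted by quantum gates and measurements on the measurement qubits only; the classical comparison of outcomes across cycles is the part this sketch reproduces.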

Surface Code Architecture

The Google and UCSB team eventually hope to build a 2-D surface code architecture based on a checkerboard arrangement of qubits, in which “white squares” would represent the data qubits that perform operations and “black squares” would represent measurement qubits that can detect errors in neighboring qubits. The measurement qubits are entangled with neighboring data qubits and share information with them through a quantum connection.
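The checkerboard assignment itself is easy to visualize; a hypothetical sketch (the function name and grid size are made up for illustration):

```python
def surface_code_layout(rows, cols):
    """Checkerboard assignment: 'D' marks data qubits, 'M' marks measurement qubits."""
    return [["D" if (r + c) % 2 == 0 else "M" for c in range(cols)]
            for r in range(rows)]

# Each 'M' sits between the 'D' qubits whose parities it repeatedly checks.
for row in surface_code_layout(5, 5):
    print(" ".join(row))
```

Alternating the two roles across the grid is what lets every data qubit be watched by adjacent measurement qubits without any qubit playing both roles.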

The surface code architecture tolerates a lower accuracy of quantum logic operations: 99 percent, instead of the 99.999 percent required by other quantum error-correction schemes. IBM researchers have also done pioneering work in making surface-code error correction work with superconducting qubits. One IBM group demonstrated a smaller three-qubit system capable of running surface code, although that system had a lower accuracy of 94 percent.

Currently, groups are modifying the material properties of their qubits, improving lithography techniques and improving pulse-shaping techniques to make qubit lifetimes longer. This should increase the fidelity of the qubits and make implementing a surface code less resource-intensive.

Researchers prevent quantum errors from occurring by continuously watching a quantum system

A team of scientists led by Tim Taminiau of QuTech, the quantum institute of TU Delft and TNO, has experimentally demonstrated that errors in quantum computations can be suppressed by repeated observations of quantum bits, exploiting the so-called quantum Zeno effect. If an observable of a quantum state is measured, the system is projected into an eigenstate of this observable. For example, if a qubit in a superposition of ‘0’ and ‘1’ is observed, the qubit is projected into either ‘0’ or ‘1’ and will remain frozen in that state under repeated further observations.
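The freezing effect admits a simple textbook calculation: if a qubit is rotated by a total angle theta, but the rotation is interrupted by N projective measurements, the probability of still finding it in its initial state is cos²(theta/2N) raised to the power N, which approaches 1 as N grows. A minimal sketch, assuming an ideal qubit and perfect, instantaneous measurements:

```python
import math

def survival_probability(theta, n_measurements):
    """Probability that the qubit is still found in its initial state after
    n projective measurements interrupt a rotation by total angle theta."""
    step = theta / n_measurements          # rotation accumulated between measurements
    return math.cos(step / 2) ** (2 * n_measurements)

# A rotation that would flip the qubit with certainty (theta = pi)
# is increasingly suppressed as measurements become more frequent:
for n in (1, 10, 100):
    print(n, survival_probability(math.pi, n))
```

With a single measurement the qubit is certainly flipped; with a hundred interleaved measurements it stays put with better than 97 percent probability, which is the Zeno suppression described above.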

Joint observables

While just freezing a quantum state by projecting a single qubit does not allow for computations, new opportunities arise when observing joint properties of multi-qubit systems. The projection of joint properties of qubits can be explained with the following analogy: consider grouping three-dimensional objects based on their two-dimensional projection. Shapes can still transform within a subgroup (for example between a cube and a cylinder), but unwanted changes (for example to a sphere) are suppressed by the constant observations of the 2D projection. Similarly, the projection of joint observables in multi-qubit systems generates quantum subspaces. In this way, unwanted evolution between different subspaces can be blocked, while the complex quantum states within one subspace allow for quantum computations.


The QuTech scientists experimentally generated quantum Zeno subspaces in up to three nuclear spins in diamond. Joint observables on these nuclear spins are projected via a nearby electronic spin, generating protected quantum states in Zeno subspaces. The researchers show an enhancement in the time that quantum information is protected with increasing number of projections and derive a scaling law that is independent of the number of spins. The presented work allows for the investigation of the interplay of frequent observations and various noise environments. Furthermore, the projection of joint observables is the basis of most quantum error correction protocols, which are essential for useful quantum computations.

New quantum error correction protocol corrects virtually all errors in quantum memory while requiring measurement of only a few qubits at a time

The ideal quantum error correction code would correct any errors in quantum data, and it would require measurement of only a few quantum bits, or qubits, at a time. But until now, codes that could make do with limited measurements could correct only a limited number of errors — one roughly equal to the square root of the total number of qubits. So they could correct eight errors in a 64-qubit quantum computer, for instance, but not 10.

In a paper they’re presenting at the Association for Computing Machinery’s Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University present a new code that can correct errors afflicting — almost — a specified fraction of a computer’s qubits, not just the square root of their number. And for reasonably sized quantum computers, that fraction can be arbitrarily large — although the larger it is, the more qubits the computer requires.

A quantum computation is a succession of states of quantum bits. The bits are in some state; then they’re modified, so that they assume another state; then they’re modified again; and so on. The final state represents the result of the computation.

In their paper, Aram Harrow of MIT and his colleagues assign each state of the computation its own bank of qubits; it’s like turning the time dimension of the computation into a spatial dimension. Suppose that the state of qubit 8 at time 5 has implications for the states of both qubit 8 and qubit 11 at time 6. The researchers’ protocol performs an agreement measurement on all three qubits, modifying the state of any qubit that’s out of alignment with the other two.

Since the measurement doesn’t reveal the state of any of the qubits, modification of a misaligned qubit could actually introduce an error where none existed previously. But that’s by design: The purpose of the protocol is to ensure that errors spread through the qubits in a lawful way. That way, measurements made on the final state of the qubits are guaranteed to reveal relationships between qubits without revealing their values. If an error is detected, the protocol can trace it back to its origin and correct it.

It may be possible to implement the researchers’ scheme without actually duplicating banks of qubits. But, Harrow says, some redundancy in the hardware will probably be necessary to make the scheme efficient. How much redundancy remains to be seen: Certainly, if each state of a computation required its own bank of qubits, the computer might become so complex as to offset the advantages of good error correction.

But, Harrow says, “Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it’s been easier to increase the number of logical qubits than to increase the distance — the number of errors you can correct. So we’re hoping that will be the case for ours, too.”

Stephen Bartlett, a physics professor at the University of Sydney who studies quantum computing, doesn’t find the additional qubits required by Harrow and his colleagues’ scheme particularly daunting. “It looks like a lot,” Bartlett says, “but compared with existing structures, it’s a massive reduction. So one of the highlights of this construction is that they actually got that down a lot.”

Machine learning tackles quantum error correction

The physicists, Giacomo Torlai and Roger G. Melko at the University of Waterloo and the Perimeter Institute for Theoretical Physics, have published a paper on the new machine learning algorithm in a recent issue of Physical Review Letters.

“The idea behind neural decoding is to circumvent the process of constructing a decoding algorithm for a specific code realization (given some approximations on the noise), and let a neural network learn how to perform the recovery directly from raw data, obtained by simple measurements on the code,” Torlai said. “With the recent advances in quantum technologies and a wave of quantum devices becoming available in the near term, neural decoders will be able to accommodate the different architectures, as well as different noise sources.”

As the researchers explain, a Boltzmann machine is one of the simplest kinds of stochastic artificial neural networks, and it can be used to analyze a wide variety of data. Neural networks typically extract features and patterns from raw data, which in this case is a data set containing the possible errors that can afflict quantum states.

Once the new algorithm, which the physicists call a neural decoder, is trained on this data, it is able to construct an accurate model of the probability distribution of the errors. With this information, the neural decoder can generate the appropriate error chains that can then be used to recover the correct quantum states.
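On a much smaller scale than the Boltzmann machine used in the study, the idea of learning a decoder directly from raw data can be illustrated with a toy three-qubit bit-flip repetition code: sample error patterns, record which error most often accompanies each observed syndrome, and decode by that learned lookup. All names and the 10 percent error rate here are illustrative assumptions, not the paper’s method:

```python
import random
from collections import Counter, defaultdict

def syndrome(error):
    """Parity checks of a 3-bit repetition code for a bit-flip error pattern."""
    return (error[0] ^ error[1], error[1] ^ error[2])

# "Training": sample noisy error patterns and tally which error pattern
# most often accompanies each observed syndrome.
random.seed(0)
counts = defaultdict(Counter)
for _ in range(5000):
    error = tuple(1 if random.random() < 0.1 else 0 for _ in range(3))
    counts[syndrome(error)][error] += 1

# "Decoding": map each syndrome to its most likely cause, learned from data.
decoder = {s: c.most_common(1)[0][0] for s, c in counts.items()}
print(decoder[(1, 0)])   # most likely cause: a flip on the first qubit
```

Instead of hand-deriving the decoding rule from the code’s structure, the rule is estimated from measurement statistics alone, which is the essence of the neural-decoding approach (the Boltzmann machine replaces this lookup table with a trained generative model).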

The researchers tested the neural decoder on quantum topological codes that are commonly used in quantum computing, and demonstrated that the algorithm is relatively simple to implement. Another advantage of the new algorithm is that it does not depend on the specific geometry, structure, or dimension of the data, which allows it to be generalized to a wide variety of problems.

In the future, the physicists plan to explore different ways to improve the algorithm’s performance, such as by stacking multiple Boltzmann machines on top of one another to build a network with a deeper structure. The researchers also plan to apply the neural decoder to more complex, realistic codes.

“So far, neural decoders have been tested on simple codes typically used for benchmarks,” Torlai said. “A first direction would be to perform error correction on codes for which an efficient decoder is yet to be found, for instance Low Density Parity Check codes. On the long term I believe neural decoding will play an important role when dealing with larger quantum systems (hundreds of qubits). The ability to compress high-dimensional objects into low-dimensional representations, from which stems the success of machine learning, will allow to faithfully capture the complex distribution relating the errors arising in the system with the measurements outcomes.”

