
Error Correction in Quantum Computers Using Machine Learning and Neural Networks Is Proving a Successful Strategy

“The development of a ‘quantum computer’ is one of the outstanding technological challenges of the 21st century. A quantum computer is a machine that processes information according to the rules of quantum physics, which govern the behaviour of microscopic particles at the scale of atoms and smaller,” said Dr Chris Ballance, a research fellow at Magdalen College, Oxford. It turns out that this quantum-mechanical way of manipulating information gives quantum computers the ability to solve certain problems far more efficiently than any conceivable conventional computer. To reach their full potential, today’s quantum computer prototypes have to meet two criteria: first, they have to be made bigger, meaning they need to consist of a considerably higher number of quantum bits; second, they have to be capable of handling errors.

 

Unlike the binary bits of information in ordinary computers, “qubits” have the property of superposition: a quantum particle has some probability of being in each of two states, designated |0⟩ and |1⟩, at the same time. One of the main difficulties of quantum computation is that decoherence destroys the information contained in such a superposition, making long computations impossible. Quantum systems are naturally fragile: they constantly evolve in uncontrolled ways due to unwanted interactions with the environment, leading to errors in the computation.
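These two ideas can be sketched in a few lines of Python (the helper names are illustrative, not from any quantum library): a qubit state is a pair of amplitudes whose squared magnitudes give the measurement probabilities, and under pure dephasing the coherence between them decays exponentially with a characteristic time T2.

```python
import math

# Illustrative sketch: a qubit state as amplitudes (a, b) for |0> and |1>,
# normalized so that |a|^2 + |b|^2 = 1.
def make_superposition(theta):
    """Return amplitudes for cos(theta/2)|0> + sin(theta/2)|1>."""
    return (math.cos(theta / 2), math.sin(theta / 2))

def probabilities(state):
    """Measurement probabilities for outcomes |0> and |1>."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

def coherence_after(t, t2):
    """Under pure dephasing, the off-diagonal coherence decays as exp(-t/T2)."""
    return math.exp(-t / t2)

# An equal superposition: each measurement outcome occurs with probability 1/2.
state = make_superposition(math.pi / 2)
p0, p1 = probabilities(state)
```

The decay of `coherence_after` is what makes long computations impossible without error correction: the superposition's information leaks away on the T2 timescale.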

 

Qubits can interact with anything in close proximity that carries energy close to their own—stray photons (i.e., unwanted electromagnetic fields), phonons (mechanical oscillations of the quantum device), or quantum defects (irregularities in the substrate of the chip formed during manufacturing)—which can unpredictably change the state of the qubits themselves.

 

Further complicating matters, the tools used to control qubits pose numerous challenges of their own. Qubits are manipulated and read out via classical controls: analog signals in the form of electromagnetic fields coupled to a physical substrate in which the qubit is embedded, e.g., superconducting circuits. Imperfections in these control electronics (giving rise to white noise), interference from external sources of radiation, and fluctuations in digital-to-analog converters introduce even more stochastic errors that degrade the performance of quantum circuits. These practical issues impact the fidelity of the computation and thus limit the applications of near-term quantum devices.
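The impact of such control noise can be illustrated with a toy Monte Carlo estimate (the noise model and numbers here are assumptions, not from the text): for a single-qubit rotation whose angle picks up additive Gaussian noise from the electronics, an angle error delta about the same axis gives a gate fidelity of cos²(delta/2), so noisier controls mean lower average fidelity.

```python
import math
import random

random.seed(0)

# Assumed toy model: an X-rotation whose angle carries additive Gaussian
# noise from the control electronics. For an angle error delta about the
# same rotation axis, the gate fidelity is cos^2(delta / 2).
def average_gate_fidelity(noise_std, trials=10000):
    total = 0.0
    for _ in range(trials):
        delta = random.gauss(0.0, noise_std)  # stochastic control error
        total += math.cos(delta / 2) ** 2
    return total / trials

perfect = average_gate_fidelity(0.0)  # noiseless control
noisy = average_gate_fidelity(0.3)    # white-noise-limited control
```

Even modest noise in the control signal pulls the average fidelity below one on every gate, and these errors compound over a deep circuit.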

 

Artificial neural networks are now being used to develop quantum error correction systems, since they can autonomously find strategies to protect quantum information from decoherence. Researchers at the Max Planck Institute for the Science of Light (MPL) took this approach, using deep neural networks to develop an error-correction system for quantum decoherence. The MPL team used a neural-network architecture made of two thousand artificial neurons, inspired by AlphaGo.

 

“Artificial neural networks with an AlphaGo-inspired architecture are capable of learning – for themselves – how to perform a task that will be essential for the operation of future quantum computers: quantum error correction. There is even the prospect that, with sufficient training, this approach will outstrip other error-correction strategies.”

 

“The solution comes in the form of an additional neural network that acts as a teacher to the first network. With its prior knowledge of the quantum computer that is to be controlled, this teacher network is able to train the other network – its student – and thus to guide its attempts towards successful quantum error correction.” The researchers also developed a reward system to incentivize both the student and the teacher neural networks to find the best quantum error correction strategy without human assistance.
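The teacher–student reward loop can be caricatured in a few lines (everything here – action names, rewards, and update rule – is a hypothetical stand-in for the MPL networks, not their actual method): the teacher, which models the device, rewards the correction that preserves the encoded state, and the student learns action preferences from those rewards alone.

```python
import random

random.seed(1)

# Hypothetical stand-in: the teacher rewards the correction its device model
# endorses; the student learns purely from the reward signal.
ACTIONS = ["apply_X", "apply_Z", "do_nothing"]
CORRECT = "apply_Z"  # assume the teacher's model endorses this correction

def teacher_reward(action):
    return 1.0 if action == CORRECT else 0.0

prefs = {a: 0.0 for a in ACTIONS}  # student's learned action preferences
for _ in range(500):
    if random.random() < 0.1:              # explore occasionally
        action = random.choice(ACTIONS)
    else:                                  # otherwise exploit best-so-far
        action = max(prefs, key=prefs.get)
    r = teacher_reward(action)
    prefs[action] += 0.1 * (r - prefs[action])  # incremental reward estimate

# The student ends up preferring the teacher-endorsed correction.
```

The point of the sketch is the division of labour: the teacher holds the prior knowledge of the device, while the student only ever sees scalar rewards, yet still converges on the right strategy.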

 

Google researchers demonstrate a reduction in quantum logic gate errors through machine learning

In “Universal Quantum Control through Deep Reinforcement Learning”, published in Nature Partner Journals (npj) Quantum Information, we present a new quantum control framework generated using deep reinforcement learning, where various practical concerns in quantum control optimization can be encapsulated by a single control cost function. Our framework provides a reduction in the average quantum logic gate error of up to two orders of magnitude over standard stochastic gradient descent solutions and a significant decrease in gate time from optimal gate synthesis counterparts. Our results open avenues for wider applications in quantum simulation, quantum chemistry and quantum supremacy tests using near-term quantum devices.

 

The novelty of this new quantum control paradigm hinges upon the development of a quantum control function and an efficient optimization method based on deep reinforcement learning. To develop a comprehensive cost function, we first need to develop a physical model for the realistic quantum control process, one where we are able to reliably predict the amount of error. One of the most detrimental errors to the accuracy of quantum computation is leakage: the amount of quantum information lost during the computation. Such information leakage usually occurs when the quantum state of a qubit gets excited to a higher energy state, or decays to a lower energy state through spontaneous emission. Leakage errors not only lose useful quantum information, they also degrade the “quantumness” and eventually reduce the performance of a quantum computer to that of a classical one.

 

A common practice to accurately evaluate the leaked information during the quantum computation is to simulate the whole computation first. However, this defeats the purpose of building large-scale quantum computers, since their advantage is that they are able to perform calculations infeasible for classical systems. With improved physical modeling, our generic cost function enables a joint optimization over the accumulated leakage errors, violations of control boundary conditions, total gate time, and gate fidelity.
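The joint optimization described above can be sketched as a single scalar cost (the weights and term names here are illustrative assumptions, not the paper's actual cost function): a learner minimizes one number that simultaneously penalizes leakage, boundary-condition violations, and long gate times while rewarding high fidelity.

```python
# Illustrative combined control cost (weights are assumptions): a single
# scalar that a learner can minimize jointly over all four concerns.
def control_cost(leakage, boundary_violation, gate_time, fidelity,
                 w_leak=1.0, w_bound=1.0, w_time=0.1):
    """Penalize accumulated leakage, boundary-condition violations, and
    total gate time; reward fidelity via the infidelity term (1 - F)."""
    return (w_leak * leakage
            + w_bound * boundary_violation
            + w_time * gate_time
            + (1.0 - fidelity))

# A perfect, instantaneous, leakage-free gate has zero cost; any leakage
# or extra gate time strictly increases it.
```

Collapsing everything into one differentiable scalar is what lets a single optimizer trade the four concerns off against each other instead of tuning them separately.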

 

With the new quantum control cost function in hand, the next step is to apply an efficient optimization tool to minimize it. Existing optimization methods turn out to be unsatisfactory in finding high-fidelity solutions that are also robust to control fluctuations. Instead, we apply an on-policy deep reinforcement learning (RL) method, trusted-region RL, since this method exhibits good performance in all benchmark problems, is inherently robust to sample noise, and has the capability to optimize hard control problems with hundreds of millions of control parameters. The salient difference between this on-policy RL and previously studied off-policy RL methods is that the control policy is represented independently from the control cost. Off-policy RL, such as Q-learning, instead uses a single neural network (NN) to represent both the control trajectory and the associated reward, where the control trajectory specifies the control signals to be coupled to qubits at different time steps, and the associated reward evaluates how good the current step of the quantum control is.

 

On-policy RL is well known for its ability to leverage non-local features in control trajectories, which becomes crucial when the control landscape is high-dimensional and packed with a combinatorially large number of non-global solutions, as is often the case for quantum systems. We encode the control trajectory into a three-layer, fully connected NN—the policy NN—and the control cost function into a second NN—the value NN—which encodes the discounted future reward. Robust control solutions were obtained by a reinforcement learning agent that trains both NNs under a stochastic environment mimicking realistic noisy control actuation. We provide control solutions for a set of continuously parameterized two-qubit quantum gates that are important for quantum chemistry applications but are costly to implement using the conventional universal gate set.
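The separation between a policy network and a value network can be illustrated with a drastically reduced on-policy sketch (all numbers and the reward function are assumptions, not from the paper): the "policy NN" shrinks to softmax logits over a few discrete control amplitudes, the "value NN" to a scalar baseline estimating expected reward, and a REINFORCE-style gradient step updates the policy from on-policy samples.

```python
import math
import random

random.seed(0)

# Toy stand-in: softmax logits play the role of the policy NN, a scalar
# baseline plays the role of the value NN. Reward is a hypothetical gate
# fidelity peaking at the correct control amplitude.
AMPS = [0.0, 0.5, 1.0, 1.5]
TARGET = 1.0  # assume this amplitude implements the desired gate

def reward(amp):
    return math.cos(amp - TARGET) ** 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.0] * len(AMPS)  # "policy network" parameters
baseline = 0.0              # "value network": running reward estimate

for _ in range(2000):
    probs = softmax(logits)
    i = random.choices(range(len(AMPS)), weights=probs)[0]  # on-policy sample
    advantage = reward(AMPS[i]) - baseline  # baseline reduces gradient variance
    for j in range(len(AMPS)):              # REINFORCE policy-gradient step
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += 0.1 * advantage * grad
    baseline += 0.05 * (reward(AMPS[i]) - baseline)

best = AMPS[max(range(len(AMPS)), key=lambda j: logits[j])]
```

The design point the sketch captures is the one the paper stresses: the policy is represented and updated independently of the value estimate, in contrast to Q-learning, where a single network carries both roles.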

 

Under this new framework, our numerical simulations show a 100x reduction in quantum gate errors and shorten gate times for a family of continuously parameterized simulation gates by an average of one order of magnitude over traditional approaches using a universal gate set. This work highlights the importance of using novel machine learning techniques and near-term quantum algorithms that leverage the flexibility and additional computational capacity of a universal quantum control scheme. More experiments are needed to integrate machine learning techniques, such as the one developed in this work, into practical quantum computation procedures to fully realize their computational capacity.

Neural Network Improves Quantum Tomography

Scientists at Skolkovo Institute of Science and Technology (Skoltech) have applied machine learning to the challenges of reconstructing quantum states. Their findings show that machine learning can reconstruct quantum states from experimental data even in the presence of noise and detection errors. Members of Skoltech’s Deep Quantum Laboratory collaborated with the quantum optics research laboratories at Moscow State University (MSU) on the research.

 

To prepare and measure high-dimensional quantum states, the MSU team generated data with an experimental platform based on spatial states of photons. The Skoltech team implemented a deep neural network to analyze the noisy experimental data; the network learned to perform denoising efficiently, significantly improving the quality of quantum state reconstruction.

 

To implement their method experimentally, the researchers trained a supervised neural network to filter the experimental data. The neural network uncovered patterns that characterized the measurement probabilities for the original state and the ideal experimental apparatus, free from state-preparation-and-measurement (SPAM) errors.
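What the network learns can be illustrated in a drastically simplified linear form (the confusion-matrix numbers below are invented): when the readout error is a known linear map, undoing SPAM errors reduces to inverting a confusion matrix; the deep network in the study learns such a correction directly from data, without assuming any such model.

```python
# Invented 2x2 readout confusion matrix A[i][j] = P(measure i | true j):
# a 2% chance of reading |0> as |1> and a 5% chance of reading |1> as |0>.
A = [[0.98, 0.05],
     [0.02, 0.95]]

def apply_spam(p_true):
    """Forward model: SPAM errors mix the true outcome probabilities."""
    return [A[0][0] * p_true[0] + A[0][1] * p_true[1],
            A[1][0] * p_true[0] + A[1][1] * p_true[1]]

def correct_spam(p_meas):
    """Undo the known linear readout error by inverting the 2x2 matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * p_meas[0] - A[0][1] * p_meas[1]) / det,
            (A[0][0] * p_meas[1] - A[1][0] * p_meas[0]) / det]

true_p = [0.7, 0.3]
measured = apply_spam(true_p)    # what the noisy apparatus reports
recovered = correct_spam(measured)
```

The advantage of the learned approach is precisely that it does not require knowing `A`: the network infers the ideal-apparatus probabilities from patterns in the data alone.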

 

The researchers compared the neural network state reconstruction protocol with a protocol treating SPAM errors by process tomography and also with a SPAM-agnostic protocol with idealized measurements. The average reconstruction fidelity was shown to be enhanced by 10% and 27%, respectively. The researchers believe that these results show that the use of a neural network architecture on experimental data could provide a reliable tool for quantum-state-and-detector tomography. The researchers’ approach could apply to the wide range of quantum experiments that rely on tomography.

 

Quantum tomography is currently used for testing the implementation of quantum information processing devices. Various procedures for state and process reconstruction from measured data have been developed using a model describing state-preparation-and-measurement (SPAM) apparatus. However, physical models can suffer from intrinsic limitations, as actual measurement operators and trial states cannot be known precisely. This can lead to SPAM errors, degrading reconstruction performance. The researchers’ framework, based on machine learning, can be applied to both the tomography and the mitigation of SPAM errors. Over the last several years, the researchers have applied a wide range of techniques to reconstructing a quantum state and, surprisingly, have found that deep learning outperformed other methods in experiments.
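For pure states, the reconstruction fidelity being compared above is the standard overlap F = |⟨ψ|φ⟩|², which is straightforward to compute from amplitude vectors (a minimal sketch; the state names are illustrative):

```python
import math

# Fidelity between pure states given as amplitude lists: F = |<psi|phi>|^2.
def fidelity(psi, phi):
    inner = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return abs(inner) ** 2

s = 1 / math.sqrt(2)
plus = [s, s]        # (|0> + |1>)/sqrt(2)
zero = [1.0, 0.0]    # |0>
```

A perfect reconstruction gives F = 1; the 10% and 27% improvements reported above are average gains in this quantity over the comparison protocols.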

Machine learning tackles quantum error correction

The physicists, Giacomo Torlai and Roger G. Melko at the University of Waterloo and the Perimeter Institute for Theoretical Physics, have published a paper on the new machine learning algorithm in a recent issue of Physical Review Letters.

 

“The idea behind neural decoding is to circumvent the process of constructing a decoding algorithm for a specific code realization (given some approximations on the noise), and let a neural network learn how to perform the recovery directly from raw data, obtained by simple measurements on the code,” Torlai told Phys.org. “With the recent advances in quantum technologies and a wave of quantum devices becoming available in the near term, neural decoders will be able to accommodate the different architectures, as well as different noise sources.”

 

As the researchers explain, a Boltzmann machine is one of the simplest kinds of stochastic artificial neural networks, and it can be used to analyze a wide variety of data. Neural networks typically extract features and patterns from raw data, which in this case is a data set containing the possible errors that can afflict quantum states.

 

Once the new algorithm, which the physicists call a neural decoder, is trained on this data, it is able to construct an accurate model of the probability distribution of the errors. With this information, the neural decoder can generate the appropriate error chains that can then be used to recover the correct quantum states. The researchers tested the neural decoder on quantum topological codes that are commonly used in quantum computing, and demonstrated that the algorithm is relatively simple to implement. Another advantage of the new algorithm is that it does not depend on the specific geometry, structure, or dimension of the data, which allows it to be generalized to a wide variety of problems.
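The decoder's job can be illustrated on the 3-qubit bit-flip repetition code with a simple table-based stand-in for the Boltzmann machine (the code, noise rate, and training scheme here are illustrative): estimate P(error | syndrome) from sampled error data, then decode by returning the most probable error chain for each syndrome, which mirrors the "learn the error distribution, then recover" structure of the neural decoder.

```python
import random

random.seed(2)

# Independent bit-flip probability per qubit (assumed for this toy example).
P_FLIP = 0.1

def syndrome(err):
    """Parity checks Z1Z2 and Z2Z3: detect where neighbouring bits disagree."""
    return (err[0] ^ err[1], err[1] ^ err[2])

# "Training": sample bit-flip errors and count them per observed syndrome.
counts = {}
for _ in range(20000):
    err = tuple(1 if random.random() < P_FLIP else 0 for _ in range(3))
    syn = syndrome(err)
    table = counts.setdefault(syn, {})
    table[err] = table.get(err, 0) + 1

def decode(syn):
    """Return the most likely error chain for this syndrome."""
    table = counts[syn]
    return max(table, key=table.get)
```

At this code distance a lookup table suffices, but the table grows exponentially with system size; the appeal of a generative model like a Boltzmann machine is that it compresses the same error distribution into a fixed set of network parameters.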

 

In the future, the physicists plan to explore different ways to improve the algorithm’s performance, such as by stacking multiple Boltzmann machines on top of one another to build a network with a deeper structure. The researchers also plan to apply the neural decoder to more complex, realistic codes.

 

“So far, neural decoders have been tested on simple codes typically used for benchmarks,” Torlai said. “A first direction would be to perform error correction on codes for which an efficient decoder is yet to be found, for instance Low Density Parity Check codes. On the long term I believe neural decoding will play an important role when dealing with larger quantum systems (hundreds of qubits). The ability to compress high-dimensional objects into low-dimensional representations, from which stems the success of machine learning, will allow to faithfully capture the complex distribution relating the errors arising in the system with the measurements outcomes.”

About Rajesh Uppal
