
Scientists attack error correction, the critical challenge for building large scale fault tolerant quantum computers

The development of a quantum computer is one of the outstanding technological challenges of the 21st century. Quantum computers gain their power from the special rules that govern qubits. Unlike classical bits, which have a value of either 0 or 1, qubits can take on an intermediate state called a superposition, meaning they hold a value of 0 and 1 at the same time. Additionally, two qubits can be entangled, with their values linked as if they were one entity, despite sitting on opposite ends of a computer chip.

 

These unusual properties give quantum computers their game-changing method of calculation. Different possible solutions to a problem can be considered simultaneously, with the wrong answers canceling one another out and the right one being amplified. That allows the computer to quickly converge on the correct solution without needing to check each possibility individually. It turns out that this quantum-mechanical way of manipulating information lets quantum computers solve certain problems far more efficiently than any conceivable conventional computer. One such problem is breaking secure codes; another is searching large data sets. Quantum computers also have many military applications, such as efficiently breaking cryptographic codes like RSA, AI and pattern-recognition tasks like discriminating between a missile and a decoy, and bioinformatics tasks like efficiently analyzing a new bioengineered threat using Markov Chain Monte Carlo (MCMC) methods.

 

In order to reach their full potential, today’s quantum computer prototypes have to meet specific criteria: first, they have to be made bigger, meaning they must consist of a considerably higher number of quantum bits; second, they have to be capable of handling errors.

 

Quantum systems made from quantum bits, or qubits, are inherently fragile: they constantly evolve in uncontrolled ways due to unwanted interactions with the environment, leading to errors in the computation. Qubits are made from sensitive substances such as individual atoms, electrons trapped within tiny chunks of silicon called quantum dots, or small bits of superconducting material, which conducts electricity without resistance. Errors can creep in as qubits interact with their environment, including electromagnetic fields, heat, or stray atoms and molecules.

 

Unlike the binary bits of information in ordinary computers, qubits have the property of superposition: a quantum particle has some probability of being in each of two states, designated |0⟩ and |1⟩, at the same time. One of the main difficulties of quantum computation is that decoherence destroys the information in a superposition of states contained in a quantum computer, making long computations impossible. If a single atom that represents a qubit gets jostled, the information the qubit was storing is lost. Additionally, each step of a calculation has a significant chance of introducing error. As a result, for complex calculations, “the output will be garbage,” says quantum physicist Barbara Terhal of the research center QuTech in Delft, Netherlands.
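
To make this concrete, here is a minimal Python/NumPy sketch (purely illustrative, not tied to any particular hardware) of a qubit state, a bit-flip error, and an entangled pair:

```python
import numpy as np

# A qubit |psi> = a|0> + b|1> is a complex 2-vector with |a|^2 + |b|^2 = 1.
psi = np.array([0.6, 0.8])          # 36% chance of reading 0, 64% of reading 1

X = np.array([[0, 1], [1, 0]])      # a bit-flip error
print(np.abs(X @ psi) ** 2)         # [0.64 0.36]: the probabilities swapped

# Two entangled qubits: the Bell state (|00> + |11>)/sqrt(2),
# written in the basis |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.abs(bell) ** 2)            # only 00 or 11 ever occurs: the pair acts as one
```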

 

Just as classical error correction protects radio communications, scientists are working on quantum error correction, which can protect quantum information from errors due to decoherence and other quantum noise.

Quantum Error Correction

Researchers have been devising a variety of methods for error correction. The idea behind many of these schemes is to combine multiple error-prone qubits to form one more reliable qubit. They are inspired by classical error correction, which employs redundancy, for instance by storing the information multiple times and, if the copies are later found to disagree, taking a majority vote. In contrast to classical bits, however, quantum information cannot be copied, due to the no-cloning theorem, and it is not possible to get an exact diagnosis of qubit errors without destroying the stored quantum information.
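
The classical half of that idea is easy to demonstrate. The following Python sketch (illustrative only) encodes one bit as three copies and recovers it by majority vote; the quantum analogue cannot literally copy the state, which is exactly the obstacle described above:

```python
import random

def encode(bit, n=3):
    return [bit] * n                       # store n redundant copies

def noisy_channel(bits, p=0.1):
    return [b ^ (random.random() < p) for b in bits]   # each copy may flip

def decode(bits):
    return int(sum(bits) > len(bits) / 2)  # majority vote

random.seed(0)
sent = 1
received = decode(noisy_channel(encode(sent)))
print(sent, received)   # with p = 0.1, the vote almost always recovers the bit
```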

 

Therefore these schemes must detect and correct errors without directly measuring the qubits, since measurement collapses a qubit’s coexisting possibilities into definite realities: plain old 0s or 1s that can’t sustain quantum computations. So schemes for quantum error correction apply work-arounds. Rather than making outright measurements of qubits to check for errors, scientists perform indirect measurements, which “measure what error occurred, but leave the actual information [that] you want to maintain untouched and unmeasured.” For example, scientists can check whether the values of two qubits agree with one another without measuring those values.

 

And rather than directly copying qubits, error-correction schemes store data redundantly, with information spread over multiple entangled qubits, collectively known as a logical qubit. When individual qubits are combined in this way, the collective becomes more powerful than the sum of its parts. These logical qubits become the error-resistant qubits of the final computer. If your program requires 10 qubits to run, it needs 10 logical qubits, which could require a quantum computer with hundreds or even hundreds of thousands of the original, error-prone physical qubits. To run a really complex quantum computation, millions of physical qubits may be necessary.

 

Error correction therefore has to rely on partial information, known as the syndrome, and use it to infer the best way to correct errors. Because the information is incomplete, this is a very challenging problem, requiring sophisticated algorithms known as error decoders.
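
For the simplest quantum code, the three-qubit bit-flip code, the decoder reduces to a lookup table from syndromes to corrections. Here is a hedged Python sketch (classical bookkeeping only, assuming at most one flip; real decoders such as minimum-weight matching handle far larger codes):

```python
# Syndrome = (parity of qubits 0 and 1, parity of qubits 1 and 2).
# The parities never reveal the stored values themselves.
DECODER = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip qubit 0 back
    (1, 1): 1,      # flip qubit 1 back
    (0, 1): 2,      # flip qubit 2 back
}

def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    target = DECODER[syndrome(bits)]
    if target is not None:
        bits[target] ^= 1
    return bits

print(correct([0, 1, 0]))   # middle qubit flipped -> restored to [0, 0, 0]
```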

 

However, it is possible to spread the information of one qubit onto a highly entangled state of several physical qubits, even hundreds or thousands of them. A way to deal with these errors and make computations fault-tolerant (FT) is Quantum Error Correction (QEC), in which multiple physical qubits are encoded into logical qubits and errors are extracted and recognized by measuring ancilla qubits. However, this is easier said than done. First, the individual qubits must already be highly reliable before they can be interconnected: if they have an error rate of more than about one percent, combining them into a logical qubit is counterproductive, and the error rate increases instead of falling. In addition, the qubits must be connected in a very small space.
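
The break-even logic can be sketched numerically. The toy calculation below assumes an idealized distance-3 code with perfect syndrome extraction, which is why its break-even point (50%) is far more forgiving than the roughly one percent quoted above for realistic, noisy hardware; the point is only the shape of the comparison:

```python
from math import comb

def logical_error_rate(p, n=3):
    # A distance-3 repetition code fails when 2 or more of its n qubits flip.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2, n + 1))

for p in (0.6, 0.1, 0.01):
    pl = logical_error_rate(p)
    print(f"physical {p:.2f} -> logical {pl:.6f} "
          f"({'helps' if pl < p else 'hurts'})")
# Above break-even, encoding makes things worse; below it, the logical
# error rate falls quadratically with the physical rate.
```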

 

In 1995, Peter Shor provided proof that “quantum error-correcting codes” exist, formulating the first such code by storing the information of one qubit in a highly entangled state of nine qubits. Quantum error-correcting codes take advantage of these extra qubits to uncover errors without ever copying the value of the original qubit.
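
The structure of Shor’s nine-qubit code is compact enough to write down directly. This NumPy sketch builds the standard logical basis states, each a highly entangled state of nine physical qubits:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(vectors):
    out = np.array([1.0])
    for v in vectors:
        out = np.kron(out, v)
    return out

# Each block of three qubits holds (|000> +/- |111>)/sqrt(2);
# three such blocks make one logical qubit on nine physical qubits.
plus_block  = (kron_all([ket0] * 3) + kron_all([ket1] * 3)) / np.sqrt(2)
minus_block = (kron_all([ket0] * 3) - kron_all([ket1] * 3)) / np.sqrt(2)

zero_L = kron_all([plus_block] * 3)    # |0_L>
one_L  = kron_all([minus_block] * 3)   # |1_L>

print(zero_L.shape)             # (512,): 2**9 amplitudes
print(np.dot(zero_L, one_L))    # 0.0: the logical basis states are orthogonal
```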

 

The basis of quantum error correction is measuring parity. The parity of two qubits is defined to be “0” if both have the same value and “1” if they have different values. Crucially, it can be determined without actually measuring the values of the qubits themselves. The computer scientists Dorit Aharonov and Michael Ben-Or (and other researchers working independently) proved a year after Shor’s discovery that these codes could theoretically push error rates close to zero.
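
The following NumPy sketch shows why a parity measurement is safe: projecting a two-qubit state onto the even-parity subspace leaves a superposition of |00⟩ and |11⟩ completely intact:

```python
import numpy as np

# Projectors onto the even- and odd-parity subspaces of two qubits,
# in the basis |00>, |01>, |10>, |11>.
P_even = np.diag([1.0, 0.0, 0.0, 1.0])
P_odd  = np.diag([0.0, 1.0, 1.0, 0.0])

alpha, beta = 0.6, 0.8
psi = np.array([alpha, 0.0, 0.0, beta])   # alpha|00> + beta|11>

p_even = np.linalg.norm(P_even @ psi) ** 2
print(p_even)                    # 1.0: the outcome "even" is certain

post = P_even @ psi / np.sqrt(p_even)
print(np.allclose(post, psi))    # True: the superposition survives
# The measurement says only "the qubits agree"; it never asks whether
# they agree on 0 or on 1, so alpha and beta are untouched.
```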

 

Scientists at Yale University showed that it is possible to track quantum errors in real time. The team used an ancilla, a more stable reporter atom, that detected errors in the system without actually disturbing any qubits. During the experiment, the researchers used a superconducting box containing the reporter atom and an unknown number of photons, cooled to about negative 459°F, a fraction of a degree above absolute zero. The ancilla reports only the photon parity, that is, whether the number of photons in the box changed from even to odd or vice versa, not the exact count, according to the researchers.

 

Yale scientists, with partial funding from the US Army, set out to figure out whether there was any way to build a sort of early-warning system for quantum jumps. Not only did they succeed, but they even managed to reverse the jumps and stop unwanted outcomes. According to their research paper, published in Nature, this turns a century of quantum mechanics research on its head: “The experimental results demonstrate that the evolution of each completed jump is continuous, coherent and deterministic. We exploit these features, using real-time monitoring and feedback, to catch and reverse quantum jumps mid-flight—thus deterministically preventing their completion.” The Yale team concludes: “Our findings … should provide new ground for the exploration of real-time intervention techniques in the control of quantum systems, such as the early detection of error syndromes in quantum error correction.”

 

Researchers have been developing quantum error correction codes that would correct any errors in quantum data while requiring measurement of only a few quantum bits, or qubits, at a time. A study led by physicists at Swansea University in Wales, carried out by an international team of researchers and published in the journal Physical Review X, shows that ion-trap technologies available today are suitable for building large-scale quantum computers. The scientists introduce trapped-ion quantum error correction protocols that detect and correct processing errors.

 

An undergraduate student at the University of Sydney, Pablo Bonilla, has made a breakthrough in quantum computing error correction that is drawing international attention. The error-correction process, described in an article co-written by Bonilla and published in Nature in April 2021, has been incorporated by Amazon Web Services (AWS) as it develops quantum computing capabilities.

 

Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.

 

Surface Code Architecture

One of the most promising QEC schemes is the surface code, though other small QEC codes are becoming increasingly popular for the short term. The surface code is ideal for superconducting quantum computers, like the ones being built by companies including Google and IBM. The code is designed for qubits arranged in a 2-D grid in which each qubit is directly connected to neighboring qubits, which, conveniently, is the way superconducting quantum computers are typically laid out.

 

The surface code requires that different qubits have different jobs. Some are data qubits, which store information; others are helper qubits, called ancillas. Measurements of the ancillas allow for checking and correcting errors without destroying the information stored in the data qubits. The data and ancilla qubits together make up one logical qubit with, hopefully, a lower error rate. The more data and ancilla qubits that make up each logical qubit, the more errors can be detected and corrected.

 

The Google and UCSB team eventually hopes to build a 2-D surface code architecture based on a checkerboard arrangement of qubits, in which “white squares” represent the data qubits that perform operations and “black squares” represent measurement qubits that detect errors in neighboring qubits. The measurement qubits are entangled with neighboring data qubits and share information through that quantum connection.
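
A toy sketch of that checkerboard (labels only, no quantum mechanics) makes the layout concrete:

```python
# Checkerboard assignment of data (D) and measurement (M) qubits on a grid,
# a simplified picture of the surface-code layout described above.
def layout(rows, cols):
    for r in range(rows):
        print(" ".join("D" if (r + c) % 2 == 0 else "M" for c in range(cols)))

layout(5, 5)
# D M D M D
# M D M D M
# ...
# Each M qubit is entangled with its D neighbors and read out every cycle.
```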

 

In 2015, Google researchers and colleagues implemented a simplified version of the surface code, using nine qubits arranged in a line. That setup, reported in Nature, could correct a type of error called a bit-flip error, akin to a 0 going to a 1. A second type of error, the phase flip, is unique to quantum computers: it effectively inserts a negative sign into the mathematical expression describing the qubit’s state.
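
A quick NumPy check shows why the phase flip is invisible to a direct readout: it flips |+⟩ to |−⟩ while leaving the 0/1 measurement probabilities untouched:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)    # |+> = (|0> + |1>)/sqrt(2)
Z = np.array([[1, 0], [0, -1]])         # phase-flip error

minus = Z @ plus                        # |+> becomes |-> = (|0> - |1>)/sqrt(2)
print(minus)                            # [ 0.707... -0.707...]

# Both states give 50/50 outcomes in the 0/1 basis, so the sign flip
# cannot be seen by measuring the qubit directly.
print(np.abs(plus) ** 2, np.abs(minus) ** 2)
```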

 

Now, researchers are tackling both types of errors simultaneously. Andreas Wallraff, a physicist at ETH Zurich, and colleagues showed that they could detect bit- and phase-flip errors using a seven-qubit computer. They could not yet correct those errors, but they could pinpoint cases where errors occurred and would have ruined a calculation, the team reported in a paper published in Nature Physics. That’s an intermediate step toward fixing such errors.

 

The surface code architecture tolerates lower accuracy in quantum logic operations, 99 percent instead of the 99.999 percent demanded by other quantum error-correction schemes. IBM researchers have also done pioneering work in making surface-code error correction work with superconducting qubits. One IBM group demonstrated a smaller three-qubit system capable of running the surface code, although that system had a lower accuracy of 94 percent.

 

Quantum fault-tolerance theorem puts a limit on the surface code

But to move forward, researchers need to scale up. The minimum number of qubits needed to run the real surface code is 17. With that, a small improvement in the error rate could be achieved, at least in theory. In practice, it will probably take 49 qubits before there is any clear boost to the logical qubit’s performance. That level of error correction should noticeably extend the time before errors overtake the qubit. With the largest quantum computers now reaching 50 or more physical qubits, quantum error correction is almost within reach.
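
Those figures, 17 and 49, match the usual qubit count for a rotated surface code of odd distance d, namely d² data qubits plus d² − 1 ancillas (a standard counting, shown here as a quick check):

```python
def surface_code_qubits(d):
    data = d * d             # data qubits in a d x d grid
    ancilla = d * d - 1      # parity-check (ancilla) qubits
    return data + ancilla    # 2*d**2 - 1 in total

for d in (3, 5, 7):
    print(d, surface_code_qubits(d))   # 3 -> 17, 5 -> 49, 7 -> 97
```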

 

In quantum computing, the (quantum) threshold theorem (or quantum fault-tolerance theorem), proved by Michael Ben-Or and Dorit Aharonov (along with other groups), states that a quantum computer with a physical error rate below a certain threshold can, through application of quantum error correction schemes, suppress the logical error rate to arbitrarily low levels.

 

Current estimates put the threshold for the surface code on the order of 1%, though estimates range widely and are difficult to calculate due to the exponential difficulty of simulating large quantum systems. At a 0.1% probability of a depolarizing error, the surface code would require approximately 1,000 to 10,000 physical qubits per logical data qubit, though more pathological error types could change this figure drastically.
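
A rough overhead estimate can be scripted from the commonly used heuristic that the logical error rate falls as p_L ≈ A(p/p_th)^((d+1)/2). The constants below (A = 0.1, p_th = 1%) are illustrative assumptions, not measured values; with them, the answer lands near the low end of the range quoted above:

```python
def logical_rate(p, d, p_th=0.01, A=0.1):
    # Heuristic scaling of the surface-code logical error rate with distance d.
    return A * (p / p_th) ** ((d + 1) / 2)

def physical_qubits(d):
    return 2 * d * d - 1               # rotated surface code

def required_distance(p, target, p_th=0.01, A=0.1):
    d = 3
    while logical_rate(p, d, p_th, A) > target:
        d += 2                         # code distance is odd
    return d

p = 0.001                              # physical error rate of 0.1%
d = required_distance(p, 1e-12)
print(d, physical_qubits(d))           # 21 881: roughly a thousand physical
                                       # qubits per logical qubit
```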

 

The error-correction method scientists choose must not introduce more errors than it corrects, and it must correct errors faster than they pop up. According to the threshold theorem, proved in the 1990s, below a certain error rate error correction becomes helpful: it won’t introduce more errors than it corrects. That discovery bolstered the prospects for quantum computers. As leading quantum information theorist Scott Aaronson puts it: “The entire content of the Threshold Theorem is that you’re correcting errors faster than they’re created. That’s the whole point, and the whole non-trivial thing that the theorem shows. That’s the problem it solves.”

 

“The fact that one can actually hope to get below this threshold is one of the main reasons why people started to think that these computers could be realistic,” says Aharonov, one of several researchers who developed the threshold theorem.

 

IBM is also working to build a better qubit. In addition to the errors that accrue while calculating, mistakes can occur when preparing the qubits, or reading out the results, says physicist Antonio Córcoles of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. He and colleagues demonstrated that they could detect errors made when preparing the qubits, the process of setting their initial values, the team reported in 2017 in Physical Review Letters. Córcoles looks forward to a qubit that can recover from all these sorts of errors. “Even if it’s only a single logical qubit — that will be a major breakthrough,” Córcoles says.

 

Currently, groups are modifying the material properties of their qubits, improving lithography techniques and improving pulse-shaping techniques to make qubit lifetimes longer. This should increase the fidelity of the qubits and make implementing a surface code less resource-intensive.

 

Repetitive error correction

Researchers at the University of California, Santa Barbara (UCSB) and Google have demonstrated repetitive error correction in an integrated quantum device that consists of nine superconducting qubits. “Our nine-qubit system can protect itself from bit errors that unavoidably arise from noise and fluctuations from the environment in which the qubits are embedded,” explains team member Julian Kelly.

 

The researchers repetitively measured the parity between adjacent “data” qubits by making use of “measurement” qubits. “Each cycle, these measurement qubits interact with their surrounding data qubits using quantum logic gates and we can then measure them,” Kelly explains. “When an error occurs, the parity changes accordingly and the measurement qubit reports a different outcome. By tracking these outcomes, we can figure out when and where a bit error has occurred and correct for it.”
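
A classical caricature of this repetitive parity tracking, using the three-qubit bit-flip code instead of Google’s nine qubits, shows how tracked syndromes turn many physical flips into few logical failures (illustrative Python, not the team’s actual protocol):

```python
import random

random.seed(2)
DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def run(cycles=1000, p=0.01):
    data, failures = [0, 0, 0], 0
    for _ in range(cycles):
        for i in range(3):                           # noise hits each data qubit
            if random.random() < p:
                data[i] ^= 1
        s = (data[0] ^ data[1], data[1] ^ data[2])   # measurement-qubit parities
        fix = DECODER[s]
        if fix is not None:
            data[fix] ^= 1                           # apply the inferred correction
        if data != [0, 0, 0]:                        # decoder was fooled: logical error
            failures += 1
            data = [0, 0, 0]                         # reset for the next round
    return failures

print(run())   # logical failures are rare compared with the ~30 physical
               # flips expected over 1000 cycles
```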

 

The more qubits that are involved in the process, the more information is available to identify and correct for errors, explains team member Austin Fowler. “Errors can occur at any time and in all types of qubits: data qubits, measurement qubits, during gate operation and even during measurements. We found that a five-qubit device is robust to any type of bit error occurring anywhere during an algorithm, but a nine-qubit device is better because it is robust to any combination of two-bit errors.”

 

Trapping ions in a maze

Scientists have developed comparable schemes for quantum computers, where quantum information is encoded in several entangled physical quantum bits. “Here we exploit quantum mechanical properties for error detection and correction,” explains Markus Müller from Swansea University, Wales. “If we can keep the noise below a certain threshold, we will be able to build quantum computers that can perform quantum computations of arbitrary complexity by increasing the number of entangled quantum bits accordingly.”

 

Markus Müller and his colleague Alejandro Bermudez Carballo explain that in order to achieve this goal, the capabilities of the technological platforms have to be optimally exploited. “For beneficial error correction we need quantum circuits that are stable and work reliably under realistic conditions even if additional errors occur during the error correction,” explains Bermudez. They introduced new variants of fault-tolerant protocols and investigated how these can be implemented with currently available operations on quantum computers.

 

The researchers found that a new generation of segmented ion traps offers ideal conditions for the process: Ions can be shuttled quickly across different segments of the trap array. Precisely timed processes allow parallel operations in different storage and processing regions. By using two different types of ions in a trap, scientists may use one type as carriers of the data qubits while the other one may be used for error measurement, noise suppression and cooling.

 

Building on the experimental experience of research groups in Innsbruck, Mainz, Zurich, and Sydney, the researchers defined criteria that will allow scientists to determine whether quantum error correction is beneficial. Using this information, they can guide the development of future ion-trap quantum computers, with the goal of realizing in the near future a logical quantum bit that, owing to error correction, exceeds the properties of a purely physical quantum bit.

 

Simon Benjamin’s research group at the University of Oxford showed through complex numerical simulations of the new error correction protocols how the hardware of next generation ion-trap quantum computers has to be built to be able to process information fault-tolerantly. “Our numerical results clearly show that state-of-the-art ion-trap technologies are well suited to serve as platforms for constructing large-scale fault-tolerant quantum computers,” explains Benjamin.

AWS to use Syd Uni undergrad’s error-correction technique, reported in April 2021

Surface codes are an architecture of quantum computational design in which a set of adjacent qubits is used to ensure that the main stack of logical quantum gates does not decohere or fall into error. The result is a system in which much of the information being processed is dedicated to error correction rather than to the computational task.

 

Bonilla and his tenured co-authors have created a more scalable surface code – the “XZZX” surface code – that will be able to counteract quantum decoherence as more qubits and gates are added to the system. Professor Stephen Bartlett, one of Bonilla’s co-authors, said he expects the new error-correction model to be widely used in future quantum computing experiments. “What’s great about this design is that we can effectively retrofit it to the surface codes being developed across the industry,” he said. “We are optimistic that this work will help the industry build better experimental devices.”

 

Senior quantum research scientist at AWS, Dr Earl Campbell, said he was “surprised” by the new method of error correction. “I was amazed that such a slight change to a quantum error correction code could lead to such a big impact in predicted performance,” he said.

 

Researchers prevent quantum errors from occurring by continuously watching a quantum system

A team of scientists led by Tim Taminiau of QuTech, the quantum institute of TU Delft and TNO, has experimentally demonstrated that errors in quantum computations can be suppressed by repeated observations of quantum bits, exploiting the so-called quantum Zeno effect. If an observable of a quantum state is measured, the system is projected into an eigenstate of that observable. For example, if a qubit in a superposition of ‘0’ and ‘1’ is observed, the qubit is projected into either ‘0’ or ‘1’ and will remain frozen in that state under repeated further observations.
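
The freezing effect can be seen with arithmetic alone. If a rotation that would carry |0⟩ to |1⟩ is interrupted by n evenly spaced measurements, the state survives each step in |0⟩ with probability cos²(θ/n), and the overall survival probability approaches one as n grows (a textbook Zeno calculation, sketched in Python):

```python
import numpy as np

theta = np.pi / 2     # total rotation that would take |0> all the way to |1>

for n in (1, 2, 10, 100):
    # After each of n small rotation steps, a measurement projects the
    # state back onto |0> with probability cos^2(theta/n).
    survival = np.cos(theta / n) ** (2 * n)
    print(n, round(survival, 4))
# 1 -> 0.0, 2 -> 0.25, 10 -> ~0.78, 100 -> ~0.98:
# the more frequent the observation, the more frozen the state.
```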

Joint observables

While just freezing a quantum state by projecting a single qubit does not allow for computations, new opportunities arise when observing joint properties of multi-qubit systems. The projection of joint properties of qubits can be explained with the following analogy: consider grouping three-dimensional objects based on their two-dimensional projection. Shapes can still transform within a subgroup (for example between a cube and a cylinder), but unwanted changes (for example to a sphere) are suppressed by the constant observations of the 2D projection. Similarly, the projection of joint observables in multi-qubit systems generates quantum subspaces. In this way, unwanted evolution between different subspaces can be blocked, while the complex quantum states within one subspace allow for quantum computations.

Diamond

The QuTech scientists experimentally generated quantum Zeno subspaces in up to three nuclear spins in diamond. Joint observables on these nuclear spins are projected via a nearby electronic spin, generating protected quantum states in Zeno subspaces. The researchers show an enhancement in the time that quantum information is protected with an increasing number of projections, and derive a scaling law that is independent of the number of spins. The presented work allows for the investigation of the interplay between frequent observations and various noise environments. Furthermore, the projection of joint observables is the basis of most quantum error correction protocols, which are essential for useful quantum computations.

New quantum error correction protocol corrects virtually all errors in quantum memory while measuring only a few qubits at a time

The ideal quantum error correction code would correct any errors in quantum data, and it would require measurement of only a few quantum bits, or qubits, at a time. But until now, codes that could make do with limited measurements could correct only a limited number of errors — one roughly equal to the square root of the total number of qubits. So they could correct eight errors in a 64-qubit quantum computer, for instance, but not 10.

 

In a paper they’re presenting at the Association for Computing Machinery’s Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University present a new code that can correct errors afflicting — almost — a specified fraction of a computer’s qubits, not just the square root of their number. And for reasonably sized quantum computers, that fraction can be arbitrarily large — although the larger it is, the more qubits the computer requires.

 

Quantum computation is a succession of states of quantum bits. The bits are in some state; then they’re modified, so that they assume another state; then they’re modified again; and so on. The final state represents the result of the computation. In their paper, Aram Harrow of MIT and his colleagues assign each state of the computation its own bank of qubits; it’s like turning the time dimension of the computation into a spatial dimension. Suppose that the state of qubit 8 at time 5 has implications for the states of both qubit 8 and qubit 11 at time 6. The researchers’ protocol performs one of those agreement measurements on all three qubits, modifying the state of any qubit that’s out of alignment with the other two.

 

Since the measurement doesn’t reveal the state of any of the qubits, modification of a misaligned qubit could actually introduce an error where none existed previously. But that’s by design: The purpose of the protocol is to ensure that errors spread through the qubits in a lawful way. That way, measurements made on the final state of the qubits are guaranteed to reveal relationships between qubits without revealing their values. If an error is detected, the protocol can trace it back to its origin and correct it.

 

It may be possible to implement the researchers’ scheme without actually duplicating banks of qubits. But, Harrow says, some redundancy in the hardware will probably be necessary to make the scheme efficient. How much redundancy remains to be seen: Certainly, if each state of a computation required its own bank of qubits, the computer might become so complex as to offset the advantages of good error correction.

 

But, Harrow says, “Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it’s been easier to increase the number of logical qubits than to increase the distance — the number of errors you can correct. So we’re hoping that will be the case for ours, too.”

 

Stephen Bartlett, a physics professor at the University of Sydney who studies quantum computing, doesn’t find the additional qubits required by Harrow and his colleagues’ scheme particularly daunting. “It looks like a lot,” Bartlett says, “but compared with existing structures, it’s a massive reduction. So one of the highlights of this construction is that they actually got that down a lot.”

Machine learning tackles quantum error correction

Recently, researchers have been using machine learning to assist quantum error correction and to design noise-tolerant quantum computing protocols. In a new study, they have demonstrated that a type of neural network called a Boltzmann machine can be trained to model the errors in a quantum computing protocol and then devise and implement the best method for correcting them.

 

Swedish researchers have developed an error decoder based on artificial intelligence. “We use deep reinforcement learning, which is the same framework that has recently achieved super-human performance in playing computer and board games. By exploration, experience is gathered and used to train an artificial neural network that can suggest the best error correction to perform for any given syndrome. Our results show that it is possible for a self-trained agent without supervision or support algorithms to find a decoding scheme that performs on par with hand-made algorithms, opening up for future machine engineered decoders for more general types of noise and error correcting codes,” write Philip Andreasson and colleagues from the Department of Physics, University of Gothenburg.
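
The core idea, learning a decoder from gathered experience rather than hand-coding it, can be caricatured in a few lines. The sketch below replaces the paper’s deep reinforcement learning with simple frequency counting on the three-qubit code, so it is a stand-in for the concept, not the authors’ method:

```python
import random
from collections import Counter, defaultdict

random.seed(3)
counts = defaultdict(Counter)

# Gather experience: sample single-bit-flip errors and record, for each
# observed syndrome, which error location produced it.
for _ in range(10000):
    error = [0, 0, 0]
    if random.random() < 0.3:                 # sometimes an error occurs
        error[random.randrange(3)] = 1
    syndrome = (error[0] ^ error[1], error[1] ^ error[2])
    fix = error.index(1) if 1 in error else None
    counts[syndrome][fix] += 1

# The "trained" decoder is the most frequent correction for each syndrome.
decoder = {s: c.most_common(1)[0][0] for s, c in counts.items()}
print(decoder)   # recovers the hand-made lookup table from raw data alone
```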

 

Q-CTRL, a startup out of Australia, is building software to help reduce noise and errors in quantum computing machines. It is designing firmware for computers and other machines (such as quantum sensors) that perform quantum calculations: firmware that identifies the potential for errors, making the machines more resistant and able to keep working longer. “Q-CTRL impressed us with their strategy; by providing infrastructure software to improve quantum computers for R&D teams and end-users, they’re able to be a central player in bringing this technology to reality,” said Tushar Roy, a partner at Square Peg. “Their technology also has applications beyond quantum computing, including in quantum-based sensing, which is a rapidly growing market.”

 

Demonstrating error correction that actually works is the biggest remaining challenge in building a quantum computer, and it is critical to solve if quantum computing is ever to make the leap out of the lab and into wider use in the real world.


The physicists, Giacomo Torlai and Roger G. Melko at the University of Waterloo and the Perimeter Institute for Theoretical Physics, have published a paper on the new machine learning algorithm in a recent issue of Physical Review Letters.

 

“The idea behind neural decoding is to circumvent the process of constructing a decoding algorithm for a specific code realization (given some approximations on the noise), and let a neural network learn how to perform the recovery directly from raw data, obtained by simple measurements on the code,” Torlai told Phys.org. “With the recent advances in quantum technologies and a wave of quantum devices becoming available in the near term, neural decoders will be able to accommodate the different architectures, as well as different noise sources.”

 

As the researchers explain, a Boltzmann machine is one of the simplest kinds of stochastic artificial neural networks, and it can be used to analyze a wide variety of data. Neural networks typically extract features and patterns from raw data, which in this case is a data set containing the possible errors that can afflict quantum states.

 

Once the new algorithm, which the physicists call a neural decoder, is trained on this data, it is able to construct an accurate model of the probability distribution of the errors. With this information, the neural decoder can generate the appropriate error chains that can then be used to recover the correct quantum states. The researchers tested the neural decoder on quantum topological codes that are commonly used in quantum computing, and demonstrated that the algorithm is relatively simple to implement. Another advantage of the new algorithm is that it does not depend on the specific geometry, structure, or dimension of the data, which allows it to be generalized to a wide variety of problems.

 

In the future, the physicists plan to explore different ways to improve the algorithm’s performance, such as by stacking multiple Boltzmann machines on top of one another to build a network with a deeper structure. The researchers also plan to apply the neural decoder to more complex, realistic codes.

 

“So far, neural decoders have been tested on simple codes typically used for benchmarks,” Torlai said. “A first direction would be to perform error correction on codes for which an efficient decoder is yet to be found, for instance Low Density Parity Check codes. On the long term I believe neural decoding will play an important role when dealing with larger quantum systems (hundreds of qubits). The ability to compress high-dimensional objects into low-dimensional representations, from which stems the success of machine learning, will allow to faithfully capture the complex distribution relating the errors arising in the system with the measurements outcomes.”

 

Technical University Of Denmark: Optical Chip Protects Quantum Technology From Errors, reported in Sep 2021

Researchers from DTU Fotonik have co-created the largest and most complex photonic quantum information processor to date – on a microchip. It uses single particles of light as its quantum bits, and demonstrates a variety of error-correction protocols with photonic quantum bits for the first time.

 

“We made a new optical microchip that processes quantum information in such a way that it can protect itself from errors using entanglement. We used a novel design to implement error correction schemes, and verified that they work effectively on our photonic platform,” says Jeremy Adcock, postdoc at DTU Fotonik and co-author of the Nature Physics paper.

 

“Error correction is key to developing large-scale quantum computers,” says Jeremy Adcock, postdoc at DTU Fotonik.

“Chip-scale devices are an important step forward if quantum technology is going to be scaled up to show an advantage over classical computers. These systems will require millions of high-performance components operating at the fastest possible speeds, something that is only achieved with microchips and integrated circuits, which are made possible by the ultra-advanced semiconductor manufacturing industry,” says co-author Yunhong Ding, senior researcher at DTU Fotonik.

Realizing quantum technology that goes beyond today’s powerful computers requires scaling this technology further. In particular, the photon sources (sources of single particles of light) on this chip are not efficient enough to build quantum technology at a useful scale.

 

“At DTU, we are now working on increasing the efficiency of these sources, which currently have an efficiency of just 1 percent, to near-unity. With such a source, it should be possible to build quantum photonic devices of vastly increased scale, and reap the benefits of quantum technology’s native physical advantage over classical computers in processing, communicating, and acquiring information,” says DTU Fotonik postdoc Jeremy Adcock. He continues: “With more efficient photon sources, we will be able to build more and different resource states, which will enable larger and more complex computations, as well as unlimited-range secure quantum communications.”
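
The scaling problem Adcock describes follows from simple probability: with independent sources of efficiency η, an n-photon experiment succeeds only about η^n of the time. A back-of-envelope sketch, assuming independent sources and ignoring detector and coupling losses:

```python
def coincidence_rate(efficiency, n_photons):
    # Probability that all n photons from independent sources arrive at once.
    return efficiency ** n_photons

for eta in (0.01, 0.5, 0.99):
    print(eta, coincidence_rate(eta, 8))
# At 1% efficiency an 8-photon experiment succeeds ~1e-16 of the time;
# near-unity sources are what make large photonic processors feasible.
```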

 


References and Resources also include:

https://www.sciencenews.org/article/quantum-computers-hype-supremacy-error-correction-problems

https://indiaeducationdiary.in/technical-university-of-denmark-optical-chip-protects-quantum-technology-from-errors/

 
