
DARPA Establishes Quantum Computing Application Metrics and Benchmarks to Guide the Progress of Quantum Computers

Quantum computers promise to bring the power of massive parallel processing, the equivalent of a supercomputer, to a single chip. They can consider many possible solutions to a problem simultaneously and quickly converge on the correct one without checking each possibility individually. This dramatically speeds up certain calculations, such as number factoring.

 

A digital quantum computer (also called a gate-level quantum computer) is a universal, programmable machine that can execute any quantum algorithm, giving it numerous applications.

 

The power of a quantum computer depends on the number of qubits and on their quality, measured by coherence and gate fidelity. Qubits are very fragile and can be disrupted by tiny changes in temperature or very slight vibrations. Coherence measures the time during which quantum information is preserved, while gate fidelity quantifies how noisy a quantum gate is by measuring its distance from the ideal gate.
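To make these two quality measures concrete, here is a minimal sketch in Python. The formulas are the standard ones for a coherent (purely unitary) error and simple exponential dephasing; the example gate, the 0.01-radian over-rotation, and the T2 value are assumptions chosen purely for illustration.

import numpy as np

def average_gate_fidelity(u_ideal: np.ndarray, u_actual: np.ndarray) -> float:
    # Average gate fidelity for a coherent (unitary) error:
    # F_avg = (d + |Tr(U_ideal^dag U_actual)|^2) / (d * (d + 1)), with d the dimension.
    d = u_ideal.shape[0]
    overlap = np.abs(np.trace(u_ideal.conj().T @ u_actual)) ** 2
    return float((d + overlap) / (d * (d + 1)))

def coherence_remaining(t_us: float, t2_us: float) -> float:
    # Fraction of phase coherence left after t_us microseconds, assuming a simple
    # exponential dephasing model with coherence time T2.
    return float(np.exp(-t_us / t2_us))

# Example: an X gate implemented with a small over-rotation error (assumed value).
theta = np.pi + 0.01
x_ideal = np.array([[0, 1], [1, 0]], dtype=complex)
x_actual = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]], dtype=complex)

print(average_gate_fidelity(x_ideal, x_actual))   # slightly below 1.0
print(coherence_remaining(50, 100))               # ~0.61 of coherence left after 50 us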

 

Although universal fault-tolerant quantum computers – with millions of physical quantum bits (qubits) – may be a decade or two away, quantum computing research continues apace. It has been hypothesized that quantum computers will one day revolutionize information processing across a host of military and civilian applications, from pharmaceutical discovery to advanced batteries, machine learning, and cryptography.

 

Many different groups have claimed to have achieved “quantum supremacy” – the ability to repeatably perform a computation that is unrealistic for classical systems to replicate. In addition, multiple commercial companies have published roadmaps showing that they will create universal, fault-tolerant quantum computers in the next decade. The extent to which these roadmaps, if realized, will represent significant and important new computational capabilities is not currently understood.

 

A key missing element in the race toward fault-tolerant quantum systems, however, is a set of meaningful metrics to quantify how useful or transformative large quantum computers will actually be once they exist.

 

Beyond quantum supremacy, the next major benchmark, called quantum advantage, is on the distant horizon. Quantum advantage will exist when programmable, noisy intermediate-scale quantum (NISQ) gate-based or circuit-based computers reach a degree of technical maturity that allows them to solve many, though not necessarily all, significant real-world problems that classical computers cannot solve, or that classical machines would require an exponential amount of time to solve.

 

“Quantum Benchmarking is focused on the fundamental question: How will we know whether building a really big fault-tolerant quantum computer will revolutionize an industry?” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “Companies and government researchers are poised to make large quantum computing investments in the coming decades, but we don’t want to sprint ahead to build something and then try to figure out afterward if it will be useful for anything.”

 

Benchmarks for conventional computers are standardized methods that test and evaluate hardware, software, and systems for computing. The results from these tests are expressed using metrics that measure features and behaviors of the system such as speed and accuracy. With the advent of quantum computers, new benchmarks are needed to address these same metrics while also accounting for differences in the underlying technologies and computational models.
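As a trivial classical illustration of that pairing of a standardized workload with speed and accuracy metrics (the workload, matrix size, and reference comparison below are arbitrary choices made for this sketch):

import time
import numpy as np

def run_benchmark(n: int = 512) -> dict:
    # Standardized workload: a dense single-precision matrix multiply.
    rng = np.random.default_rng(42)
    a = rng.normal(size=(n, n)).astype(np.float32)
    b = rng.normal(size=(n, n)).astype(np.float32)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    # Accuracy metric: relative error against a higher-precision reference result.
    reference = a.astype(np.float64) @ b.astype(np.float64)
    rel_error = float(np.linalg.norm(c - reference) / np.linalg.norm(reference))

    return {"seconds": elapsed, "relative_error": rel_error}

print(run_benchmark())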

 

Presently, quantum computers are evaluated largely by their number of qubits, ignoring many other important factors that affect computational ability. Qubits decohere, either due to noise or because of their inherent properties. For those reasons, building quantum computers capable of solving deeper, more complex problems is not just a simple matter of increasing the number of qubits. IBM researchers have therefore proposed a full-system performance measurement called Quantum Volume.

 

Quantum Volume’s numerical value indicates the relative complexity of a problem that can be solved by the quantum computer. The number of qubits and the number of operations that can be performed are called the width and depth of a quantum circuit; the deeper the circuit, the more complex the algorithm the computer can run. Circuit depth is influenced by factors such as the number of qubits, how the qubits are interconnected, gate and measurement errors, device cross-talk, circuit compiler efficiency, and more. Quantum Volume analyzes the collective performance and efficiency of these factors and produces a single, easy-to-understand number: the larger the number, the more powerful the quantum computer.
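As a rough, simplified sketch of the idea (not IBM's full statistical protocol), Quantum Volume can be read as 2^n, where n is the largest width at which the machine still passes a heavy-output sampling test on random square circuits of width and depth n; the results dictionary below contains assumed, illustrative numbers rather than real device data.

def quantum_volume(heavy_output_prob: dict, threshold: float = 2 / 3) -> int:
    # heavy_output_prob maps circuit width n (with depth == n) to the measured
    # probability of sampling "heavy" outputs on random model circuits.
    # QV = 2**n for the largest n that clears the threshold, counting up from the
    # smallest width and stopping at the first failure.
    qv = 1
    for n in sorted(heavy_output_prob):
        if heavy_output_prob[n] > threshold:
            qv = 2 ** n
        else:
            break
    return qv

# Hypothetical results: the device passes square circuits up to width 5.
results = {2: 0.84, 3: 0.79, 4: 0.73, 5: 0.69, 6: 0.61}
print(quantum_volume(results))   # -> 32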

 

However, there are no application-based metrics and benchmarks that measure performance on real-world applications and workloads. We have almost no idea what near-future quantum processors might actually be useful for. It is impossible to define good application-based benchmarks today, which makes them a pressing and urgent topic for speculative research and exploration.

 

To provide standards against which to measure quantum computing progress and to drive current research toward specific goals, DARPA announced its Quantum Benchmarking program. Its aim is to reinvent key quantum computing metrics, make those metrics testable, and estimate the quantum and classical resources needed to reach critical performance thresholds.

 

Coming up with effective metrics for large quantum computers is no simple task. Current quantum computing research is heavily siloed in companies and institutions, which often keep their work confidential. Without commonly agreed-on standards to quantify the utility of a quantum “breakthrough,” it’s hard to know the value quantum research dollars are achieving. Quantum Benchmarking aims to predict the utility of quantum computers by attempting to solve three hard problems:

 

The first is reinventing key metrics. Quantum computer experts are not experts in the systems quantum computers will replace, so new communities will need to be built to calculate the gap between the current state of the art and what quantum is capable of. Hundreds of applications will need to be distilled into 10 or fewer benchmarks, and metrics will need to have multi-dimensional scope.

 

The second challenge is to make metrics testable by creating “wind tunnels” for quantum computers, which currently don’t exist. Researchers will need to enable robust diagnostics at all scales, in order to benchmark computations that are classically intractable.

 

The third challenge is to estimate the quantum and classical resources required for a given task. Researchers will need to optimize and quantify high-level resources, a step analogous to the front-end compiler of a classical computer. They will need to map high-level algorithms to low-level hardware, akin to the back-end compiler of a classical computer. Finally, they will need to optimize and quantify low-level resources, which correspond to the transistors, gates, logic, control, and memory of classical computers.

 

DARPA is currently pursuing early wins in quantum computers by developing hybrid classical/intermediate-size “noisy” quantum systems that could leapfrog purely classical supercomputers in solving certain types of military-relevant problems. Quantum Benchmarking builds on this strong quantum foundation to create standards that will help direct future investments.

 

Quantum Benchmarking program

It has been credibly hypothesized that quantum computers will revolutionize multiple scientific and technical fields within the next few decades; examples include machine learning, quantum chemistry, materials discovery, molecular simulation, many-body physics, classification, nonlinear dynamics, supply chain optimization, drug discovery, battery catalysis, genomic analysis, fluid dynamics, and protein structure prediction.

 

For many of these examples, like quantum chemistry and protein structure prediction, quantum computers are hypothesized to be useful simulators because the target problem is inherently quantum mechanical. Other examples, like classification and nonlinear dynamics, center around problems that have nothing to do with quantum systems, but involve combinatorial complexity that is intractable for conventional computers.

 

For each of the fields listed above, it is unclear exactly what size, quality, and configuration of a quantum computer – if any – will enable the hypothesized revolutionary advances. This lack of clarity may be the result of one or more of the following factors:

  • Where only the technical field has been identified, the specific application instances that would be solved by a hypothetical quantum computer – at specific scales, with specifically identified values for key parameters, and with clearly identified impact if successful – have not been posed.
  • Where application instances have been posed, the new core computational capability that would enable success is not understood. This often contributes to a lack of understanding about the gap between existing classical, state-of-the-art solutions and hypothesized quantum solutions.
  • The appropriate metrics and testing procedures for quantifying progress towards critical new quantum computing capabilities are not known. This is especially problematic for problems where the testing procedures themselves may be classically intractable.
  • Where benchmarks for quantum utility have been proposed, they are often distilled down to a single parameter that gives limited insight into the ability of a system under test to succeed at specific application instances. In almost all cases, it is not known how to measure hardware progress towards a specific application at a specific scale, especially using robust multi-dimensional metrics suitable for driving research and development into special-purpose hardware.
  • The full-system-hardware resources required to solve particular problems at specific scales have not been estimated. This is particularly true where large, fault-tolerant quantum computers are expected to be required. When quantum hardware resources have been estimated, only the exponential scaling term(s) have been quantified and not the constant and polynomial scaling terms. The ancillary classical resources and low-level hardware configurations (e.g., connectivity requirements) that are required are either unaddressed or cursorily addressed.

 

The Quantum Benchmarking program will create new benchmarks that quantitatively measure progress towards specific, transformational computational challenges. In parallel, the program will estimate the hardware-specific resources required to achieve different levels of benchmark performance. The benchmarks will be hardware agnostic. This is essential for benchmarks that are focused on measuring utility since a novel classical solution to an urgent problem is just as valuable as a novel quantum solution to an urgent problem. However, work in this program will focus on creating hardware-agnostic benchmarks for problems where quantum approaches are most likely to be needed.

 

Compiling a list of application instances.

The Quantum Benchmarking program will compile a list of specific application instances from across as many application domains as possible. These application instances are the answers to the question: “If you had a large, perfect quantum computer today, what would you ask it to do?” Each application instance will be a specific problem at a specific scale. For example, one application instance could be estimating the ground state energy of a particular molecule in a particular configuration.
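A minimal sketch of how such an application instance might be recorded as a data structure follows; the field names and example values are purely illustrative assumptions, not the program's actual format.

from dataclasses import dataclass, field

@dataclass
class ApplicationInstance:
    # A specific problem at a specific scale: what a large, perfect quantum
    # computer would be asked to do, and why the answer would matter.
    domain: str                  # e.g., "quantum chemistry"
    problem: str                 # the concrete task
    scale: dict = field(default_factory=dict)   # size, accuracy, and other key parameters
    impact_if_solved: str = ""

# Hypothetical example mirroring the ground-state-energy case mentioned above.
example_instance = ApplicationInstance(
    domain="quantum chemistry",
    problem="estimate the ground state energy of molecule X in configuration Y",
    scale={"spin_orbitals": 76, "target_accuracy_hartree": 1e-3},
    impact_if_solved="informs catalyst design",
)
print(example_instance)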

Grouping application instances.

The Quantum Benchmarking program will group application instances according to their core enabling computational capabilities. Because the primary goal of the program is to estimate the long-term utility of quantum computers, it is crucial to uncover the core enabling computational capabilities for the application instances that have been compiled. After grouping instances, performers will determine the key metrics that can be used to quantify these core enabling computational capabilities, e.g., the precision or accuracy of a specific class of matrix operation.
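As one hedged illustration of such a key metric (the choice of a linear-system solve, the residual measure, and the random data are assumptions made for this sketch), the accuracy of a matrix operation can be quantified as a relative residual:

import numpy as np

def relative_residual(a: np.ndarray, x: np.ndarray, b: np.ndarray) -> float:
    # Accuracy metric for a linear-system solve A x = b: ||A x - b|| / ||b||.
    return float(np.linalg.norm(a @ x - b) / np.linalg.norm(b))

# Here a classical solver stands in for whatever algorithm or device is under test.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 8))
b = rng.normal(size=8)
x = np.linalg.solve(a, b)
print(relative_residual(a, x, b))   # very small for a well-conditioned system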

Developing test procedures.

The Quantum Benchmarking program will discover novel methods for testing and predicting performance against the key metrics that quantify core enabling computational capabilities. The Quantum Benchmarking program recognizes that some metrics for measuring quantum computational capability may not be testable using finite classical resources. Instead, a new type of quantum device, referred to here as a quantum benchmarking testbed (QBT), may be needed to test certain metrics. A QBT would act as a synthetic problem with tunable size, complexity, and key parameters and serve as a simulation target. If a quantum computer under test can correctly simulate the behavior of the QBT, it passes that benchmarking challenge. Note that if all key metrics associated with a particular grouping of application instances can be tested using existing or realizable classical resources, then those classical resources may be the best means for testing progress toward realizing the core enabling computational capabilities.
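A minimal sketch of how a pass/fail check against such a testbed might look, assuming the comparison is between output distributions; the distance measure, tolerance, and the two example distributions are all invented for illustration.

import numpy as np

def total_variation_distance(p: np.ndarray, q: np.ndarray) -> float:
    # Half the L1 distance between two discrete probability distributions.
    return 0.5 * float(np.abs(p - q).sum())

def passes_qbt_check(target: np.ndarray, measured: np.ndarray, tol: float = 0.05) -> bool:
    # Pass if the device under test reproduces the testbed's tunable synthetic
    # target distribution to within the chosen tolerance.
    return total_variation_distance(target, measured) <= tol

# Hypothetical 3-qubit output distributions over the 8 basis states.
target = np.array([0.30, 0.05, 0.05, 0.10, 0.10, 0.05, 0.05, 0.30])
measured = np.array([0.28, 0.06, 0.05, 0.11, 0.09, 0.06, 0.06, 0.29])
print(passes_qbt_check(target, measured))   # -> True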

 

Creating benchmarks.

The Quantum Benchmarking program will create benchmarks that can act as guidestars for research and development in quantum computation. More specifically, it will create scalable and predictive benchmarks that not only make clear when a particular performance threshold has been reached, but also quantify progress towards important thresholds. The benchmarks will be robust and multi-dimensional, embracing problem complexity: they will have many input parameters to define problem scope and scale, and even more output parameters that report not just whether a system under test succeeded or failed, but exactly how it succeeded or failed along as many relevant axes as possible. If successful, these benchmarks will give quantum computer developers rich debugging information.
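An illustrative sketch of what "many input parameters, even more output parameters" could look like in practice; every field name and value here is an assumption, not the program's actual benchmark schema.

from dataclasses import dataclass

@dataclass
class BenchmarkInputs:
    # Parameters defining problem scope and scale.
    problem_class: str
    problem_size: int
    target_accuracy: float
    time_budget_hours: float

@dataclass
class BenchmarkOutputs:
    # Multi-axis report on how the system under test succeeded or failed.
    passed: bool
    achieved_accuracy: float
    wall_clock_hours: float
    qubits_used: int
    two_qubit_gate_count: int
    dominant_error_source: str

spec = BenchmarkInputs(
    problem_class="ground-state energy estimation",
    problem_size=64,
    target_accuracy=1e-3,
    time_budget_hours=24.0,
)
run = BenchmarkOutputs(
    passed=False,
    achieved_accuracy=3.2e-2,
    wall_clock_hours=11.5,
    qubits_used=56,
    two_qubit_gate_count=180_000,
    dominant_error_source="two-qubit gate infidelity",
)
print(spec)
print(run)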

Estimating hardware resources.

The Quantum Benchmarking program will create tools for estimating the computational-paradigm-specific hardware resources needed to achieve specific benchmark performance thresholds. Existing estimates of resource scaling with problem size are often limited to the leading term in an asymptotic expansion of problem complexity. In some cases, quantum computers are assumed to be useful if quantum advantage scales exponentially with problem size. Of course, if the constant and polynomial scaling terms outweigh the exponential scaling terms for problem sizes of interest, quantum advantage may not exist.

 

The Quantum Benchmarking program will provide estimates of these additional scaling terms by predicting not just the quantum resources needed to achieve a new computational capability but the ancillary classical resources (for example, from decoders and schedulers) needed to support the proposed quantum system. Estimates will necessarily be tied to a particular quantum computing technology, e.g., superconducting quantum computers or photonic quantum computers because the hardware resources being estimated will vary dramatically with different hardware paradigms.
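The warning about constant and polynomial terms can be made concrete with a toy runtime model; the coefficients below are invented purely for illustration and stand in for overheads such as error-correction cycles, decoders, and schedulers.

def classical_time(n: int) -> float:
    # Toy classical cost: tiny constant factor but exponential scaling.
    return 1e-9 * 2 ** n

def quantum_time(n: int) -> float:
    # Toy quantum cost: only polynomial scaling in problem size n, but large
    # constant and polynomial overheads from the surrounding classical hardware.
    return 1e-3 * n ** 3 + 10.0

# Smallest problem size at which the quantum machine actually wins.
crossover = next(n for n in range(1, 200) if quantum_time(n) < classical_time(n))
print(crossover)   # quantum advantage only appears beyond this problem size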

 

“It’s really about developing quantum computing yardsticks that can accurately measure what’s important to focus on in the race toward large, fault-tolerant quantum computers,” Altepeter said. “Building a useful quantum computer is really hard, and it’s important to make sure we’re using the right metrics to guide our progress towards that goal. If building a useful quantum computer is like building the first rocket to the moon, we want to make sure we’re not quantifying progress toward that goal by measuring how high our planes can fly.”

 

Awards

The Defense Advanced Research Projects Agency (DARPA) announced contracts in February 2022 to Raytheon BBN in Cambridge, Mass., and to the University of Southern California (USC) in Los Angeles for the Quantum Benchmarking program. Raytheon BBN and USC will investigate application-specific and hardware-agnostic benchmarks to test for the best applications of quantum computers.

 

Raytheon BBN won a $2.9 million contract on 24 Feb. 2022, and USC won a $4.1 million contract on 23 Feb. 2022 for the DARPA Quantum Benchmarking program. Raytheon BBN and USC will focus on two technical areas: hardware-agnostic approaches and hardware-specific approaches. In creating these benchmarks, the two organizations are charged with developing test procedures for quantifying progress in research; creating scalable, multi-dimensional benchmarks; developing tools for estimating the quantum hardware resources necessary for hard-to-achieve military capabilities; analyzing applications that require large-scale, universal, fault-tolerant quantum computers; and estimating the levels of classical and quantum resources necessary to execute quantum algorithms at large scale.

 


The two organizations will analyze applications that require large-scale, universal, fault-tolerant quantum computers; estimate the classical and quantum resources necessary to execute quantum algorithms at large scale; and examine applications of fault tolerance and error correction, as well as nontraditional quantum computing paradigms.

 
