Quantum technologies offer ultra-secure communications, sensors of unprecedented precision, and computers that, for certain tasks, are exponentially more powerful than any supercomputer. Richard Feynman’s original vision for quantum computing sprang from the insight that there are hard problems, e.g. in quantum physics, quantum chemistry, and materials science, that are nearly intractable on classical computing platforms but that might be successfully modeled using a universal quantum computer. A universal, fault-tolerant quantum computer that can efficiently solve problems such as integer factorization and unstructured database search will require millions of qubits with low error rates and long coherence times.

The field of Quantum Computing (QC) has seen considerable progress in recent years, both in the number of qubits that can be physically realized and in the formulation of new quantum search and optimization algorithms. However, numerous challenges remain before QC can be usefully applied to real-world problems. These include challenges of scale, environmental interactions, input/output, qubit connectivity, quantum memory (or lack thereof), quantum state preparation and readout, and numerous other practical and architectural challenges associated with interfacing to the classical world.

While the experimental advance toward realizing such devices may take decades of research, noisy intermediate-scale quantum (NISQ) computers already exist. These computers are composed of hundreds of noisy qubits, i.e. qubits that are not error-corrected, and therefore perform imperfect operations within a limited coherence time.

John Preskill, the theoretical physicist at Caltech, coined the term NISQ for a keynote speech he delivered at Quantum Computing for Business on 5 December 2017. “We are now entering a pivotal new era in quantum technology,” wrote Preskill, adding, “For this talk, I needed a name to describe this impending new era, so I made up a word: NISQ. This stands for Noisy Intermediate-Scale Quantum.”

“Here ‘intermediate scale’ refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. Fifty qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers.”

“Noisy emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term. We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.”
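To see why roughly fifty qubits marks the brute-force threshold, consider the memory a classical state-vector simulator needs: 2^n complex amplitudes for n qubits. A quick back-of-the-envelope calculation (our illustration, not Preskill’s):

```python
# Memory needed to hold the full state vector of an n-qubit register:
# 2**n complex amplitudes at 16 bytes each (complex128).
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**40:,.1f} TiB")

# 30 qubits:      0.0 TiB  (~16 GiB: a big workstation)
# 40 qubits:     16.0 TiB  (a large cluster)
# 50 qubits: 16,384.0 TiB  (~16 PiB: beyond any supercomputer's memory)
```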

In the search for quantum advantage with these devices, algorithms have been proposed for applications in various disciplines spanning physics, machine learning, quantum chemistry and combinatorial optimization. DARPA is looking to exploit quantum information processing before fully fault-tolerant quantum computers exist. Fault-tolerant means that the computer can continue to function correctly even when some of its parts stop working properly. On February 27, 2019, DARPA announced its Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.

### Computing with Noisy Intermediate Scale Quantum computers (NISQ)

DARPA seeks to challenge the community to address the fundamental limits of quantum computing and to identify where quantum computing can meaningfully address hard science and technology problems, thus realizing Feynman’s original vision. Both near-term (next few years) and longer-term (next few decades) capabilities and their limitations are of interest. The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is seeking information on new capabilities that could be enabled by current and next-generation quantum computers for understanding complex physical systems, improving artificial intelligence (AI) and machine learning (ML), and enhancing distributed sensing.

The principal objective of the ONISQ program is to demonstrate quantitative advantage of Quantum Information Processing (QIP) over the best classical methods for solving combinatorial optimization problems using Noisy Intermediate-Scale Quantum (NISQ) devices. In addition, the program will identify families of problem instances in combinatorial optimization where QIP is likely to have the biggest impact.

Also of interest to this RFI is the possibility of adapting to classical computers some of the techniques being developed for handling quantum data, both at the algorithm level and in protocols for loading, storing, and transferring data. These “quantum-inspired” approaches may provide novel capabilities in terms of efficiency and speed.

Los Alamos National Laboratory is developing a method to invent and optimize algorithms that perform useful tasks on noisy quantum computers. The main idea is to reduce the number of gates in an attempt to finish execution before decoherence and other sources of error have a chance to unacceptably reduce the likelihood of success. The team uses machine learning to translate, or compile, a quantum circuit into an optimally short equivalent that is specific to a particular quantum computer. Until recently, they employed machine-learning methods on classical computers to search for shortened versions of quantum programs. In a recent breakthrough, they devised an approach that uses currently available quantum computers to compile their own quantum algorithms, avoiding the massive computational overhead required to simulate quantum dynamics on classical computers.
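The effect of such compilation, shrinking a circuit to an equivalent with fewer gates, can be illustrated with an off-the-shelf transpiler. The sketch below uses Qiskit’s standard optimization passes as a stand-in for the machine-learning compiler described above:

```python
from qiskit import QuantumCircuit, transpile

# A small circuit with obvious redundancy: two back-to-back CNOTs cancel,
# and a zero-angle rotation does nothing.
circ = QuantumCircuit(2)
circ.h(0)
circ.cx(0, 1)
circ.cx(0, 1)    # cancels the previous CNOT
circ.rz(0.0, 1)  # no-op rotation

# optimization_level=3 enables the heaviest gate-cancellation and
# resynthesis passes; on real backends it also favors less-noisy qubits.
optimized = transpile(circ, optimization_level=3)
print("original gates: ", circ.size())       # 4
print("optimized gates:", optimized.size())  # expected: 1 (just the Hadamard)
```

A shorter circuit finishes sooner, so decoherence has less time to corrupt the result, which is exactly the rationale in the Los Alamos work.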

Because this approach yields shorter algorithms than the state of the art, it consequently reduces the effects of noise. This machine-learning approach can also compensate for errors in a manner specific to the algorithm and hardware platform. It might find, for instance, that one qubit is less noisy than another, so the algorithm preferentially uses the better qubits. In that situation, machine learning creates a general algorithm to compute the assigned task on that computer using the fewest computational resources and the fewest logic gates. Thus optimized, the algorithm can run longer.

This method, which has worked in a limited setting on quantum computers now available to the public in the cloud, is also designed to scale up algorithms for large problems on the larger quantum computers envisioned for the future.

One other approach that has received some attention is the possibility of new capabilities unleashed by combining limited quantum computers with either existing quantum sensors or classical computing resources. Such a combination might bypass the problems of state preparation and interfacing to classical memory. It has been posited that, by aggregating quantum data from distributed sensors, a quantum computer may improve performance beyond what is classically achievable.

One possible solution, the researchers suggest, will be to divide a problem between classical and quantum computers. The classical computers will solve some pieces of the puzzle, and the quantum processors will handle others. Herold, of the Georgia Tech Research Institute (GTRI), describes a theoretical scenario in which a cloud computing resource decides how to divvy up a problem between classical and quantum computers.

“You might have these classical heuristics running and have cloud access to some quantum hardware and then when the classical heuristics struggle, maybe that quantum hardware is utilized for that problem,” Herold posits. “Or, it may be possible to break up problems into chunks and then send some chunks to the quantum processor—the really hard problems—and then put them back together in classical processing afterwards. There are a lot of ways that it could look, and we’re going to be figuring out how best to do that in the next few years.”

### NISQ to attack combinatorial optimization problems

An issue of particular interest is the potential impact of QC on “second wave” AI/ML optimization. ML has shown significant value in a broad range of real-world problems, but training times are long (due to the size and variety of the data needed for learning) and the network design space is vast (due to a paucity of detailed analysis and theory for ML/deep learning (DL) systems). It has been suggested that QC could significantly decrease the training time of currently standard ML approaches by providing quantum speedup on optimization subroutines.

Tatjana Curcic, program manager within DARPA’s Defense Sciences Office, agrees that combinatorial optimization problems are widespread. “Optimization is everywhere. It’s in electronics. It’s in logistics. It’s in how manufacturing works, how you optimize the work process in a manufacturing plant. It’s everywhere,” she says. She also cautions, however, that as a basic research program, ONISQ is not attempting to solve any particular problem. Instead, the goal is to conduct foundational research that scientists can then build upon.

According to DARPA, “Solving combinatorial optimization problems – with their mind-boggling number of potential combinations – is of significant interest to the military. One potential application is enhancing the military’s complex worldwide logistics system, which includes scheduling, routing, and supply chain management in austere locations that lack the infrastructure on which commercial logistics companies depend. ONISQ solutions could also impact machine learning, coding theory, electronic fabrication, and protein folding.”

Planning and scheduling also are combinatorial optimization problems. “Let’s say, given a group of nurses in a hospital, how do I meet everyone’s constraints and build a valid schedule where I can cover all of my shifts and deal with everyone who has been on vacation or whatnot?” Herold offers. Herold is part of a GTRI team working with the Defense Advanced Research Projects Agency (DARPA) on a new program known as ONISQ, for Optimization with Noisy Intermediate-Scale Quantum devices. He adds that combinatorial optimization problems quickly become too complex for humans. “If you look at really small examples, it feels like doing a puzzle. They’re fun for your brain when they’re small, but rapidly you get to these big problems that are intractable for people to solve on their own.”
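Scheduling problems of this kind are typically cast as quadratic unconstrained binary optimization (QUBO), the input format most NISQ optimization approaches target. The toy sketch below (our illustration, not GTRI’s formulation) encodes a three-nurse, two-shift instance as penalty terms and brute-forces it; on quantum hardware the same cost function would be mapped to qubits and sampled rather than enumerated:

```python
from itertools import product

NURSES, SHIFTS = 3, 2
P = 10  # penalty weight for violated constraints

def cost(x):
    """QUBO-style cost over binary variables x[n][s] = 1 if nurse n works shift s."""
    c = 0
    for s in range(SHIFTS):
        coverage = sum(x[n][s] for n in range(NURSES))
        c += P * (coverage - 1) ** 2   # want exactly one nurse per shift
    c += P * x[2][1]                   # nurse 2 is on vacation for shift 1
    c += sum(sum(row) for row in x)    # mild preference for fewer assignments
    return c

# Brute force over all 2**(3*2) = 64 binary assignments; real instances
# explode combinatorially, which is where quantum sampling is hoped to help.
best, best_cost = None, float("inf")
for bits in product((0, 1), repeat=NURSES * SHIFTS):
    x = [[bits[n * SHIFTS + s] for s in range(SHIFTS)] for n in range(NURSES)]
    if cost(x) < best_cost:
        best, best_cost = x, cost(x)

print("best schedule:", best, "cost:", best_cost)
```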

Showing that quantum systems can perform better than classical computers for combinatorial optimization problems is a serious challenge, Herold says. “That’s a really tall order. We’re starting in this place where we’ve shown control over two and three or four ions, and to meet their metrics and to have enough resources to solve interesting, real-world problems, we need to extend our hardware to have 10 or 20 ions in a year and offer 50 ions the year after that,” Herold adds. “That’s a real engineering challenge for us. It’s not hard to trap those ions, but to actually have control over them and to make use of all of them is really difficult.”

And the competition is stiff. Governments, including the United States, China, Russia, North Korea and most European nations, are racing to gain a quantum computing advantage. Industry also is interested. In the United States alone, Google, IBM, Intel, Microsoft and a host of smaller companies are investing in quantum computing research. “There are tens of hardware computing companies from major corporations to startups that are developing quantum computing hardware and are also racing to really show a useful quantum advantage,” Herold states. “It’s a real sprint.”

### DARPA DSO’s RFI responses may address one or multiple challenge areas

### Challenge 1: Fundamental limits of quantum computing

In order to establish such limits, respondents should address some of the following relevant questions:

o What are the near-term wins in mapping QC to hard science modeling problems? We impose no constraints on what is meant by quantum computing; e.g. this could be a collection of physical or logical qubits, a quantum annealing machine, a quantum computational liquid, or some other quantum emulation platform that can serve as a proxy for the system to be modeled.

o Address the questions of scale. How many degrees of freedom in the problem of interest must be mapped to the QC platform to realistically model the system? At what scale do known classical computation platforms and algorithms become inadequate, and what are the potential gains brought by QC?

o How should the problem be framed; i.e. what are the questions to be addressed in modeling the physical system with a QC proxy system, and how should the quantum states be initialized and read out? Are there any new algorithms to usefully map the real-world quantum system to the proxy system?

o What are the known fundamental limitations to QC and scaling, including limits due to decoherence, degeneracy, environmental interactions, input-output limitations, and limited connectivity in the qubit-to-qubit interaction Hamiltonian? How will coherence times scale with the size of the QC system? Discuss error correction techniques and their scaling. How will errors scale with the size of the system? How valid are assumptions of uncorrelated noise?

o What is the real speedup for known QC algorithms (e.g. HHL for linear systems, Grover for unstructured search), taking into account the maximum realizable size N of the system, quantum state preparation and readout, limited connectivity in the Hamiltonian, and interfacing to classical memory and the classical world?
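This last question can be made concrete with Grover’s search: it needs roughly (π/4)√N oracle queries versus about N/2 expected classically, a quadratic rather than exponential speedup. The raw query counts below (illustrative arithmetic only) show both why the speedup is attractive and why overheads in state preparation, readout, and error rates can eat it for modest N:

```python
import math

# Unstructured search over N items:
#   classical expected queries ~ N/2
#   Grover queries            ~ (pi/4) * sqrt(N)
for exponent in (10, 20, 30):
    N = 2 ** exponent
    classical = N // 2
    grover = math.floor((math.pi / 4) * math.sqrt(N))
    print(f"N = 2^{exponent}: classical ~{classical:,}  Grover ~{grover:,}")

# N = 2^10: classical ~512          Grover ~25
# N = 2^20: classical ~524,288      Grover ~804
# N = 2^30: classical ~536,870,912  Grover ~25,735
```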

### Challenge 2: Hybrid approaches to machine learning

We are interested in approaches that dramatically improve the total time taken to construct a high-performing ML/DL solution by leveraging a hybrid quantum/classical computing approach. For example, a hybrid approach may incorporate a small-scale quantum computer to efficiently implement specific subroutines that require limited resources in a ML/DL task that is being handled by a classical computer. The challenge here is to identify the best approaches for achieving significant speedup as compared to the capabilities of the best known algorithms that run solely on classical computers. Some of the relevant questions are listed below (a minimal sketch of such a hybrid loop follows the list):

o What approaches can be used to efficiently implement ML/DL tasks using a hybrid quantum/classical system using near term and future QC devices? Are there specific tasks for which such approaches are more beneficial than others?

o How does the speedup depend on the size of the available quantum resources (e.g. number of qubits N)?

o What are the challenges in implementing this idea? For example, what issues have to be dealt with in order to interface quantum and classical resources? Can we efficiently transfer data between the classical and quantum processors in order to see any gains in performance?

o Is there a need to develop additional auxiliary technology to implement such approaches?
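To make the hybrid pattern concrete, here is a minimal variational sketch in PennyLane (the framework discussed later in this article): a quantum device evaluates a small parameterized circuit, and a classical optimizer iteratively updates the parameters from the measured expectation values. The two-qubit circuit, cost function, and step count are illustrative assumptions, not anything specified by the RFI:

```python
import pennylane as qml
from pennylane import numpy as np

# Quantum subroutine: a tiny parameterized circuit on a 2-qubit device
# (a simulator here; a hardware backend would slot in the same way).
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def cost(params):
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

# Classical outer loop: gradient descent on the measured cost.
params = np.array([0.1, 0.2], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(50):
    params = opt.step(cost, params)

print("optimized cost:", cost(params))  # approaches -1, the minimum
```

In an ML setting, the measured expectation would feed a loss inside a larger classical model, so only the resource-limited subroutine runs on the quantum processor, which is exactly the division of labor the challenge describes.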

### Challenge 3: Interfacing quantum sensors with quantum computing resources

Some of the relevant questions are:

o What new capabilities can be gained through the combination of a quantum computer and distributed quantum sensors? How large does the quantum computer need to be and how well does it need to operate (e.g. how large a two-qubit gate error can the system tolerate)? How many distributed sensors are needed to see a benefit and what level of performance do they need to have (e.g. operating at the standard quantum limit or near the Heisenberg limit, etc.)? A short sketch of those two scaling limits follows this list.

o What quantum computer platform (e.g. trapped ion qubits, superconducting qubits, etc.) and sensors (atomic clocks, magnetometer, etc.) could potentially be leveraged in this approach?

o What are the potential roadblocks to making a demonstration of this approach possible?

o Are there any auxiliary components that need to be developed prior to making a demonstration of this approach?

o Are there non-performance capabilities, such as security or trust, to be gained from entangled sensors?

o Are there important implications of the location of the sensors (e.g. relativistic effects) or the topology of the devices to realize the potential new capabilities?
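On the sensor-performance question above: the standard quantum limit (SQL) and the Heisenberg limit are the textbook bounds on how measurement uncertainty falls with the number N of probes, scaling as 1/√N for unentangled sensors and 1/N for fully entangled ones. The sketch below simply tabulates those two scalings:

```python
import math

# Phase-estimation uncertainty versus number of probes/sensors N:
#   standard quantum limit (unentangled probes): ~ 1/sqrt(N)
#   Heisenberg limit (entangled probes):         ~ 1/N
print(f"{'N':>6} {'SQL ~1/sqrt(N)':>16} {'Heisenberg ~1/N':>16}")
for N in (1, 10, 100, 1000):
    print(f"{N:>6} {1 / math.sqrt(N):>16.4f} {1 / N:>16.4f}")
```

The extra factor of 1/√N is the payoff that entangling distributed sensors, possibly orchestrated by a quantum computer, is hoped to deliver.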

### Challenge 4: QC-inspired algorithms and processes that are applicable to classical computers

o What systematic processes can be learned from the QC-inspired algorithms to date? Are there recurring themes and structures that have arisen in these new solutions?

o Are there approaches to identify likely classical algorithm improvements for problems where a quantum advantage has been shown? In other words, can we predict these kinds of inspirations?

o As we learn about interfacing data and computation from challenges 1, 2, and 3, do we learn better classical architectures for mixing data input/output, memory, and computing together?

### DARPA Awards

The four-year program officially kicked off in March 2020 and is divided into two phases. It includes two kinds of research: hardware and theoretical. Early this year, DARPA awarded three contracts to teams led by the University of Tennessee, Clemson University and Lehigh University to explore the theoretical possibilities of hybrid computers working on combinatorial optimization problems. The agency also awarded contracts to teams led by GTRI, Universities Space Research Association (USRA), Presidents & Fellows of Harvard College and ColdQuanta Incorporated to develop quantum-classical computing hardware. Each team is pursuing different potential solutions.

In Technical Area 1, the following performers were selected to demonstrate a hybrid quantum/classical optimization algorithm in a quantum device to solve a specific combinatorial optimization problem:

Georgia Tech Applied Research Corporation

Universities Space Research Association

Presidents & Fellows of Harvard College

ColdQuanta, Inc.

The GTRI team, which includes the National Institute of Standards and Technology’s Ion Storage Group, is the only team specializing in trapped-ion research. “Our project is called Optimization with Trapped Ion Qubits, which has a snappy acronym, OPTIQ,” Herold states.

A couple of years ago, GTRI demonstrated universal control of as many as four qubits and followed that with a demonstration of a small quantum algorithm that Herold describes as a “toy algorithm.” The DARPA program is a natural extension of that previous research. “The goal there is to build out the hardware to the point we have enough ions and control over them that we can actually solve problems which are interesting in the real world and aren’t just toys,” Herold says.

### $2.1M DARPA grant puts Lehigh Univ. optimization experts at vanguard of quantum computing

Lehigh University will soon be on the front lines of the quantum computing revolution. With support from a recently awarded $2,128,658 research grant from the Defense Advanced Research Projects Agency (DARPA), an international group led by industrial and systems engineering (ISE) faculty members Tamás Terlaky, Luis Zuluaga, and Boris Defourny will work on optimization algorithms in quantum computing.

“We want to explore the power of existing quantum computers, and those that are predicted to exist in the future,” says Terlaky, who is a member of the Quantum Computing and Optimization Lab (QCOL) in the P.C. Rossin College of Engineering and Applied Science. The lab was established in 2019 to accelerate the development of quantum computing optimization methodology, and associated faculty launched the university’s first quantum computing course this spring. “We’ll be looking at combinatorial optimization problems for quantum computing with the goal that, in four years, we’ll be able to demonstrate that quantum computers are surpassing the capabilities of classical computers, at least on some problems.”

Terlaky says their work is related to the theory of quantum supremacy, which, very broadly, states that quantum computers will be exponentially better than current silicon computers at quickly solving problems that are practically unsolvable today. These are problems in fields as diverse as finance, security, genetics, transportation, manufacturing, and machine learning, problems that model practical, binary questions such as whether to purchase or not purchase, build or not build, etc. There is a long way to go to achieve that end. Current quantum computers are about where silicon-based computer chips were in the 1950s, says Terlaky, who is also affiliated with Lehigh’s Institute for Data, Intelligent Systems, and Computation (I-DISC).

“In the 50s, we had gym-size computers with very little memory, and very little processing power,” he says. “A lot of programming was written in assembly language, getting the machine the codes, and specifying every gate and route for the information. At this point with quantum computers, the programming language is very similar. It’s not a high-level language where you can write a complicated code easily. So all this software has to develop along with the upcoming hardware.” Until recently, he says, most of the work in this area was being done by theoretical physicists, electrical engineers, computer engineers, and theoretical computer scientists. But the theory of quantum supremacy is essentially one big optimization problem.

“And we are the optimizers,” says Terlaky. “Very few people in the optimization community have looked at these problems so far. We are definitely the first sizable group to do so.” Additional researchers involved in the DARPA project include Giacomo Nannicini (IBM T.J. Watson Research Center), Stefan Wild (NAISE, Evanston, IL, and Argonne National Lab), Alain Sarlette (INRIA, Paris, France), Xiu Yang (ISE, Lehigh University), and Monique Laurent (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands). Terlaky says the grant reflects the team’s standing as one of the best in the world at what they do. And he says the collaborative, global reach of the team reflects his own professional ethos.

### Universities Space Research Association to Lead a DARPA Project on Quantum Computing, reported in March 2020

Universities Space Research Association (USRA) today announced that DARPA has awarded a contract to the organization and its partners, Rigetti Computing and the NASA Quantum Artificial Intelligence Laboratory (QuAIL), to work as a team to advance the state of the art in quantum optimization. USRA, as the prime contractor of the award, will manage the collaboration.

The collaboration will focus on developing a superconducting quantum processor, hardware-aware software and custom algorithms that take direct advantage of the hardware advances to solve scheduling and asset allocation problems. In addition, the team will design methods for benchmarking the hardware against classical computers to determine quantum advantage.

USRA Senior Vice President Bernie Seery noted, “This is a very exciting public-private partnership for the development of forefront quantum computing technology and the algorithms that will be used to address pressing, strategically significant challenges. We are delighted to receive this award and look forward to working with our partner institutions to deliver value to DARPA.”

In particular, the work will target scheduling problems whose complexity goes beyond what has been done so far with the quantum approximate optimization algorithm (QAOA). USRA’s Research Institute for Advanced Computer Science (RIACS) has been working on quantum algorithms for planning and scheduling for NASA QuAIL since 2012. “The innovations on quantum gates performed by Rigetti coupled perfectly with the recent research ideas at QuAIL, enabling an unprecedented hardware-theory co-design opportunity,” explains Dr. Davide Venturelli, USRA Associate Director for Quantum Computing and project PI for USRA. Understanding how to use quantum computers for scheduling applications could have important implications for national security, such as real-time strategic asset deployment, as well as commercial applications including global supply chain management, network optimization or vehicle routing.
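For readers unfamiliar with QAOA: it alternates a problem-dependent cost layer with a mixing layer, and a classical optimizer tunes the layer angles. Below is a minimal sketch following the pattern of PennyLane’s QAOA tooling on a toy MaxCut instance, not the scheduling problems targeted here:

```python
import networkx as nx
import pennylane as qml
from pennylane import numpy as np

# Toy problem: MaxCut on a triangle graph.
graph = nx.Graph([(0, 1), (1, 2), (0, 2)])
cost_h, mixer_h = qml.qaoa.maxcut(graph)  # cost and mixer Hamiltonians

dev = qml.device("default.qubit", wires=3)
depth = 2  # number of (cost, mixer) layer pairs

def qaoa_layer(gamma, alpha):
    qml.qaoa.cost_layer(gamma, cost_h)
    qml.qaoa.mixer_layer(alpha, mixer_h)

@qml.qnode(dev)
def cost(params):
    for w in range(3):
        qml.Hadamard(wires=w)  # start in the uniform superposition
    qml.layer(qaoa_layer, depth, params[0], params[1])
    return qml.expval(cost_h)

# Classical optimizer tunes the gammas (params[0]) and alphas (params[1]).
params = np.array([[0.5, 0.5], [0.5, 0.5]], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(100):
    params = opt.step(cost, params)

print("optimized cost:", cost(params))  # lower energy means a better cut
```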

### Rigetti Computing Wins $8.6 million DARPA Grant to Demonstrate Practical Quantum Computing

Rigetti Computing has secured an $8.6M contract to help the Defense Advanced Research Projects Agency support a quantum technology research and development effort. The company said Thursday it will work under a collaboration between DARPA, NASA Quantum Artificial Intelligence Laboratory and the Universities Space Research Association to create a quantum-powered full-stack computing system.

The collaboration will focus on developing a superconducting quantum processor, hardware-aware software, and custom algorithms based on real-world scenarios. The work will leverage Rigetti’s Fab-1—the only dedicated quantum integrated circuit foundry in the U.S.—to manufacture chips that scale beyond 100 qubits. In addition, the NASA-USRA team will design methods for benchmarking the hardware against classical computers to determine quantum advantage. The effort aims to help the national security community address scheduling complexities in supply chain management, network activities and other strategic operations.

### Argonne Receives Two Awards from DARPA for Quantum Information Science

The U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago recently received two awards from the Defense Advanced Research Projects Agency (DARPA) in collaboration with industry and academic partners. The awards will fund two multi-year projects in an effort to secure the nation’s leadership in the field of quantum information science.

The DARPA awards are a part of the ONISQ program — Optimization with Noisy Intermediate-Scale Quantum devices — aimed at developing novel quantum algorithms and quantum systems that can scale to hundreds or thousands of qubits with high performance and reliability. The objective is to show the quantum advantage of quantum-hybrid systems over classical systems for a range of difficult combinatorial optimization problems, including resource allocation, logistics and image recognition.

The first award is with ColdQuanta, a quantum atomics company. “With this award, ColdQuanta, Argonne and our other partners will develop a scalable, cold-atom-based quantum computing hardware and software platform, and demonstrate quantum advantage on real-world problems,” said Tom Noël, group leader for quantum computing at ColdQuanta. “We believe our collective team has the expertise and passion to achieve the project objectives and are thrilled to have been awarded the contract from DARPA.”

The second award is with Ilya Safro, an Associate Professor of Computer Science at Clemson University. The goal of the project is to develop a family of hybrid quantum-classical multilevel algorithms for efficiently solving combinatorial optimization problems on Noisy Intermediate-Scale Quantum (NISQ) devices. This award is valued at about $1 million; the first, ColdQuanta-led award is valued at about $7.4 million. Argonne’s partners in that collaboration are ColdQuanta, the University of Wisconsin–Madison, Raytheon Technologies, NIST Gaithersburg, the University of Colorado Boulder, the University of Innsbruck and Tufts University.

“Exploring the ways to tackle combinatorial optimization problems using hybrid quantum-classical algorithms is one of the most exciting research areas of quantum information science, which is aimed at finding practical applications for quantum devices,” Safro said. “The Argonne-Clemson collaboration supported by DARPA will give an excellent opportunity to several students not only to study quantum computing and solve the problems in national security, but also work shoulder-to-shoulder with world-class experts at Argonne National Laboratory.”

“What is particularly exciting about the ONISQ teams is that quantum information scientists will be working side-by-side with experts in classical optimization theory,” said Tatjana Curcic, program manager in DARPA’s Defense Sciences Office. “Together they will investigate where the hybrid quantum/classical approach will have the biggest payoff.”

“We will push the community toward building bigger and better quantum processors—bigger, meaning with more qubits, and better, meaning qubits that hopefully are going to be less noisy,” Curcic says. The teams also will have to implement optimization algorithms, characterize them and compare the solutions to the best-known classical solutions. “The hope there is that we will demonstrate an advantage of quantum processing,” she adds.

### ColdQuanta Cold Atom Quantum Computer Technology

In April 2021, the Defense Advanced Research Projects Agency (DARPA) selected ColdQuanta to develop a scalable, cold-atom-based quantum computing hardware and software platform that can demonstrate quantum advantage on real-world problems. The work is being led by ColdQuanta Chief Scientist Mark Saffman. In October, ColdQuanta announced cloud access to a quantum matter system that lets users generate, manipulate, and experiment with ultracold matter.

ColdQuanta’s leadership team has been active in building the emerging quantum industry. CEO Bo Ewald served as President of D-Wave International, was an early member of the Quantum Industry Coalition, and is currently leading IEEE quantum terminology and performance characterization standardization efforts. Founder and CTO Dana Anderson sits on the Steering Committee of the Quantum Economic Development Consortium (QED-C), established with support from the National Institute of Standards and Technology (NIST) as part of the U.S. government’s strategy for advancing quantum information science.

“ColdQuanta has successfully developed and deployed many kinds of quantum systems, all based on our Quantum Core platform,” said Bo Ewald. “This means that most of the technology needed for cold atom quantum computing has already been validated by customers. This gives us a significant advantage in the race to deliver a quantum computer that can address some of the most complex computing challenges we face today.”

According to Bob Sorensen, Chief Analyst for Quantum Computing, Hyperion Research, “ColdQuanta’s use of cold atom quantum computing opens up a range of new possibilities in discrete qubit performance, dynamically reconfigurable interconnect schemes, and perhaps most important, the potential to scale to large numbers of qubits per individual quantum processor. ColdQuanta is committed to a long-term road map that leads to a full-stack quantum computing solution and is already taking the right steps to ensure that their unique hardware can be readily accessible and programmable to a wide base of potential users. The next key step will be the demonstration of cold atoms to address a compelling and real-world use-case that can help drive this technology to the forefront of the currently crowded field of quantum computing hardware options.”

### Computing with Cold Atoms

The ColdQuanta quantum computer is built around a unique glass cell that maintains a vacuum and houses a checkerboard-like array of cesium atoms, each of which acts as an individual qubit. Lasers and other photonic technologies cool the atoms to ten millionths of a degree above absolute zero, then initialize the qubits and orchestrate computations. The final state of the qubit array is photographed and analyzed.

Over the past several years, early-stage quantum computers have employed different approaches with superconducting circuits, trapped ions, photons, and other materials used as qubits. While there are pros and cons to each method, ColdQuanta’s approach has significant advantages over other implementations:

- The qubits are all atoms of the same element and are identical, so there are no manufacturing defects.
- The qubits are cooled to ten millionths of a degree above absolute zero, which is much colder than other technologies. Quantum effects typically operate better and longer at colder temperatures. This combination allows for longer and more complex computations.
- Two-dimensional cold atom arrays scale from tens to thousands of qubits, enabling bigger computations to address real-world problems. The DARPA ONISQ program, awarded to ColdQuanta, calls for a demonstration of a system with over 1,000 qubits running a Department of Defense application.
- Gates can entangle distant qubits, allowing larger logical circuits on the same qubit array. This should allow more computational work to be accomplished per unit time with more advanced qubit connectivity.
- Advanced vacuum cell technology does away with the need for cryogenics.
- The computational platform is dynamically reconfigurable, which shortens the development cycle and leads to quicker system improvement.

### Xanadu awarded DARPA grant to develop novel quantum compiler for NISQ-based machines in July 2021

Xanadu, a full-stack quantum computing company developing quantum hardware and software solutions, has been awarded a Defense Advanced Research Projects Agency (DARPA) grant. The grant will enable Xanadu to develop a unique general-purpose “circuit-cutting” compiler that can automatically break a circuit down into a multi-circuit hybrid model, leveraging both classical and quantum computing, an approach well suited to near-term quantum computers.

“With PennyLane, these complex hybrid models can be run for the user seamlessly on the quantum hardware or simulators of their choice,” said Nathan Killoran, who heads up Xanadu’s Quantum Software & Algorithms team. “Using these tools, we plan to run quantum algorithms which would natively require 100+ qubits using quantum hardware and simulators containing only 10-30 qubits.”

Xanadu created one of the world’s first open-source software platforms for quantum computers, known as PennyLane (https://pennylane.ai). PennyLane allows users to connect quantum computing hardware and software from key hardware vendors, including Xanadu, IBM, Google, IonQ, Rigetti, and Microsoft.
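As a rough illustration of the circuit-cutting idea, recent PennyLane releases expose a `qml.cut_circuit` transform in which `qml.WireCut` marks where a wire may be severed; the fragments then run on a smaller device and are recombined by classical post-processing. The sketch below follows the pattern of PennyLane’s documentation and is not Xanadu’s DARPA deliverable; API details may vary across versions:

```python
import pennylane as qml

# A 2-wire device, even though the logical circuit below spans 3 wires:
# the cut splits it into fragments that each fit on the device.
dev = qml.device("default.qubit", wires=2)

@qml.cut_circuit          # recombine fragment results classically
@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    qml.RY(0.9, wires=1)
    qml.RX(0.3, wires=2)
    qml.CZ(wires=[0, 1])
    qml.WireCut(wires=1)  # sever wire 1 at this point
    qml.CZ(wires=[1, 2])
    return qml.expval(qml.PauliZ(2))

print(circuit(0.5))
```

The classical post-processing cost grows exponentially with the number of cuts, so the technique trades classical overhead for a reduction in the qubits required, which is how 100+ qubit computations could run on 10-30 qubit hardware.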

Xanadu will leverage the expertise of its in-house team of dedicated quantum programmers and scientists, whose work in quantum computing is globally recognized, to carry out the DARPA-funded research project over a twenty-four-month period. “If successful, this project will have a wide impact on the entire community working with present-day quantum computers,” said Christian Weedbrook, the company’s founder and CEO. “It will allow everyone to run larger-scale quantum computations than they currently can—without needing access to more powerful quantum processors.” This is Xanadu’s second grant from DARPA, after successfully completing an initial grant on quantum machine learning using PennyLane.

### USRA-Rigetti-NASA team advances to DARPA ONISQ Phase 2

The Defense Advanced Research Projects Agency recently funded the second phase of a quantum computing project that aims to expand the utility of emerging technology, according to one of the lead researchers on the project.

The second phase of the Georgia Tech Research Institute-led project brought its funding total to $9.2 million, allowing the scientists to run additional experiments on a quantum computing system configured to potentially string together more computing units than ever before.

In the next two and a half years, the team will continue to test and evaluate these solvers using operational metrics, leveraging internal resources as well as the large body of literature and products developed by the scientific and private-sector community on benchmarking and detecting quantum advantage. The collaboration has so far produced more than ten scientific papers, published, presented at international conferences, or currently under review. The ONISQ program also is an important part of USRA’s tight collaboration with NASA under the NASA Academic Mission Services contract.

### References and Resources also include:

https://blogs.scientificamerican.com/observations/the-problem-with-quantum-computers/

https://www.afcea.org/content/darpas-quantum-quest-may-leapfrog-modern-computers