
DARPA ONISQ to exploit quantum computers for improving artificial intelligence (AI), enhancing distributed sensing, and improving military logistics

Quantum technologies offer ultra-secure communications, sensors of unprecedented precision, and computers that are exponentially more powerful than any supercomputer for certain tasks. Richard Feynman’s original vision for quantum computing sprang from the insight that there are hard problems, e.g. in quantum physics, quantum chemistry, and materials, that are nearly intractable on classical computation platforms but that might be successfully modeled using a universal quantum computer. A universal fault-tolerant quantum computer that can efficiently solve problems such as integer factorization and unstructured database search requires millions of qubits with low error rates and long coherence times.

 

The field of Quantum Computing (QC) has seen considerable progress in recent years, both in the number of qubits that can be physically realized and in the formulation of new quantum search and optimization algorithms. However, numerous challenges remain before QC can usefully solve real-world problems. These include challenges of scale, environmental interactions, input/output, qubit connectivity, quantum memory (or the lack thereof), quantum state preparation and readout, and numerous other practical and architectural challenges associated with interfacing to the classical world.

 

While realizing such devices may take decades of further research, noisy intermediate-scale quantum (NISQ) computers already exist. These computers are composed of hundreds of noisy qubits, i.e. qubits that are not error-corrected, and therefore perform imperfect operations within a limited coherence time.

 

John Preskill, a theoretical physicist at Caltech, coined the term NISQ for a keynote speech he delivered at Quantum Computing for Business on 5 December 2017. “We are now entering a pivotal new era in quantum technology,” wrote Preskill, adding, “for this talk, I needed a name to describe this impending new era, so I made up a word: NISQ. This stands for Noisy Intermediate Scale Quantum.”

 

“Here ‘intermediate scale’ refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. Fifty qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers.”

 

“Noisy emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term. We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.”

 

In the search for quantum advantage with these devices, algorithms have been proposed for applications in various disciplines spanning physics, machine learning, quantum chemistry, and combinatorial optimization. DARPA is looking to exploit quantum information processing before fully fault-tolerant quantum computers exist. Fault-tolerant means that if one part of the computer stops working properly, it can still continue to function without going completely haywire. On February 27, 2019, DARPA announced its Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.

Computing with Noisy Intermediate-Scale Quantum (NISQ) computers

DARPA seeks to challenge the community to address the fundamental limits of quantum computing and to identify where quantum computing can relevantly address hard science and technology problems, thus realizing Feynman’s original vision. Both near-term (next few years) and longer-term (next few decades) capabilities and their limitations are of interest. The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is seeking information on new capabilities that could be enabled by current and next-generation quantum computers for understanding complex physical systems, improving artificial intelligence (AI) and machine learning (ML), and enhancing distributed sensing.

 

The principal objective of the ONISQ program is to demonstrate quantitative advantage of Quantum Information Processing (QIP) over the best classical methods for solving combinatorial optimization problems using Noisy Intermediate-Scale Quantum (NISQ) devices. In addition, the program will identify families of problem instances in combinatorial optimization where QIP is likely to have the biggest impact.

 

Also of interest to this RFI is the possibility of adapting to classical computers some of the techniques being developed for handling quantum data (both at the algorithm level and in protocols for loading, storing, and transferring data). These “quantum-inspired” approaches may provide novel capabilities in terms of efficiency and speed.

 

Los Alamos National Laboratory is developing a method to invent and optimize algorithms that perform useful tasks on noisy quantum computers. The main idea is to reduce the number of gates in an attempt to finish execution before decoherence and other sources of error have a chance to unacceptably reduce the likelihood of success. The researchers use machine learning to translate, or compile, a quantum circuit into an optimally short equivalent that is specific to a particular quantum computer. Until recently, they employed machine-learning methods on classical computers to search for shortened versions of quantum programs. In a recent breakthrough, they devised an approach that uses currently available quantum computers to compile their own quantum algorithms, which avoids the massive computational overhead required to simulate quantum dynamics on classical computers.

 

Because this approach yields shorter algorithms than the state of the art, it reduces the effects of noise. The machine-learning approach can also compensate for errors in a manner specific to the algorithm and hardware platform. It might find, for instance, that one qubit is less noisy than another, so the algorithm preferentially uses the better qubits. In that situation, machine learning creates a general algorithm to compute the assigned task on that computer using the fewest computational resources and the fewest logic gates. Thus optimized, the algorithm can run longer.
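
To make the gate-reduction idea concrete, here is a minimal sketch using Qiskit’s rule-based transpiler; it illustrates the same effect but is not the Los Alamos machine-learning compiler, and the circuit and settings are invented for the example. At the highest optimization level the transpiler cancels redundant gates, and when given a calibrated backend it also chooses a layout that favors the less noisy qubits, mirroring the hardware-specific compensation described above.

```python
# Illustration only: Qiskit's rule-based transpiler standing in for the
# machine-learning compiler described above. optimization_level=3 applies
# aggressive gate cancellation and resynthesis; given a calibrated backend,
# it would also pick a qubit layout favoring the less noisy qubits.
from qiskit import QuantumCircuit, transpile

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(0, 1)          # back-to-back CNOTs form the identity...
circuit.rz(0.3, 0)
circuit.measure_all()

compiled = transpile(circuit, basis_gates=["rz", "sx", "x", "cx"],
                     optimization_level=3)
print("original ops:", circuit.count_ops())
print("compiled ops:", compiled.count_ops())   # ...and are removed
```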

 

This method, which has worked in a limited setting on quantum computers now publicly available on the cloud, is also designed to scale to larger problems on the bigger quantum computers envisioned for the future.

 

One other approach that has received some attention is the possibility of new capabilities unleashed by combining limited quantum computers with either existing quantum sensors or classical computing resources. Such a combination might bypass the problems of state preparation and interfacing to classical memory. In this case it has been posited that, by aggregating quantum data from distributed sensors, a quantum computer may improve performance beyond what is classically achievable.

One possible solution, the researchers suggest, will be to divide a problem between classical and quantum computers. The classical computers will solve some pieces of the puzzle, and the quantum processors will handle others. Herold, a researcher at the Georgia Tech Research Institute (GTRI), describes a theoretical scenario in which a cloud computing resource decides how to divvy up a problem between classical and quantum computers.

“You might have these classical heuristics running and have cloud access to some quantum hardware and then when the classical heuristics struggle, maybe that quantum hardware is utilized for that problem,” Herold posits. “Or, it may be possible to break up problems into chunks and then send some chunks to the quantum processor—the really hard problems—and then put them back together in classical processing afterwards. There are a lot of ways that it could look, and we’re going to be figuring out how best to do that in the next few years.”
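
A toy sketch of that divide-and-dispatch pattern might look as follows; every function name here is a hypothetical stand-in for illustration, not an API from the ONISQ program or GTRI.

```python
# Hypothetical sketch of the pattern Herold describes: chunk a problem, send
# the hard chunks to a (here, simulated) quantum solver, and recombine the
# partial results classically. All names are invented stand-ins.

def classical_heuristic(chunk):
    # cheap greedy step; adequate for the easy chunks
    return sorted(chunk)

def quantum_solver(chunk):
    # placeholder for a QPU call, e.g. a QAOA run on trapped-ion hardware
    return sorted(chunk, reverse=True)

def looks_hard(chunk):
    # stand-in hardness test; a real system might watch heuristic runtime
    return len(chunk) > 3

def solve(problem_chunks):
    partials = [quantum_solver(c) if looks_hard(c) else classical_heuristic(c)
                for c in problem_chunks]
    return partials   # stitched back together by classical post-processing

print(solve([[3, 1], [5, 2, 9, 4, 7]]))
```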

 

 

NISQ to attack combinatorial optimization problems

An issue of particular interest is the potential impact of QC on “second wave” AI/ML optimization. ML has shown significant value in a broad range of real-world problems, but the training time (driven by the size and variety of the data needed for learning) and the network design space (owing to a paucity of detailed analysis and theory for ML/deep learning (DL) systems) are both large. It has been suggested that QC could significantly decrease the training time of currently standard ML approaches by providing quantum speedup on optimization subroutines.

 

Tatjana Curcic, program manager within DARPA’s Defense Sciences Office, agrees that combinatorial optimization problems are widespread. “Optimization is everywhere. It’s in electronics. It’s in logistics. It’s in how manufacturing works, how you optimize the work process in a manufacturing plant. It’s everywhere,” she says. She also cautions, however, that as a basic research program, ONISQ is not attempting to solve any particular problem. Instead, the goal is to conduct foundational research that scientists can then build upon.

 

According to DARPA, “Solving combinatorial optimization problems – with their mind-boggling number of potential combinations – is of significant interest to the military. One potential application is enhancing the military’s complex worldwide logistics system, which includes scheduling, routing, and supply chain management in austere locations that lack the infrastructure on which commercial logistics companies depend. ONISQ solutions could also impact machine learning, coding theory, electronic fabrication, and protein folding.”

 

Planning and scheduling are also combinatorial optimization problems. “Let’s say, given a group of nurses in a hospital, how do I meet everyone’s constraints and build a valid schedule where I can cover all of my shifts and deal with everyone who has been on vacation or whatnot?” Herold offers. Herold is part of a GTRI team working with DARPA on the ONISQ program. He adds that combinatorial optimization problems quickly become too complex for humans. “If you look at really small examples, it feels like doing a puzzle. They’re fun for your brain when they’re small, but rapidly you get to these big problems that are intractable for people to solve on their own.”
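
The combinatorial explosion Herold describes is easy to see by counting, as in this minimal sketch (the nurse rota is invented for illustration):

```python
# Assigning n nurses to n shifts, one shift each, admits n! candidate rotas,
# so exhaustive search fails long before problem sizes of practical interest.
import math
from itertools import permutations

nurses = ["Ana", "Ben", "Cho", "Dee"]
rotas = list(permutations(nurses))            # every one-nurse-per-shift rota
print(len(rotas), "possible rotas for 4 nurses")   # prints 24

for n in (10, 20, 50):
    print(f"{n} nurses -> {math.factorial(n):.2e} candidate rotas")
```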

 

Showing that quantum systems can perform better than classical computers for combinatorial optimization problems is a serious challenge, Herold says. “That’s a really tall order. We’re starting in this place where we’ve shown control over two and three or four ions, and to meet their metrics and to have enough resources to solve interesting, real-world problems, we need to extend our hardware to have 10 or 20 ions in a year and offer 50 ions the year after that,” Herold adds. “That’s a real engineering challenge for us. It’s not hard to trap those ions, but to actually have control over them and to make use of all of them is really difficult.”

 

And the competition is stiff. Governments, including the United States, China, Russia, North Korea and most European nations, are racing to gain a quantum computing advantage. Industry also is interested. In the United States alone, Google, IBM, Intel, Microsoft and a host of smaller companies are investing in quantum computing research. “There are tens of hardware computing companies from major corporations to startups that are developing quantum computing hardware and are also racing to really show a useful quantum advantage,” Herold states. “It’s a real sprint.”

 

DARPA DSO’s RFI responses may address one or more challenge areas

Challenge 1: Fundamental limits of quantum computing. In order to establish such limits, respondents should address some of the following relevant questions:

o What are the near-term wins in mapping QC to hard science modeling problems? We impose no constraints on what is meant by quantum computing; e.g. this could be a collection of physical or logical qubits, a quantum annealing machine, a quantum computational liquid, or some other quantum emulation platform that can serve as a proxy for the system to be modeled.

o Address the questions of scale. How many degrees of freedom in the problem of interest must be mapped to the QC platform to realistically model the system? At what scale do known classical computation platforms and algorithms become inadequate, and what are the potential gains brought by QC?

o How should the problem be framed; i.e. what are the questions to be addressed in modeling the physical system with a QC proxy system, and how should the quantum states be initialized and read out? Are there any new algorithms to usefully map the real-world quantum system to the proxy system?

o What are the known fundamental limitations to QC and scaling, including limits due to decoherence, degeneracy, environmental interactions, input-output limitations, and limited connectivity in the qubit-to-qubit interaction Hamiltonian? How will coherence times scale with the size of the QC system? Discuss error correction techniques and their scaling. How will errors scale with the size of the system? How valid are assumptions of uncorrelated noise?

o What is the real speedup for known QC algorithms (e.g. HHL, Grover), taking into account maximum realizable size N of the system, quantum state preparation and readout, limited connectivity in the Hamiltonian, and interfacing to classical memory and the classical world?
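
On the last question, a rough query count shows what is at stake. The arithmetic below is illustrative only and ignores per-query overheads such as state preparation, readout, and error correction, which can erode the advantage at realizable sizes N:

```python
# Grover's unstructured search needs ~(pi/4)*sqrt(N) oracle queries versus
# ~N/2 expected classical probes: a quadratic, not exponential, speedup.
import math

for N in (10**6, 10**9, 10**12):
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N={N:.0e}: classical ~{classical:.1e} queries, "
          f"Grover ~{grover:.1e} queries, ratio ~{classical / grover:.0f}x")
```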

 

Challenge 2: Hybrid approaches to machine learning.

We are interested in approaches that dramatically improve the total time taken to construct a high-performing ML/DL solution by leveraging a hybrid quantum/classical computing approach. For example, a hybrid approach may incorporate a small-scale quantum computer to efficiently implement specific subroutines that require limited resources in an ML/DL task that is being handled by a classical computer. The challenge here is to identify the best approaches for achieving significant speedup compared to the capabilities of the best known algorithms that run solely on classical computers. Some of the relevant questions are:

o What approaches can be used to efficiently implement ML/DL tasks with a hybrid quantum/classical system on near-term and future QC devices? Are there specific tasks for which such approaches are more beneficial than others?

o How does the speedup depend on the size of the available quantum resources (e.g. number of qubits N)?

o What are the challenges in implementing this idea? For example, what issues have to be dealt with in order to interface quantum and classical resources? Can we efficiently transfer data between the classical and quantum processors in order to see any gains in performance?

o Is there a need to develop additional auxiliary technology to implement such approaches?
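
As a concrete picture of the hybrid pattern behind these questions, the sketch below uses PennyLane to make a small quantum circuit act as a trainable subroutine inside a classical optimization loop. The circuit, toy data, and hyperparameters are invented for illustration.

```python
# Minimal hybrid quantum/classical loop: a classical optimizer (gradient
# descent) trains the parameters of a 2-qubit quantum subroutine.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    qml.RX(x, wires=0)                 # encode a classical feature
    qml.RY(weights[0], wires=0)        # trainable rotations
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))   # quantum output fed back classically

def cost(weights):
    data = [(0.1, 1.0), (1.4, -1.0)]   # toy (feature, label) pairs
    return sum((circuit(weights, x) - y) ** 2 for x, y in data)

weights = np.array([0.01, 0.02], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(30):                    # classical outer loop
    weights = opt.step(cost, weights)
print("trained weights:", weights)
```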

 

Challenge 3: Interfacing quantum sensors with quantum computing resources. Some of the relevant questions are:

o What new capabilities can be gained through the combination of a quantum computer and distributed quantum sensors? How large does the quantum computer need to be and how well does it need to operate (e.g. how large of a two-qubit gate error can the system tolerate)? How many distributed sensors are needed to see a benefit and what level of performance do they need to have (e.g. operate at the standard quantum limit or near the Heisenberg limit, etc.)?

o What quantum computer platform (e.g. trapped ion qubits, superconducting qubits, etc.) and sensors (atomic clocks, magnetometer, etc.) could potentially be leveraged in this approach?

o What are the potential roadblocks to making a demonstration of this approach possible?

o Are there any auxiliary components that need to be developed prior to making a demonstration of this approach?

o Are there non-performance capabilities, such as security or trust, to be gained from entangled sensors?

o Are there important implications of the location of the sensors (e.g. relativistic effects) or the topology of the devices to realize the potential new capabilities?

 

Challenge 4: QC inspired algorithms and processes that are applicable to classical computers.

o What systematic processes can be learned from the QC-inspired algorithms to date? Are there recurring themes and structures that have arisen in these new solutions?

o Are there approaches for identifying classical algorithm improvements for problems where a quantum supremacy result has been shown? In other words, can we predict these kinds of inspirations?

o As we learn about interfacing data and computation from challenges 1, 2, and 3, do we learn better classical architectures for mixing data input/output, memory, and computing together?

 

DARPA Awards

The four-year program officially kicked off in March 2020 and is divided into two phases. It includes two kinds of research: hardware and theoretical. In early 2020, DARPA awarded three contracts to teams led by the University of Tennessee, Clemson University, and Lehigh University to explore the theoretical possibilities of hybrid computers working combinatorial optimization problems. The agency also awarded contracts to teams led by GTRI, Universities Space Research Association (USRA), Presidents & Fellows of Harvard College, and ColdQuanta Incorporated to develop quantum-classical computing hardware. Each team is pursuing different potential solutions.

 

In Technical Area 1, the following performers were selected to demonstrate a hybrid quantum/classical optimization algorithm in a quantum device to solve a specific combinatorial optimization problem:

Georgia Tech Applied Research Corporation
Universities Space Research Association
Presidents & Fellows of Harvard College
ColdQuanta, Inc.

The GTRI team, which includes the National Institute of Standards and Technology’s Ion Storage Group, is the only team specializing in trapped-ion research. “Our project is called Optimization with Trapped Ion Qubits, which has a snappy acronym, OPTIQ,” Herold states.

A couple of years ago, GTRI demonstrated universal control of as many as four qubits and followed that with a demonstration of a small quantum algorithm that Herold describes as a “toy algorithm.” The DARPA program is a natural extension of that previous research. “The goal there is to build out the hardware to the point we have enough ions and control over them that we can actually solve problems which are interesting in the real world and aren’t just toys,” Herold says.

 

$2.1M DARPA grant puts Lehigh Univ. optimization experts at vanguard of quantum computing

Lehigh University will soon be on the front lines of the quantum computing revolution. With support from a recently awarded $2,128,658 research grant from the Defense Advanced Research Projects Agency (DARPA), an international group led by industrial and systems engineering (ISE) faculty members Tamás Terlaky, Luis Zuluaga, and Boris Defourny will work on optimization algorithms in quantum computing.

 

“We want to explore the power of existing quantum computers, and those that are predicted to exist in the future,” says Terlaky, who is a member of the Quantum Computing and Optimization Lab (QCOL) in the P.C. Rossin College of Engineering and Applied Science. The lab was established in 2019 to accelerate the development of quantum computing optimization methodology, and associated faculty launched the university’s first quantum computing course this spring. “We’ll be looking at combinatorial optimization problems for quantum computing with the goal that, in four years, we’ll be able to demonstrate that quantum computers are surpassing the capabilities of classical computers, at least on some problems.”

 

Terlaky says their work is related to the theory of quantum supremacy, which, very broadly, states that quantum computers will be exponentially better than current silicon computers at quickly solving problems that are unsolvable today. Such problems arise in fields as diverse as finance, security, genetics, transportation, manufacturing, and machine learning, and they model practical, binary questions such as whether to purchase or not purchase, build or not build, etc. There is a long way to go to achieve that end. Current quantum computers are about where silicon-based computer chips were in the 1950s, says Terlaky, who is also affiliated with Lehigh’s Institute for Data, Intelligent Systems, and Computation (I-DISC).

 

“In the 50s, we had gym-size computers with very little memory, and very little processing power,” he says. “A lot of programming was written in assembly language, getting the machine the codes, and specifying every gate and route for the information. At this point with quantum computers, the programming language is very similar. It’s not a high-level language where you can write a complicated code easily. So all this software has to develop along with the upcoming hardware.” Until recently, he says, most of the work in this area was being done by theoretical physicists, electrical engineers, computer engineers, and theoretical computer scientists. But the theory of quantum supremacy is essentially one big optimization problem.

 

“And we are the optimizers,” says Terlaky. “Very few people in the optimization community have looked at these problems so far. We are definitely the first sizable group to do so.” Additional researchers involved in the DARPA project include Giacomo Nannicini (IBM T.J. Watson Research Center), Stefan Wild (NAISE, Evanston, IL, and Argonne National Lab), Alain Sarlette (INRIA, Paris, France), Xiu Yang (ISE, Lehigh University), and Monique Laurent (Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands). Terlaky says the grant reflects the team’s standing as one of the best in the world at what they do. And he says the collaborative, global reach of the team reflects his own professional ethos.

 

Xanadu

Xanadu, a leading quantum computing company, has been awarded a grant from DARPA to develop a unique general-purpose compiler capable of breaking down circuits into a hybrid model, leveraging both classical and quantum computing. This compiler, part of Xanadu’s PennyLane platform, aims to enable complex hybrid models to run seamlessly on quantum hardware or simulators, allowing quantum algorithms requiring 100+ qubits to be executed on hardware with only 10-30 qubits.

Nathan Killoran, head of Xanadu’s Quantum Software & Algorithms team, highlighted the significance of PennyLane in facilitating the execution of hybrid quantum-classical models. By leveraging PennyLane’s capabilities, Xanadu plans to conduct quantum algorithms beyond the native capacity of available hardware.

Xanadu’s open-source software platform, PennyLane, serves as a bridge between quantum computing hardware and software from various vendors, including Xanadu, IBM, Google, IonQ, Rigetti, and Microsoft. This platform enables users to seamlessly connect with different quantum computing ecosystems and leverage their capabilities.

Over a twenty-four-month period, Xanadu will utilize its team of dedicated quantum programmers and scientists to conduct the DARPA-funded research project. Christian Weedbrook, Xanadu’s founder and CEO, emphasized the potential impact of the project on the quantum computing community. If successful, the project will allow users to perform larger-scale quantum computations without the need for access to more powerful quantum processors.

This grant marks Xanadu’s second collaboration with DARPA, following the successful completion of an initial grant focused on quantum machine learning using PennyLane. With its continued efforts in advancing quantum computing technology, Xanadu remains at the forefront of innovation in the field.

 

Universities Space Research Association to Lead a DARPA Project on Quantum Computing, reported in March 2020

Universities Space Research Association (USRA) announced that DARPA has awarded the organization and its partners, Rigetti Computing and the NASA Quantum Artificial Intelligence Laboratory (QuAIL), a contract to work as a team to advance the state of the art in quantum optimization. USRA, as the prime contractor of the award, will manage the collaboration.

 

The collaboration will focus on developing a superconducting quantum processor, hardware-aware software and custom algorithms that take direct advantage of the hardware advances to solve scheduling and asset allocation problems. In addition, the team will design methods for benchmarking the hardware against classical computers to determine quantum advantage.

 

USRA Senior Vice President Bernie Seery noted, “This is a very exciting public-private partnership for the development of forefront quantum computing technology and the algorithms that will be used to address pressing, strategically significant challenges. We are delighted to receive this award and look forward to working with our partner institutions to deliver value to DARPA.”

 

In particular, the work will target scheduling problems whose complexity goes beyond what has been done so far with the quantum approximate optimization algorithm (QAOA). USRA’s Research Institute for Advanced Computer Science (RIACS) has been working on quantum algorithms for planning and scheduling for NASA QuAIL since 2012. “The innovations on quantum gates performed by Rigetti coupled perfectly with the recent research ideas at QuAIL, enabling an unprecedented hardware-theory co-design opportunity,” explains Dr. Davide Venturelli, USRA Associate Director for Quantum Computing and project PI for USRA. Understanding how to use quantum computers for scheduling applications could have important implications for national security, such as real-time strategic asset deployment, as well as commercial applications including global supply chain management, network optimization, and vehicle routing.
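
For readers unfamiliar with QAOA, the sketch below shows its template on a textbook problem, Max-Cut on a four-node ring, using PennyLane. This is not the USRA/Rigetti scheduling formulation, only the same alternating cost/mixer structure with classically optimized angles.

```python
# QAOA on Max-Cut for a 4-node ring graph: alternate cost and mixer layers,
# then optimize the layer angles with a classical gradient-descent loop.
import networkx as nx
import pennylane as qml
from pennylane import numpy as np

graph = nx.cycle_graph(4)
cost_h, mixer_h = qml.qaoa.maxcut(graph)   # problem and mixer Hamiltonians

def qaoa_layer(gamma, alpha):
    qml.qaoa.cost_layer(gamma, cost_h)
    qml.qaoa.mixer_layer(alpha, mixer_h)

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def expected_cost(params):
    for w in range(4):
        qml.Hadamard(wires=w)                         # uniform superposition
    qml.layer(qaoa_layer, 2, params[0], params[1])    # p = 2 QAOA layers
    return qml.expval(cost_h)

params = np.array([[0.5, 0.5], [0.5, 0.5]], requires_grad=True)
opt = qml.GradientDescentOptimizer()
for _ in range(50):
    params = opt.step(expected_cost, params)
print("optimized cost:", expected_cost(params))
```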

 

Rigetti Computing Wins $8.6 million DARPA Grant to Demonstrate Practical Quantum Computing

Rigetti Computing has secured an $8.6M contract to help the Defense Advanced Research Projects Agency support a quantum technology research and development effort. The company said Thursday it will work under a collaboration between DARPA, NASA Quantum Artificial Intelligence Laboratory and the Universities Space Research Association to create a quantum-powered full-stack computing system.

 

The collaboration will focus on developing a superconducting quantum processor, hardware-aware software, and custom algorithms based on real-world scenarios. The work will leverage Rigetti’s Fab-1—the only dedicated quantum integrated circuit foundry in the U.S.—to manufacture chips that scale beyond 100 qubits. In addition, the NASA-USRA team will design methods for benchmarking the hardware against classical computers to determine quantum advantage. The effort aims to help the national security community address scheduling complexities in supply chain management, network activities and other strategic operations.

Argonne National Laboratory and the University of Chicago

Two recent awards granted by DARPA to Argonne National Laboratory and the University of Chicago signal a significant stride in the quest to unlock the capabilities of quantum computing. These awards, part of the ONISQ program, underscore the importance of hybrid quantum-classical approaches in addressing real-world challenges.

The focus of both projects lies in the synergy between classical and quantum computing methods to achieve practical solutions. This recognition reflects the understanding that current quantum devices, while promising, require collaboration with classical computation for optimal performance.

The first project, spearheaded by Argonne and ColdQuanta, aims to develop a scalable quantum platform based on cold atoms. By leveraging ColdQuanta’s expertise in manipulating cold atoms, the project seeks to demonstrate quantum advantage in areas such as resource allocation, logistics, and image recognition.

The second project, led by Ilya Safro of Clemson University, concentrates on creating a suite of hybrid algorithms tailored for noisy quantum processors. This endeavor delves into problems relevant to national security and beyond, showcasing the potential for practical applications and highlighting the pivotal role of algorithm development in harnessing the power of hybrid quantum-classical systems.

These projects are poised to bridge the gap between theoretical concepts and practical implementations, showcasing the tangible benefits of hybrid quantum computing. By fostering collaboration among diverse institutions and experts, they aim to drive innovation and overcome challenges associated with current quantum hardware, including noise and error correction.

In essence, these DARPA awards signify growing momentum in the exploration of hybrid quantum-classical approaches as a viable pathway to unlocking the full potential of quantum computing. By focusing on specific applications and addressing hardware limitations, the projects aim to deliver practical advances in the field, paving the way for a new era of computing capabilities.

 

ColdQuanta Cold Atom Quantum Computer Technology

ColdQuanta, a quantum technology company, was selected by DARPA in April 2021 to develop a scalable cold-atom-based quantum computing platform capable of demonstrating quantum advantage on real-world problems. Led by Chief Scientist Mark Saffman, the project aims to leverage ColdQuanta’s Quantum Core platform to advance quantum computing hardware and software. Additionally, ColdQuanta announced cloud access to a quantum matter system in October, enabling users to manipulate and experiment with ultracold matter.

The leadership team at ColdQuanta, including CEO Bo Ewald and Founder/CTO Dana Anderson, has been instrumental in building the emerging quantum industry. Ewald’s previous experience includes roles at D-Wave International and involvement in quantum standardization efforts, while Anderson serves on the Quantum Economic Development Consortium Steering Committee.

ColdQuanta’s approach to quantum computing revolves around a unique glass cell housing an array of cesium atoms, each serving as an individual qubit. By cooling the atoms to extremely low temperatures and utilizing lasers for manipulation, the platform enables computations with unparalleled precision and scalability. This approach offers several advantages over other quantum computing methods, including identical qubits without manufacturing defects, superior cooling for enhanced quantum effects, and scalability to thousands of qubits.

The computational platform developed by ColdQuanta is dynamically reconfigurable and does not require cryogenics, facilitating quicker system improvement and shorter development cycles. Moreover, the platform’s ability to entangle distant qubits allows for larger logical circuits and more advanced connectivity, enabling complex computations to address real-world problems effectively.

ColdQuanta’s participation in the DARPA ONISQ program aims to demonstrate a system with over 1000 qubits running Department of Defense applications, highlighting the company’s commitment to advancing quantum computing technology and addressing pressing challenges in the field.

 

Xanadu awarded DARPA grant to develop novel quantum compiler for NISQ-based machines in July 2021

Xanadu, a full-stack quantum computing company developing quantum hardware and software solutions, has been awarded a Defense Advanced Research Projects Agency (DARPA) grant. The grant will enable Xanadu to develop a unique general-purpose “circuit-cutting” compiler which can automatically break down a circuit into a multi-circuit hybrid model—leveraging both classical and quantum computing—which will be ideal for near-term quantum computers.

 

“With PennyLane, these complex hybrid models can be run for the user seamlessly on the quantum hardware or simulators of their choice,” said Nathan Killoran, who heads up Xanadu’s Quantum Software & Algorithms team. “Using these tools, we plan to run quantum algorithms which would natively require 100+ qubits using quantum hardware and simulators containing only 10-30 qubits.”

Xanadu created one of the world’s first open-source software platforms for quantum computers, known as PennyLane (https://pennylane.ai). PennyLane allows users to connect quantum computing hardware and software from key hardware vendors, including Xanadu, IBM, Google, IonQ, Rigetti, and Microsoft.
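
Recent PennyLane releases expose a circuit-cutting transform along these lines. The sketch below uses qml.cut_circuit and qml.WireCut as documented for those releases; the exact interface delivered under the DARPA grant may differ. A three-wire circuit is cut so its fragments fit on a two-wire simulator:

```python
# Circuit cutting in PennyLane: qml.WireCut marks where to sever the circuit,
# and qml.cut_circuit runs the smaller fragments on the 2-wire device, then
# recombines their results with classical post-processing.
import pennylane as qml

dev = qml.device("default.qubit", wires=2)    # device smaller than the circuit

@qml.cut_circuit                              # split, execute, recombine
@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    qml.RY(0.9, wires=1)
    qml.RX(0.3, wires=2)                      # a 3-wire circuit...
    qml.CZ(wires=[0, 1])
    qml.WireCut(wires=1)                      # ...cut here into 2-wire pieces
    qml.CZ(wires=[1, 2])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2))

print(circuit(0.5))
```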

 

Xanadu will leverage the expertise of its in-house team of dedicated quantum programmers and scientists, whose work in quantum computing is globally recognized, to carry out the DARPA-funded research project over a twenty-four-month period. “If successful, this project will have a wide impact on the entire community working with present-day quantum computers,” said Christian Weedbrook, the company’s founder and CEO. “It will allow everyone to run larger-scale quantum computations than they currently can—without needing access to more powerful quantum processors.” This is Xanadu’s second grant from DARPA, after successfully completing an initial grant on quantum machine learning using PennyLane.

 

USRA-Rigetti-NASA team advances to DARPA ONISQ Phase 2

The Defense Advanced Research Projects Agency recently funded the second phase of a quantum computing project that aims to expand the utility of emerging technology, according to one of the lead researchers on the project.

The second phase of the Georgia Tech Research Institute-led project brought its funding total to $9.2 million for the scientists to run additional experiments on a quantum computing system configured to potentially string together more computing units than ever.

In the next two and a half years, the team will continue to test and evaluate these solvers using operational metrics, leveraging internal resources as well as the large body of literature and products developed by the scientific and private-sector community on benchmarking and detecting quantum advantage. The collaboration has so far produced more than ten scientific papers, published, presented at international conferences, or under review. The ONISQ program is also an important part of USRA’s close collaboration with NASA under the NASA Academic Mission Services contract.

 

References and Resources also include:

https://www.hpcwire.com/off-the-wire/argonne-receives-two-awards-from-darpa-for-quantum-information-science/

https://blogs.scientificamerican.com/observations/the-problem-with-quantum-computers/

https://www.afcea.org/content/darpas-quantum-quest-may-leapfrog-modern-computers

https://www.newsbreak.com/news/2316105795374/xanadu-awarded-darpa-grant-to-develop-novel-quantum-compiler-for-nisq-based-machines

 
