
Quantum Computers for High-Performance Computing (HPC) data centers

High Performance Computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher horsepower than traditional computers and servers. HPC, or supercomputing, is like everyday computing, only more powerful. It is a way of processing huge volumes of data at very high speeds using multiple computers and storage devices as a cohesive fabric. HPC makes it possible to explore and find answers to some of the world’s biggest problems in science, engineering, and business.


Today, HPC is used to solve complex, performance-intensive problems—and organizations are increasingly moving HPC workloads to the cloud. HPC in the cloud is changing the economics of product development and research because it requires fewer prototypes, accelerates testing, and decreases time to market.


Some workloads, such as DNA sequencing, are simply too immense for any single computer to process. HPC or supercomputing environments address these large and complex challenges with individual nodes (computers) working together in a cluster (connected group) to perform massive amounts of computing in a short period of time. Creating and removing these clusters is often automated in the cloud to reduce costs. HPC can be run on many kinds of workloads, but the two most common are embarrassingly parallel workloads and tightly coupled workloads.

Embarrassingly parallel workloads

These are computational problems that can be divided into small, simple, and independent tasks which run at the same time, often with little or no communication between them. For example, a company might submit 100 million credit card records to individual processor cores in a cluster of nodes. Processing one credit card record is a small task, and when 100 million records are spread across the cluster, those small tasks can be performed at the same time (in parallel) at astonishing speeds. Common use cases include risk simulations, molecular modeling, contextual search, and logistics simulations.
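To make the pattern concrete, here is a minimal, hedged Python sketch of an embarrassingly parallel job: independent record-scoring tasks are fanned out across local processor cores with the standard multiprocessing module. The record format and the score_record scoring rule are invented for the example, not taken from the article.

```python
# Minimal sketch of an embarrassingly parallel workload: every record is scored
# independently, so the tasks need no communication and can run on separate cores.
# The data and scoring rule below are made up purely for illustration.
from multiprocessing import Pool

def score_record(record: dict) -> float:
    """Score one credit-card record in isolation (a stand-in for the real task)."""
    return record["amount"] * (2.0 if record["foreign"] else 1.0)

if __name__ == "__main__":
    records = [{"amount": float(i % 500), "foreign": i % 7 == 0} for i in range(100_000)]
    with Pool() as pool:                          # one worker per available core by default
        scores = pool.map(score_record, records, chunksize=1_000)
    print(f"scored {len(scores)} records; max score = {max(scores):.1f}")
```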

Tightly coupled workloads

These workloads typically take a large shared problem and break it into smaller tasks that communicate continuously. In other words, the different nodes in the cluster communicate with one another as they perform their processing. Common use cases include computational fluid dynamics, weather forecast modeling, material simulations, automobile collision simulations, geospatial simulations, and traffic management.
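For contrast, the hedged mpi4py sketch below shows a tightly coupled pattern: a toy 1-D heat-diffusion solver in which every rank must exchange boundary values with its neighbors on every iteration before it can proceed. It assumes an MPI installation and the mpi4py package; the domain size, physics, and file name are simplified assumptions for illustration only.

```python
# Tightly coupled sketch: a toy 1-D heat-diffusion solver with mpi4py. Each MPI rank
# owns a slice of the domain and must exchange boundary ("halo") values with its
# neighbours on every iteration, so the nodes communicate continuously.
# Run with, for example:  mpirun -n 4 python heat1d.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.zeros(12)                  # this rank's slice of the temperature field
if rank == 0:
    local[0] = 100.0                  # heat source at the left edge of the global domain

for _ in range(50):                   # explicit diffusion steps
    left = comm.sendrecv(local[0], dest=rank - 1, source=rank - 1) if rank > 0 else local[0]
    right = comm.sendrecv(local[-1], dest=rank + 1, source=rank + 1) if rank < size - 1 else local[-1]
    padded = np.concatenate(([left], local, [right]))
    local = local + 0.25 * (padded[:-2] - 2.0 * local + padded[2:])
    if rank == 0:
        local[0] = 100.0              # keep the boundary condition fixed

total = comm.reduce(float(local.sum()), op=MPI.SUM, root=0)
if rank == 0:
    print(f"total heat in the domain after 50 steps: {total:.2f}")
```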


High-performance computing (HPC) systems define the pinnacle of modern computing by drawing on massively parallel processing, in which many computational nodes are connected by high-bandwidth, highly tuned networks that optimize data movement and support shared information-processing tasks; this paradigm also relies heavily on specialized accelerators to boost application performance. Existing computational nodes already support highly concurrent, multithreaded execution, and technology trends indicate that future node designs will integrate heterogeneous processing paradigms that combine conventional central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and other specialized processors. The components of these future computational nodes must be tightly integrated to balance data movement, processing power, and workload in order to optimize overall system performance.


Quantum Computers for High-Performance Computing (HPC)

Quantum computers, by harnessing quantum superposition to represent multiple states simultaneously, promise exponential leaps in performance over today’s traditional computers. A qubit is the unit of quantum information, the equivalent of the binary bit in classical computing. What makes qubits so interesting is that the 0 and 1 can be superposed, which means a quantum computer can perform many calculations at the same time, in parallel. So unlike a binary computer, where a bit is either 0 or 1, a qubit can be 0, 1, or a superposition of both. In greatly oversimplified terms, that allows operations to be carried out using different values at the same time. Coupled with well-constructed algorithms, quantum computers will be at least as powerful as today’s supercomputers, and in the future they are expected to be orders of magnitude more powerful. In effect, quantum computers could bring the power of massively parallel computing, the equivalent of a supercomputer, to a single chip.
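The following hedged NumPy sketch illustrates the superposition idea numerically: applying a Hadamard gate to each qubit of a small register places it in an equal superposition over all 2^n basis states, whose measurement probabilities can then be read off the state vector. It is a toy statevector simulation written for this article's explanation, not vendor software.

```python
# Toy statevector illustration of superposition: after a Hadamard on each qubit,
# an n-qubit register holds equal amplitude on all 2**n basis states at once.
import numpy as np

n = 3                                                     # a 3-qubit register
ket0 = np.array([1.0, 0.0])                               # the |0> state of one qubit
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)    # Hadamard gate

state = ket0
H_all = H
for _ in range(n - 1):
    state = np.kron(state, ket0)                  # build |000>
    H_all = np.kron(H_all, H)                     # Hadamard on every qubit

state = H_all @ state                             # equal superposition of 8 basis states
probs = np.abs(state) ** 2                        # Born-rule measurement probabilities
for idx, p in enumerate(probs):
    print(f"|{idx:0{n}b}>  probability {p:.3f}")  # each outcome appears with probability 1/8
```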


By comparison, quantum computers (QCs) represent a young yet remarkable advance in the science and technology of computation, and they are often cited as rivals or successors to state-of-the-art conventional high-performance computing (HPC) systems. These quantum physical systems exhibit the unique features of quantum coherence and quantum entanglement, which allow quantum computing to exponentially reduce the computational time and memory needed to solve many problems from chemistry, materials science, finance, and cryptanalysis, among other application domains. The advantage afforded to quantum computing is therefore aptly named the “quantum computational advantage,” and there is now a fervent effort to realize quantum computing systems that demonstrate this advantage. Notably, recent efforts have focused on besting the world’s leading HPC systems to great effect.


There is a wide range of promising application use cases for QC, and many of these overlap with the existing uses of HPC. They include the simulation of high-dimensional physical models; the design, verification, and validation of complex logical systems; and inference, pattern matching, and search over large datasets. Underlying these use cases are algorithms that take advantage of logic from both conventional computing and quantum computing.


Challenges

The migration of QCs toward integrated HPC will require many advances in micro- and macro-architecture that address differences in infrastructure and performance. For example, quantum processing units (QPUs) represent an early vision of components that may integrate as accelerators for computational nodes. However, existing QC prototypes are based on loosely integrated client-server interactions that lack the sophistication or technological maturity to be used as accelerators. In addition, communication between multiple QPUs lacks the networking protocols needed to support concurrent processing models; this applies to both conventional and quantum networking. Although existing networking architecture may be leveraged for this purpose, there are outstanding questions about the balance between performance and workload that will drive the development of prototype systems.
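As a purely hypothetical illustration of the loosely integrated client-server pattern described above, the Python sketch below submits a circuit description to a stand-in "remote" service and polls for the result. The functions, job format, and counts are all invented; the point is only that queueing and round-trip latency dominate, which is far from the tight in-node coupling expected of an accelerator such as a GPU.

```python
# Hypothetical, simplified picture of today's loosely coupled cloud access to a QPU:
# the client submits a circuit over the network and polls until the queued job finishes.
# All functions and data here are invented stand-ins; real vendor SDKs differ.
import random
import time
from typing import Optional

def submit_job(circuit: str) -> str:
    """Stand-in for a remote submission call; returns a job id."""
    return f"job-{random.randint(1000, 9999)}"

def fetch_result(job_id: str) -> Optional[dict]:
    """Stand-in for a remote status call; the job 'completes' only after a few polls."""
    return {"counts": {"000": 52, "111": 48}} if random.random() < 0.3 else None

job_id = submit_job("H 0; CX 0 1; CX 1 2; MEASURE ALL")   # a made-up circuit description
result = None
while result is None:
    time.sleep(0.5)        # queueing and network round trips dominate the wall-clock time
    result = fetch_result(job_id)
print(result["counts"])
```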


Alongside these system-level concerns are the ongoing needs to improve device characteristics, including register size, gate fidelities, coherence times, and others, as well as to scale these resources to the limits that support a quantum computational advantage. The selection of target applications and application data will prove powerful in identifying the system resources needed to realize such goals and in setting better expectations for near-term demonstrations of quantum computing.


The software stack for enabling QC integration with HPC will require extensibility and modularity. Existing variability in quantum technologies (superconducting, trapped-ion, etc.) from competing hardware vendors (Google, IBM, Rigetti, Honeywell, etc.) requires customization at all levels of abstraction.


High-level programmability of the CPU-QPU hybrid system needs performant language approaches that permit the creation of new quantum algorithmic primitives. Software protocols for quantum acceleration in HPC environments will also need to be compatible with existing HPC applications, tools, compilers, and parallel runtimes. This implies the need for hybrid software approaches that provide system-level languages, like C++, with appropriate bindings to higher-level application languages like Python or Julia. An infrastructure based on C++ offers the performance necessary to integrate future QPUs and enables in-sequence instruction execution for utilities and methods that require fast feedback. A C++-like language is multiparadigm (object-oriented, functional, etc.) and enables integration with existing HPC workflows, write Travis S. Humble and Alexander McCaskey of Oak Ridge National Laboratory, Oak Ridge, TN, USA, and their co-authors.
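The hedged sketch below shows what this hybrid-programming pattern might look like from the Python side: the application calls into a quantum kernel whose performance-critical execution would live in a C++ runtime exposed through bindings. The module name qpu_runtime and its execute function are assumptions made for the example, not an existing package, and the sketch falls back to a classical stand-in so it still runs.

```python
# Hypothetical sketch of a hybrid CPU-QPU program driven from Python.
# `qpu_runtime` stands in for a C++-backed extension module (e.g. built with pybind11);
# it is not a real package, so the code falls back to a classical stand-in.
try:
    import qpu_runtime                       # assumed C++ binding, not a real package
except ImportError:
    qpu_runtime = None

def bell_counts(shots: int = 1024) -> dict:
    """Return measurement counts for a two-qubit Bell-state kernel."""
    if qpu_runtime is not None:
        # The C++ layer would compile the kernel and manage fast in-sequence feedback.
        return qpu_runtime.execute(kernel="bell", shots=shots)
    # Self-contained fallback: ideal Bell-state statistics, so the example runs anywhere.
    return {"00": shots // 2, "11": shots - shots // 2}

if __name__ == "__main__":
    print(bell_counts(2048))
```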


Atos and IQM study finds 76% of global HPC data centers to use quantum computing by 2023

Atos and IQM announced in November 2021 the findings from the first global IDC study on the current status and future of quantum computing in high performance computing (HPC). Commissioned by IQM and Atos, the study reveals that 76% of HPC data centers worldwide plan to use quantum computing by 2023, and that 71% plan to move to on-premises quantum computing by 2026.


One of the key findings from the study is that it is becoming increasingly difficult for users to get the optimal performance out of high-performance computing while ensuring both security and resilience. 110 key decision-makers from high-performance computing (HPC) centers worldwide were surveyed. For the first time, the results provide concrete insights into a technology area that will change Europe and the world significantly.


Quantum computing is the number one technology in Europe and among the top three technologies of the top 500 HPC data centers worldwide. 76 percent of HPC centers are already using quantum computing or plan to use it in the next two years. The expected benefits for HPC data centers are clear: the survey shows they include tackling new problems such as supply-chain logistics or challenges related to climate change (45 percent) and solving existing problems faster (38 percent), while at the same time reducing computing costs (42 percent).


Increasing complexity as an opportunity

Cloud is a key part of this HPC architecture, mixing standard elements with custom-developed infrastructure components. Based on the survey responses, hybrid and cloud deployments are especially important in the EMEA region: 50 percent state that a hybrid HPC architecture is a top priority (North America 46 percent; APAC 38 percent). Yet there is a lack of knowledge about how quantum computing will work alongside a classical HPC infrastructure. Therefore, outsourcing operations and maintenance to partners is expected to continue as the use of quantum computing increases.


A market in transition

Developing and testing real-world use cases is critical to the future success of quantum computing. The four most important use cases for quantum computing are currently linked to analyzing huge amounts of data and solving industry-specific problems. The top use cases identified by the HPC centers interviewed are:

  • Searching databases (59 percent)
  • Investment risk analysis (45 percent)
  • Molecular modelling (41 percent)
  • Asset management (32 percent)

Dr. Jan Goetz, CEO and Co-Founder of IQM Quantum Computers, summarizes: “We work with some of the leading HPC centers in the world, and we planned this study to provide the quantum industry with a thorough understanding of the state of quantum at HPC centers globally. The strong investments in on-premises quantum computers and the focus on the skills gap and sustainability are very important findings from this study, and it will help IQM,


References and resources also include:

https://ieeexplore.ieee.org/document/9537178

https://atos.net/en/2021/press-release_2021_11_19/atos-and-iqm-study-finds-76-of-global-hpc-data-centers-to-use-quantum-computing-by-2023
