
Dedicated Hardware for Quantum Machine Learning to Usher in a New Era in AI through Exponential Speedup of Applications

Quantum computing and quantum information processing are expected to have an immense impact by performing tasks too hard for even the most powerful conventional supercomputers, with a host of specific applications ranging from code-breaking and cyber security to medical diagnostics, big data analysis and logistics.

 

One of the areas where quantum computing is predicted to play an important role is Machine Learning (ML), a subfield of Artificial Intelligence that attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. ML algorithms allow computers to extract information and infer patterns from recorded data, so that computers can learn from previous examples to make good predictions about new ones. Machine Learning has become a pervasive technology, underlying many modern applications including internet search, fraud detection, gaming, face detection, image tagging, brain mapping, check processing and computer server health monitoring. Researchers have now turned to the power of quantum computers to tackle complex machine learning applications.
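The "learning from previous examples" idea is easy to see in a few lines of classical code. The sketch below is a minimal, assumed scikit-learn example (not tied to any quantum hardware or to a specific system mentioned in this article): the model is never given explicit rules, it only fits a pattern from labelled examples and then predicts labels for points it has not seen.

```python
# Minimal classical machine-learning sketch (assumed scikit-learn setup):
# no explicit rules are programmed; the model infers a pattern from
# labelled examples and predicts labels for unseen data.
from sklearn.linear_model import LogisticRegression

# Toy training data: feature pairs and their known labels.
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # learn a decision boundary from the examples

# Predict labels for new, unseen points (expected: class 0 and class 1).
print(model.predict([[0.15, 0.15], [0.85, 0.95]]))
```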

 

Quantum machine learning is a new subfield within quantum information science that combines the speed of quantum computing with the ability to learn and adapt offered by machine learning. Quantum machine learning algorithms aim to use the advantages of quantum computation to improve classical methods of machine learning, for example by developing efficient implementations of expensive classical algorithms on a quantum computer. The promise is that quantum computers will allow for quick analysis and integration of enormous data sets, which would improve and transform machine learning and artificial intelligence capabilities.

 

What this suggests is that as quantum computers get better at harnessing and entangling qubits, they will also get better at tackling machine-learning problems. However, fully harnessing quantum computing for machine learning and artificial intelligence will require the development of dedicated hardware and quantum machine learning algorithms. Scientists have started exploring general-purpose quantum computers for machine learning as well as developing dedicated architectures for quantum machine learning.

 

 

Rigetti Demonstrates Unsupervised Machine Learning Using Its 19-Qubit Processor

Researchers at Rigetti Computing, a company based in Berkeley, California, used one of its prototype quantum chips—a superconducting device housed within an elaborate super-chilled setup—to run what’s known as a clustering algorithm. Clustering is a machine-learning technique used to organize data into similar groups. Rigetti is also making the new quantum computer—which can handle 19 quantum bits, or qubits—available through its cloud computing platform, called Forest.
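To make the task concrete, the snippet below is a purely classical illustration of clustering using the familiar k-means algorithm from scikit-learn. It is not Rigetti's quantum algorithm, only a sketch of the problem their hybrid processor was trained to solve: grouping unlabelled points into similar clusters.

```python
# Classical illustration of the clustering task (not Rigetti's quantum method):
# k-means assigns unlabelled points to groups of similar points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "blobs" of unlabelled 2-D data.
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                  rng.normal(1.0, 0.1, (20, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels)  # each point is assigned to one of the two discovered groups
```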

 

The company’s scientists published a paper about the demonstration called “Unsupervised Machine Learning on a Hybrid Quantum Computer.” The abstract lays out the problem space and their approach: “Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to purely classical performance.”

 

A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general-purpose quantum processors. The scientists demonstrated such hybridization by training a 19-qubit gate-model processor to solve a clustering problem, a foundational challenge in unsupervised learning. “We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.”
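The structure of that hybrid loop can be sketched in a few lines. In the sketch below, a toy cost function stands in for running the parameterized QAOA circuit on the 19-qubit chip, and SciPy's gradient-free COBYLA optimizer stands in for the Bayesian optimizer used in the paper; only the overall "classical optimizer drives a quantum sampler" structure is meant to be representative.

```python
# Hedged sketch of a hybrid quantum/classical training loop.
# The quantum processor and Bayesian optimizer from the Rigetti paper are
# replaced here by a toy cost function and SciPy's gradient-free COBYLA.
import numpy as np
from scipy.optimize import minimize

def sample_quantum_cost(angles):
    """Placeholder for executing the parameterized QAOA circuit and
    estimating a clustering cost from the measured bitstrings."""
    gamma, beta = angles
    noise = np.random.normal(0.0, 0.01)  # stands in for shot/readout noise
    # Smooth toy landscape; in the real loop this value comes from hardware.
    return np.cos(gamma) ** 2 + np.cos(beta) ** 2 + noise

# The classical optimizer proposes circuit angles, the "quantum" routine
# returns a cost, and the loop repeats until the angles converge.
result = minimize(sample_quantum_cost, x0=[0.1, 0.1], method="COBYLA")
print(result.x, result.fun)
```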

The New QuantaFlow AI Architecture Boosts Deep Learning Inference Speed by 10x-15x

At the Consumer Electronics Show (CES 2020), PQ Labs Inc. introduced the QuantaFlow AI architecture, which it describes as the first of its kind in the industry and which could change the future of AI and deep learning inference solutions. The new QuantaFlow AI architecture includes a classical RISC-V processor, a QuantaFlow Generator and a QF Evolution Space. The QuantaFlow AI SoC architecture is designed to simulate massive parallel transformation/evolution that is very similar to quantum computation: QuantaFlow simulates a virtual transformation/evolution space for qf-bit registers.

 

A classical single-core RISC-V processor provides logical control, retrieval of observation results, and so on. The QuantaFlow Generator converts input data from a low-dimensional space to a high-dimensional space and then starts continuous transformation/evolution. The process is of minimal granularity, highly parallel in nature and asynchronous. At the end of the process, information is extracted from the evolution space by the Bit Observer unit. In addition, hot-patching can be used to change the evolution path of qf-bits dynamically.
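PQ Labs has not published the internals of this pipeline, so the following numpy sketch is only an assumed illustration of the data flow as described: a generator lifts low-dimensional input into a high-dimensional space, the evolution space applies many parallel transformations, and an observer projects a small result back out. None of the names or operations below come from PQ Labs documentation.

```python
# Purely illustrative sketch of the described data flow (not a PQ Labs spec).
import numpy as np

rng = np.random.default_rng(1)

def quantaflow_like_pass(x_low, steps=8, high_dim=256):
    lift = rng.normal(size=(high_dim, x_low.size))   # low -> high dimensional lift
    state = np.tanh(lift @ x_low)                    # illustrative "qf-bit" state
    evolve = rng.normal(size=(high_dim, high_dim)) / np.sqrt(high_dim)
    for _ in range(steps):                           # repeated parallel transformation/evolution
        state = np.tanh(evolve @ state)
    observer = rng.normal(size=(10, high_dim))       # "Bit Observer": extract a result
    return observer @ state

print(quantaflow_like_pass(np.ones(32)).shape)  # -> (10,)
```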

 

When a more significant deformation of the evolution space is needed, the RISC-V processor issues a warm “reboot” to the evolution space. All of these operations can be executed in the blink of an eye. With these dynamic operations, QuantaFlow can run all kinds of neural network models (e.g. ResNet-50 (2015), MobileNet (2017), EfficientNet (2019)) without speed degradation or hitting the “memory wall.”

 

By comparison, GPUs and ASIC AI accelerators suffer degraded performance on newer models (MobileNet, EfficientNet), because these models are memory-bound. With all of the above efforts, QuantaFlow can achieve a 10x speedup on ResNet-50 (batch=1, accuracy=93%, INT8) compared with an Nvidia V100 in the same network configuration, and PQ Labs says significantly higher speedups for newer network models will be announced. The QuantaFlow architecture is one step further in the exploration of superior performance in AI deep learning inference, and many of the devils are in the details and in the areas of innovation. The QuantaFlow architecture design flow is accelerated by high-level languages (instead of Verilog), and the implementation is optimized by in-house algorithms to extract maximum horsepower from the silicon.

 

 

