
Quantum Machine Learning to usher in a new era of AI with exponential speedups in internet search, fraud detection, and brain mapping

Society is entering the age of extreme data: we generate 2.5 quintillion bytes daily, and by 2025, global data volumes are set to hit 163 zettabytes. With Moore’s Law coming to an end, industry leaders are racing to develop quantum computers to process this extreme data.

 

Quantum computing is a fundamentally different form of computation. Quantum computers run on a subatomic level using quantum bits (or qubits) that can represent a 0 and a 1 at the same time. Quantum computing exploits three fundamental properties of quantum physics: superposition, interference, and entanglement. These properties mean that a processor built from qubits could theoretically solve a small set of specialized problems exponentially faster than a traditional computer.

 

Superposition refers to the quantum phenomenon where a quantum system can exist in multiple states concurrently. Quantum interference is what allows us to bias quantum systems toward a desired state. Entanglement is an extremely strong correlation between quantum particles; entangled particles remain perfectly correlated even when separated by great distances. The idea is to create a pattern of interference where the paths leading to wrong answers interfere destructively and cancel out, while the paths leading to the right answer reinforce each other.
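These ideas can be made concrete with a few lines of linear algebra. Below is a minimal numpy sketch, not tied to any quantum SDK, of a single qubit put into superposition by a Hadamard gate and then brought back to a definite state by interference:

```python
import numpy as np

# A qubit's state is a 2-component complex vector of amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ ket0              # amplitudes [0.707, 0.707]

# A second Hadamard makes the two computational paths interfere: the |1>
# contributions cancel (destructive) and the |0> contributions reinforce.
interfered = H @ superposed        # amplitudes [1, 0] -- back to |0>

print(np.abs(superposed) ** 2)     # measurement probabilities [0.5, 0.5]
print(np.abs(interfered) ** 2)     # measurement probabilities [1.0, 0.0]
```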

 

Quantum computing and quantum information processing are expected to have an immense impact by performing tasks too hard for even the most powerful conventional supercomputers, with a host of specific applications ranging from code-breaking and cyber security to medical diagnostics, big data analysis and logistics. Today, Google has a quantum computer it claims is 100 million times faster than any of today’s systems. That will be critical if we are going to be able to process the monumental amounts of data we generate and solve very complex problems. This marks the beginning of the Noisy Intermediate-Scale Quantum (NISQ) computing era. In the coming years, quantum devices with tens to hundreds of noisy qubits are expected to become a reality.

 

One of the areas where quantum computing is predicted to play an important role is Machine Learning (ML), a subfield of Artificial Intelligence that attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. ML algorithms allow computers to extract information and infer patterns from recorded data, so computers can learn from previous examples to make good predictions about new ones. ML has become a pervasive technology, underlying many modern applications including internet search, fraud detection, gaming, face detection, image tagging, brain mapping, check processing and computer server health-monitoring.

 

Researchers have now turned to the power of quantum computers to tackle complex machine learning applications. We need much more advanced AI if we want it to help us create things like truly autonomous self-driving cars and systems for accurately managing the traffic flow of an entire city in real time. Many attempts to build this kind of software involve writing code that mimics the way neurons in the human brain work and combining many of these artificial neurons into a network. Each neuron mimics a decision-making process by taking a number of input signals and processing them to give an output corresponding to either “yes” or “no”.

 

Quantum Machine Learning

Quantum machine learning is a new subfield within the field of quantum information that combines the speed of quantum computing with the ability to learn and adapt, as offered by machine learning.

 

What makes quantum computing so powerful is the algorithms it makes possible. These algorithms exhibit different complexity characteristics than their classical equivalents: quantum computing promises to solve certain types of mathematical calculations with reduced complexity.

 

There are myriad machine learning algorithms out there, but every one of them has three components:

  • Representation: the inner architecture the algorithm uses to represent knowledge. It may consist of a set of rules, instances, decision trees, support vector machines, neural networks, and others.
  • Evaluation: a function used to score candidate algorithm parameterizations. Examples include accuracy, precision and recall, squared error, posterior probability, cost, margin, entropy, and others.
  • Optimization: the way of generating candidate algorithm parameterizations, also known as the search process; for instance, combinatorial optimization, convex optimization, and constrained optimization.
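To make these three components concrete, here is a deliberately tiny Python sketch (a toy illustration; the function names are ours, not from any library): the representation is a linear model with a sigmoid, the evaluation is squared error, and the optimization is a crude random search over candidate parameterizations:

```python
import numpy as np

# Representation: a linear model w.x + b squashed through a sigmoid.
def predict(w, b, X):
    return 1 / (1 + np.exp(-(X @ w + b)))

# Evaluation: mean squared error between predictions and labels.
def evaluate(w, b, X, y):
    return np.mean((predict(w, b, X) - y) ** 2)

# Optimization: random search over candidate parameterizations.
def optimize(X, y, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)
    for _ in range(trials):
        w, b = rng.normal(size=X.shape[1]), rng.normal()
        err = evaluate(w, b, X, y)
        if err < best[2]:
            best = (w, b, err)
    return best

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])          # a tiny AND-like dataset
w, b, err = optimize(X, y)
print("best error:", err)
```

Any real learner swaps in richer choices for each slot (a neural network for the representation, cross-entropy for the evaluation, gradient descent for the optimization), but the three-part structure stays the same.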

 


 

Machine learning consists of two things: data and algorithms. Quantum machine learning is a term used to cover four types of scenarios:

  • Quantum-inspired classical algorithms on classical data: such as tensor networks and de-quantized recommendation-system algorithms.
  • Classical algorithms applied to quantum data: such as neural-network representations of quantum states and optimized pulse sequences.
  • Quantum algorithms applied to classical data: such as quantum optimization algorithms and quantum classification of classical data. The main characteristic of quantum computing is the ability to compute over multiple states concurrently. A quantum optimization algorithm can consider all possible candidates and yield those that promise good results; quantum computing therefore promises to be exponentially faster than classical computers at such optimization tasks.
  • Quantum algorithms applied to quantum data: such as quantum signal processing and quantum hardware modeling.

 

Quantum data is any data source that occurs in a natural or artificial quantum system. This can be data generated by a quantum computer, like the samples gathered from the Sycamore processor for Google’s demonstration of quantum supremacy. Quantum data exhibits superposition and entanglement, leading to joint probability distributions that could require an exponential amount of classical computational resources to represent or store. The quantum supremacy experiment showed it is possible to sample from an extremely complex joint probability distribution over a 2^53-dimensional Hilbert space.
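A quick back-of-the-envelope calculation shows why such a distribution strains classical resources (assuming the standard representation of one double-precision complex amplitude per basis state):

```python
# Storing the full state vector of 53 qubits on a classical machine:
amplitudes = 2 ** 53                  # one complex amplitude per basis state
bytes_needed = amplitudes * 16        # complex128 = 16 bytes per amplitude

print(f"{amplitudes:.2e} amplitudes")             # ~9.01e+15
print(f"~{bytes_needed / 1e15:.0f} petabytes")    # ~144 PB of memory
```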

 

The quantum data generated by NISQ processors is noisy and typically entangled just before measurement occurs. Heuristic machine learning techniques can create models that maximize the extraction of useful classical information from noisy entangled data. The TensorFlow Quantum (TFQ) library provides primitives for developing models that disentangle and generalize correlations in quantum data, opening up opportunities to improve existing quantum algorithms or discover new ones.

 

In a research paper published in the journal Nature in March 2019, researchers from IBM and MIT showed how an IBM quantum computer can accelerate a specific type of machine-learning task called feature matching. The team says that future quantum computers should allow machine learning to reach new levels of complexity.

 

“We’re at a stage where we don’t have applications next month or next year, but we are in a very good position to explore the possibilities,” says Xiaodi Wu, an assistant professor at the University of Maryland’s Joint Center for Quantum Information and Computer Science. Wu says he expects practical applications to be discovered within a year or two.

 

Quantum parallelism can help train models faster

One of the main power sources of quantum computers is their ability to exploit quantum superposition, which enables us to work on many quantum states at the same time. The argument is that if we can train a model in a superposition of all possible training sets, then the training process may become faster and more efficient.

Efficient here can mean one of two things:

  • Exponentially less data needed to train the model -> researchers have found this claim to be inaccurate, though linear speedups may be possible in some cases.
  • Training models faster -> this claim follows from the speedup obtained by quantizing a classical algorithm along the lines of Grover’s algorithm. The result is a speedup that is at best quadratic, not exponential (see the sketch below).
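To see where the quadratic figure comes from, here is a minimal classical simulation of Grover’s search in numpy (illustrative only): finding one marked item among N requires about √N Grover iterations instead of roughly N/2 classical guesses:

```python
import numpy as np

N = 64                    # size of the search space
marked = 42               # index of the "correct" item

# Start in an equal superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

# One Grover iteration = oracle (flip the marked amplitude) followed by
# the diffusion step (reflect every amplitude about the mean).
iterations = int(np.pi / 4 * np.sqrt(N))     # ~ sqrt(N) iterations
for _ in range(iterations):
    state[marked] *= -1
    state = 2 * state.mean() - state

print(iterations, "iterations")              # 6 here, vs ~32 classical tries
print("P(marked) =", state[marked] ** 2)     # ~0.997
```

The √N-versus-N gap is exactly the “quadratic at best” ceiling referred to above.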

 

Researchers Prove that Robots Learn Faster with Quantum Technology, reported in March 2021

In an international collaboration led by Philip Walther, a team of experimental physicists from the University of Vienna, together with theoreticians from the University of Innsbruck, the Austrian Academy of Sciences, Leiden University, and the German Aerospace Center, succeeded in experimentally demonstrating for the first time a speed-up in a robot’s actual learning time. The team made use of single photons, the fundamental particles of light, coupled into an integrated photonic quantum processor designed at the Massachusetts Institute of Technology. This processor acted as the robot and implemented the learning tasks. Here, the robot would learn to route the single photons in a predefined direction. “The experiment could show that the learning time is significantly reduced compared to the case where no quantum physics is used”, says Valeria Saggio, first author of the publication.

 

In a nutshell, the experiment can be understood by imagining a robot standing at a crossroad, given the task of learning to always take the left turn. The robot learns by obtaining a reward when it makes the correct move. If the robot is placed in our usual classical world, it will try either a left or a right turn and will be rewarded only if the left turn is chosen. In contrast, when the robot exploits quantum technology, the bizarre aspects of quantum physics come into play. The robot can now make use of one of quantum physics’ most famous and peculiar features, the so-called superposition principle. This can be intuitively understood by imagining the robot taking the two turns, left and right, at the same time. “This key feature enables the implementation of a quantum search algorithm that reduces the number of trials for learning the correct path. As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” says Hans Briegel, who developed the theoretical ideas on quantum learning agents with his group at the University of Innsbruck.

 

Quantum Computers can model highly correlated distributions in a way classical computers can’t.

This is true. However, recent research has shown that this capability by itself is insufficient for a quantum advantage: some classical models can outperform quantum ones, even on datasets generated by quantum processes.

 

Therefore, fully harnessing quantum computing for machine learning and artificial intelligence will require the development of dedicated hardware and quantum machine learning algorithms. Scientists have started exploring general-purpose quantum computers for machine learning as well as developing dedicated architectures for quantum machine learning.

 

Demonstration of quantum computing applied to machine learning in 2017

Researchers from Caltech and the University of Southern California (USC) reported in 2017 the first application of quantum computing to machine learning. Employing quantum-compatible machine learning techniques, they developed a method for extracting a rare Higgs boson signal from copious noise. The Higgs is the particle that was predicted to imbue elementary particles with mass and was discovered at the Large Hadron Collider in 2012. The quantum program seeks patterns within a data set to tell meaningful data from junk. The new quantum machine learning method was found to perform well even with small data sets, unlike its standard counterparts.

 

The MIT-IBM researchers performed their simple calculation using a two-qubit quantum computer. Because the machine is so small, it doesn’t prove that bigger quantum computers will have a fundamental advantage over conventional ones, but it suggests that would be the case. The largest quantum computers available today have around 50 qubits, although not all of them can be used for computation because of the need to correct for errors that creep in as a result of the fragile nature of these quantum bits.

 

“We are still far off from achieving quantum advantage for machine learning,” the IBM researchers, led by Jay Gambetta, write in a blog post. “Yet the feature-mapping methods we’re advancing could soon be able to classify far more complex data sets than anything a classical computer could handle. What we’ve shown is a promising path forward.”

 

“My colleagues and I instead hope to build the first dedicated neural network computer, using the latest ‘quantum’ technology rather than AI software,” wrote Michael Hartmann, a professor at Heriot-Watt University who’s leading the research, in a new essay for The Conversation. “By combining these two branches of computing, we hope to produce a breakthrough which leads to AI that operates at unprecedented speed, automatically making very complex decisions in a very short time.”

 

“Our Quromorphic project aims to radically speed up this process and boost the amount of input data that can be processed by building neural networks that work on the principles of quantum mechanics,” Hartmann explained. “These networks will not be coded in software, but directly built in hardware made of superconducting electrical circuits. We expect that this will make it easier to scale them up without errors.”

 

“To put the technology to its full use will involve creating larger devices, a process that may take ten years or more as many technical details need to be very precisely controlled to avoid computational errors,” Hartmann wrote. “But once we have shown that quantum neural networks can be more powerful than classical AI software in a real world application, it would very quickly become some of the most important technology out there.”

 

D-Wave Quantum computer learned to ‘see’ trees in 2017

Scientists have trained a quantum computer to recognize trees, a step towards using such computers for complicated machine learning problems like pattern recognition and computer vision. The study lays the groundwork for better climate forecasting, says team member Ramakrishna Nemani, an earth scientist at NASA’s Advanced Supercomputing Division in Mountain View, California.

 

By poring over NASA’s satellite imagery, quantum processors could take a machine learning approach to uncover new patterns in how weather moves across the world over the course of weeks, months, or even years, he says. “Say you’re living in India—you might get an advance notice of a cyclone 6 months ahead of time because we see a pattern of weather in northern Canada.”

 

In the new study, physicist Edward Boyda of St. Mary’s College of California in Moraga and colleagues fed hundreds of NASA satellite images of California into the D-Wave 2X processor, which contains 1152 qubits. The researchers asked the computer to consider dozens of features—hue, saturation, even light reflectance—to determine whether clumps of pixels were trees as opposed to roads, buildings, or rivers. They then told the computer whether its classifications were right or wrong so that the computer could learn from its mistakes, tweaking the formula it uses to determine whether something is a tree.

 

After it was trained, the D-Wave was 90% accurate in recognizing trees in aerial photographs of Mill Valley, California, the team reports in PLOS ONE. “Classification is a tricky problem; there are short trees, tall trees, trees next to each other, next to buildings—all sorts of combinations,” says Nemani.

 

If quantum computers can speed up linear algebra subroutines, they can speed up machine learning.

We all know that linear algebra is at the core of machine learning. In particular, a group of linear algebra routines known as BLAS (Basic Linear Algebra Subroutines) forms the foundation of virtually all machine learning algorithms. These subroutines include matrix multiplication, Fourier transforms, and solving linear systems.
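For reference, here are those workhorse operations in a few lines of numpy; these are the classical primitives that quantum linear algebra algorithms aim to accelerate:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
b = rng.normal(size=4)

C = A @ B                     # matrix multiplication
f = np.fft.fft(b)             # discrete Fourier transform
x = np.linalg.solve(A, b)     # solve the linear system Ax = b

print(np.allclose(A @ x, b))  # True: x indeed solves the system
```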

 

Quantum machine learning algorithms aim to use the advantages of quantum computation to improve classical methods of machine learning, for example by developing efficient implementations of expensive classical algorithms on a quantum computer. The promise is that quantum computers will allow quick analysis and integration of our enormous data sets, transforming our machine learning and artificial intelligence capabilities.

 

Speaking at MIT Technology Review’s EmTech Digital conference in San Francisco, Dario Gil of IBM said that quantum computers, which take advantage of the mind-bending phenomena of quantum physics, could have a big impact on one of the hottest fields in technology: artificial intelligence.

 

During his talk, Gil, who oversees IBM’s AI research efforts and its commercial quantum computing program, presented the results of a simple classification experiment that involves using machine learning to organize data into similar groups (in this case, dots with similar colors). IBM’s team first ran the task on a quantum machine without entangling the qubits, which produced an error rate of 5 percent. The second time around, it ran the same experiment with the qubits entangled, which produced an error rate of just 2.5 percent.

 

What this suggests is that as quantum computers get better at harnessing and entangling qubits, they’ll also get better at tackling machine-learning problems. All of the linear algebra subroutines above do obtain exponential speedups when run on a quantum computer. However, to obtain these speedups, we need a quantum memory that holds quantum data and communicates with a quantum processor; then, and only then, can we reach exponential speedups. Currently, our systems are not purely quantum: our data is classical and is stored in a classical memory, and this data is then communicated to a quantum processor. The communication between classical memory and the quantum processor is why an exponential speedup can’t be reached.

 

Quantum Machine Learning algorithms

Recently, however, a new family of quantum algorithms has come along to challenge this relatively narrow view of what a quantum computer would be useful for. Not only do these new algorithms promise exponential speedups over classical algorithms, but they do so for eminently practical problems involving machine learning, clustering, classification, and finding patterns in huge amounts of data, writes Scott Aaronson. “The algorithm at the center of the ‘quantum machine learning’ mini-revolution is called HHL, after my colleagues Aram Harrow, Avinatan Hassidim, and Seth Lloyd, who invented it in 2008.”

 

“HHL attacks one of the most basic problems in all of science: namely, solving a system of linear equations. Given an n × n real matrix A and a vector b, the goal of HHL is to (approximately) solve the system Ax = b for x, and to do so in an amount of time that scales only logarithmically with n, the number of equations and unknowns. Classically, this goal seems hopeless, since n² steps would be needed even to examine all the entries of A, and n steps would be needed even to write down the solution vector x. By contrast, by exploiting the exponential character of the wave function, HHL promises to solve a system of n equations in only about log n steps.”
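To be clear about what is being claimed, the following purely classical sketch states the problem HHL addresses; the quantum algorithm would output a quantum state whose amplitudes are proportional to x, not the vector x itself (reading out all of x entry by entry would still cost n steps):

```python
import numpy as np

# The task HHL speeds up: given A and b, find x with A x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # HHL assumes a Hermitian A (a general
b = np.array([1.0, 0.0])             # matrix can be embedded in one)

x = np.linalg.solve(A, b)            # classical dense solve: ~n^3 steps
x_state = x / np.linalg.norm(x)      # what HHL encodes: x as a normalized
                                     # quantum state, accessible only through
print(x, x_state)                    # measurement statistics
```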

 

In the years since HHL, quantum algorithms achieving “exponential speedups over classical algorithms” have been proposed for other major application areas, including k-means clustering, support vector machines, data fitting, and even computing certain properties of Google PageRank vectors.

 

Quantum algorithm can analyse huge matrices, enabling AI to think faster

The first quantum linear system algorithm was proposed in 2009 by a different group of researchers; it kick-started research into quantum forms of machine learning, or artificial intelligence.

 

A linear system algorithm works on a large matrix of data. For example, a trader might be trying to predict the future price of goods. The matrix may capture historical data about price movements over time and data about features that could be influencing these prices, such as currency exchange rates. The algorithm calculates how strongly each feature is correlated with another by ‘inverting’ the matrix. This information can then be used to extrapolate into the future.

 

“There is a lot of computation involved in analysing the matrix. When it gets beyond say 10,000 by 10,000 entries, it becomes hard for classical computers,” explains Zhao. This is because the number of computational steps goes up rapidly with the number of elements in the matrix: every doubling of the matrix size increases the length of the calculation roughly eight-fold.
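That eight-fold figure is just the cubic cost of a dense solve (2³ = 8), which is easy to observe empirically. A rough, machine-dependent timing sketch:

```python
import time
import numpy as np

def solve_seconds(n, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, n))
    b = rng.normal(size=n)
    start = time.perf_counter()
    np.linalg.solve(A, b)             # dense solve, roughly O(n^3)
    return time.perf_counter() - start

t1, t2 = solve_seconds(1000), solve_seconds(2000)
print(f"1000x1000: {t1:.3f}s   2000x2000: {t2:.3f}s   ratio: {t2 / t1:.1f}")
# With cubic scaling, the ratio should land near 2^3 = 8.
```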

 

The 2009 algorithm could cope better with bigger matrices, but only if the data in them is what’s known as ‘sparse’. In these cases, there are limited relationships among the elements, which is often not true of real-world data. Zhao, Prakash and Wossnig present a new algorithm that is faster than both the classical and the previous quantum versions, without restrictions on the kind of data it works for. Zhao is a PhD student at the Singapore University of Technology and Design; Anupam Prakash is at the Centre for Quantum Technologies, National University of Singapore; and collaborator Leonard Wossnig was then at ETH Zurich and the University of Oxford.

 

As a rough guide, for a 10,000 × 10,000 matrix, the classical algorithm would take on the order of a trillion computational steps (roughly n³ = 10¹² for n = 10,000), the first quantum algorithm tens of thousands of steps, and the new quantum algorithm just hundreds of steps. The algorithm relies on a technique known as quantum singular value estimation.

 

There have been a few proof-of-principle demonstrations of the earlier quantum linear system algorithm on small-scale quantum computers. Zhao and his colleagues hope to work with an experimental group to run a proof-of-principle demonstration of their algorithm, too. They also want to do a full analysis of the effort required to implement the algorithm, checking what overhead costs there may be.

 

Showing a real quantum advantage over the classical algorithms will need bigger quantum computers. Zhao estimates, “We’re maybe looking at three to five years in the future when we can actually use the hardware built by the experimentalists to do meaningful quantum computation with application in artificial intelligence.”

 

Quantum machine learning over infinite dimensions

Researchers have recently generalized quantum machine learning to more complex, but still remarkably practical, infinite-dimensional systems. They present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their classical counterparts, and they map out an experimental implementation that can serve as a blueprint for future photonic demonstrations.

 

Physicists have developed a quantum machine learning algorithm that can handle infinite dimensions—that is, it works with continuous variables (which have an infinite number of possible values on a closed interval) instead of the typically used discrete variables (which have only a finite number of values).

 

The researchers, Hoi-Kwan Lau et al., have published a paper on generalizing quantum machine learning to infinite dimensions in Physical Review Letters. As the physicists explain, quantum machine learning combines the speed of quantum computing with the ability to learn and adapt, as offered by machine learning.

 

One of the biggest advantages of having a quantum machine learning algorithm for continuous variables is that it can theoretically operate much faster than classical algorithms. Since many science and engineering models involve continuous variables, applying quantum machine learning to these problems could potentially have far-reaching applications.

 

Although the results of the study are purely theoretical, the physicists expect that the new algorithm for continuous variables could be experimentally implemented using currently available technology. The implementation could be done in several ways, such as by using optical systems, spin systems, or trapped atoms.

 

Cambridge Quantum Computing Pioneers Quantum Machine Learning Methods for Reasoning, reported in March 2021

Scientists at Cambridge Quantum Computing (CQC) have developed methods and demonstrated that quantum machines can learn to infer hidden information from very general probabilistic reasoning models. These methods could improve a broad range of applications, where reasoning in complex systems and quantifying uncertainty are crucial. Examples include medical diagnosis, fault-detection in mission-critical machines, or financial forecasting for investment management.

 

In a paper published on the preprint repository arXiv, CQC researchers established that quantum computers can learn to deal with the uncertainty that is typical of real-world scenarios, and which humans can often handle in an intuitive way. The research team was led by Dr. Marcello Benedetti with co-authors Brian Coyle, Dr. Michael Lubasch, and Dr. Matthias Rosenkranz, and is part of the Quantum Machine Learning division of CQC, headed by Dr. Mattia Fiorentini.

 

The paper implements three proofs of principle on simulators and on an IBM Q quantum computer to demonstrate quantum-assisted reasoning on:

  • inference on random instances of a textbook Bayesian network;
  • inferring market regime switches in a hidden Markov model of a simulated financial time series;
  • a medical diagnosis task known as the “lung cancer” problem.

The proofs of principle suggest quantum machines using highly expressive inference models could enable new applications in diverse fields. The paper draws on the fact that sampling from complex distributions is considered among the most promising ways towards a quantum advantage in machine learning with today’s noisy quantum devices. This pioneering work indicates how quantum computing, even in its current early stage, is an effective tool for studying science’s most ambitious questions, such as the emulation of human reasoning.
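As a concrete illustration of what “inference” means in the first of those tasks, here is a minimal classical example on a two-node Bayesian network (illustrative numbers only; this is not the CQC method): inferring a hidden cause from an observed effect via Bayes’ rule.

```python
# Classical inference in a tiny Bayesian network: Cloudy -> Rain.
# Query: P(Cloudy | Rain), the hidden cause given the observed effect.
p_cloudy = 0.5
p_rain_given_cloudy = 0.8
p_rain_given_clear = 0.1

p_rain = (p_rain_given_cloudy * p_cloudy
          + p_rain_given_clear * (1 - p_cloudy))
p_cloudy_given_rain = p_rain_given_cloudy * p_cloudy / p_rain

print(f"P(rain) = {p_rain:.2f}")                        # 0.45
print(f"P(cloudy | rain) = {p_cloudy_given_rain:.2f}")  # 0.89
```

In realistic networks with many variables, exact inference like this becomes intractable, which is why sampling-based approaches, including quantum-assisted ones, are of interest.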

 

“Classical computers in particular are very good at executing procedural tasks; they’re not good at modelling probability, modelling uncertainty,” he explains. But he says quantum computers will, by their nature, be capable of dealing with a range of probabilities, “so there seems to be a sort of natural match here.” The hope is that this new type of computer will be able to perform well in areas where there is plenty of uncertainty, from diagnosing medical conditions from scans to predicting where financial markets are heading.

 

Machine learning scientists across industries, along with quantum software and hardware developers, are the groups that should benefit most from this development in the near term. A Medium article accompanying the scientific paper provides an accessible exposition of the principles behind this pioneering work, as well as descriptions of the proofs of principle implemented by the team.

 

With quantum devices set to improve in the coming years, this research lays the groundwork for quantum computing to be applied to probabilistic reasoning and its direct application in engineering and business-relevant problems.
