It may still be decades before quantum computers are ready to solve problems that today’s classical computers aren’t fast or efficient enough to solve, but the emerging “probabilistic computer” could bridge the gap between classical and quantum computing.
Our minds are able to explore vast spaces of possible thoughts, perceptions, and explanations, and identify the probable and useful ones in milliseconds. To emulate these capacities, researchers are building a new generation of probabilistic computing systems that integrate probability and randomness into the basic building blocks of software and hardware.
Probabilistic reasoning has been used for a wide variety of tasks such as predicting stock prices, recommending movies, diagnosing computers, detecting cyber intrusions and recognizing images. However, until recently (partly due to limited computing power), probabilistic programming was limited in scope, and most inference algorithms had to be written manually for each task.
Probabilistic reasoning can be used to create systems that help make decisions in the face of uncertainty. This is why probabilistic computing is a key component of AI and central to addressing these challenges. Probabilistic computing will allow future systems to comprehend and compute with the uncertainties inherent in natural data, enabling us to build computers capable of understanding, prediction and decision-making.
A key barrier to AI today is that natural data fed to a computer is largely unstructured and “noisy,” says Dr. Michael Mayberry, chief technology officer of Intel Corporation.
It’s easy for humans to sort through natural data. For example, if you are driving a car on a residential street and see a ball roll in front of you, you would stop, assuming there is a small child not far behind that ball. Computers today don’t do this: they are built to assist humans with precise productivity tasks. Making computers efficient at dealing with probabilities at scale is central to our ability to transform current systems and applications from advanced computational aids into intelligent partners for understanding and decision-making.
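To make that concrete, here is a toy Bayes’ rule calculation for the ball-and-child scenario, in Python; every probability in it is invented for illustration:

```python
# Toy Bayesian update for the driving example.
# All probabilities here are invented for illustration.
p_child = 0.01                  # prior: a child is playing near this street
p_ball_given_child = 0.30       # chance a ball rolls out if a child is nearby
p_ball_given_no_child = 0.001   # chance a ball rolls out otherwise

# Total probability of seeing a ball roll into the street.
p_ball = (p_ball_given_child * p_child
          + p_ball_given_no_child * (1 - p_child))

# Bayes' rule: how likely is a child, given that we saw the ball?
p_child_given_ball = p_ball_given_child * p_child / p_ball
print(f"P(child | ball) = {p_child_given_ball:.2f}")  # ~0.75
```

Even with a tiny prior, a single observation shifts the machine’s belief enough to justify braking; scaling this kind of update to millions of noisy inputs is the hard part.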
According to Mayberry, the original wave of AI was based on logic and on writing down rules, an approach known as ‘classical reasoning’. Probabilistic computing takes a different tack: in some designs, the energy supplied to the processing units is deliberately lowered, which increases the probability that individual operations go wrong; probabilistic algorithms are built to tolerate, and even exploit, this randomness.
According to USFCA, probabilistic computers turn a (forward) simulation problem into a (reverse) inference problem. Navia Systems, a Berkeley-headquartered startup that develops probabilistic computers, describes the technology as being as well suited to making judgments in the presence of uncertainty as traditional computing technology is to large-scale record keeping. The startup, founded in 2007, emphasizes that unlike current computers, which are built for logical deduction and precise arithmetic, probabilistic machines and programs are built to handle ambiguity and learn from experience.
Probabilistic reasoning for military situation assessment
The use of probabilistic reasoning is a key capability in information fusion systems for a variety of domains, such as military situation assessment. Decision-making in time-critical, high-stress, information-overloaded environments, such as the tactical military domain, is a complex research problem that can benefit from the application of information fusion techniques. Information fusion is the process of acquiring, aligning, correlating, associating and combining relevant information from various sources into one or more representational formats appropriate for interpreting the information. Signal data provides only a partial picture of the battlespace: it may be incomplete, incorrect, contradictory or uncertain, and it may have various degrees of latency. It may also be affected by the environment or by enemy deception or confusion, which creates false or misleading data. To derive situation assessments from signal data, we need to model the battlespace and reason about the locations, statuses and relationships of the military units within it.
Bayesian networks (BNs) are a popular technique that has been used in the military domain to reason about causal and perceptual relationships between objects in the battlespace. However, BNs are rigid: they model the domain with a predefined set of random variables and a fixed topology that applies to all problem instances of the domain. Hence, they cannot represent uncertainty about the existence, number or configuration of objects in the battlespace. Researchers are therefore applying richer probabilistic reasoning methods to military situation assessment, including Probabilistic Relational Models and Object-Oriented Probabilistic Relational Models.
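As a minimal illustration of BN-style inference, here is a tiny, entirely hypothetical network in Python: a unit’s presence influences both radar contacts and intercepted radio traffic, and inference by enumeration gives the probability that a unit is present given the evidence. Neither the structure nor the numbers come from any fielded system.

```python
# A toy Bayesian network for situation assessment. The structure and all
# numbers are hypothetical: a hostile unit's presence influences both
# radar contacts and intercepted radio traffic.
P_unit = {True: 0.1, False: 0.9}                 # P(Unit present)
P_radar = {True: {True: 0.80, False: 0.20},      # P(Radar contact | Unit)
           False: {True: 0.05, False: 0.95}}
P_radio = {True: {True: 0.60, False: 0.40},      # P(Radio traffic | Unit)
           False: {True: 0.10, False: 0.90}}

def joint(unit, radar, radio):
    """Joint probability of one full assignment of the network."""
    return P_unit[unit] * P_radar[unit][radar] * P_radio[unit][radio]

# Inference by enumeration: P(Unit | radar contact seen, no radio traffic).
radar, radio = True, False
num = joint(True, radar, radio)
den = sum(joint(u, radar, radio) for u in (True, False))
print(f"P(unit present | evidence) = {num / den:.2f}")  # ~0.44
```

Note that the variables and topology are fixed in advance, which is exactly the rigidity that relational extensions of BNs try to remove.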
Research into probabilistic computing is not a new area of study, but the improvements in high-performance computing and deep learning algorithms may lead probabilistic computing into a new era. In the next few years, we expect that research in probabilistic computing will lead to significant improvements in the reliability, security, serviceability and performance of AI systems, including hardware designed specifically for probabilistic computing. These advancements are critical to deploying applications into the real world – from smart homes to smart cities.
Engineers at Purdue University and Tohoku University build probabilistic computing hardware
Engineers at Purdue University and Tohoku University in Japan have built the first hardware to demonstrate how the fundamental units of what would be a probabilistic computer—called p-bits—are capable of performing a calculation that quantum computers would usually be called upon to perform.
The study, published in Nature in September 2019, introduces a device that serves as a basis for building probabilistic computers to more efficiently solve problems in areas such as drug research, encryption and cybersecurity, financial services, data analysis and supply chain logistics.
Today’s computers store and use information in the form of zeroes and ones called bits. Quantum computers use qubits that can be both zero and one at the same time. In 2017, a Purdue research group led by Supriyo Datta, the university’s Thomas Duncan Distinguished Professor of Electrical and Computer Engineering, proposed the idea of a probabilistic computer using p-bits that can be either zero or one at any given time and fluctuate rapidly between the two.
“There is a useful subset of problems solvable with qubits that can also be solved with p-bits. You might say that a p-bit is a ‘poor man’s qubit,’” Datta said.
Whereas qubits need extremely cold temperatures to operate, p-bits work at room temperature like today’s electronics, so existing hardware could be adapted to build a probabilistic computer, the researchers say.
The team built a device that is a modified version of magnetoresistive random-access memory, or MRAM, which some types of computers use today to store information. The technology uses the orientation of magnets to create states of resistance corresponding to zero or one.
Tohoku University researchers William Borders, Shusuke Fukami and Hideo Ohno altered an MRAM device, making it intentionally unstable to better facilitate the ability of p-bits to fluctuate. Purdue researchers combined this device with a transistor to build a three-terminal unit whose fluctuations could be controlled. Eight such p-bit units were interconnected to build a probabilistic computer.
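In software, such a network can be modeled behaviorally. The sketch below assumes the update rule commonly given in the p-bit literature, m_i = sgn(tanh(I_i) − r) with r drawn uniformly from (−1, 1), where the input I_i is a weighted sum of the other p-bits; the couplings and biases here are arbitrary placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Behavioral model of a small p-bit network (a software sketch, not the
# MRAM device itself). Each p-bit takes value -1 or +1 and fluctuates
# randomly, biased by an input I_i computed from its neighbours:
#   m_i = sign(tanh(I_i) - r),  r ~ Uniform(-1, 1)
n = 8
J = rng.normal(scale=0.5, size=(n, n))   # hypothetical couplings
J = (J + J.T) / 2                        # symmetric, like an Ising model
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)        # hypothetical biases
m = rng.choice([-1, 1], size=n)          # random initial state

for _ in range(10_000):                  # sequential, Gibbs-style updates
    i = rng.integers(n)
    I = J[i] @ m + h[i]
    m[i] = 1 if np.tanh(I) > rng.uniform(-1, 1) else -1

print(m)  # one sample from the network's stationary distribution
```

With symmetric couplings, these updates amount to Gibbs sampling of a Boltzmann distribution, which is why networks of p-bits can be steered toward low-energy states that encode answers.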
The circuit successfully solved what is often considered a “quantum” problem: breaking down, or factoring, numbers such as 35,161 and 945 into smaller numbers, a calculation known as integer factorization. These calculations are well within the capabilities of today’s classical computers, but the researchers believe that the probabilistic approach demonstrated in the study would take up much less space and energy.
“On a chip, this circuit would take up the same area as a transistor, but perform a function that would have taken thousands of transistors to perform. It also operates in a manner that could speed up calculation through the parallel operation of a large number of p-bits,” said Ahmed Zeeshan Pervaiz, a Ph.D. student in electrical and computer engineering at Purdue.
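The flavour of the computation can be suggested with a toy software analogue: randomly flip the bits of two candidate factors, favouring states whose product matches the target. The plain Metropolis sampler below is a stand-in for, not a reproduction of, the hardware circuit.

```python
import math
import random

def factor_by_sampling(N, n_bits=6, steps=200_000, beta=0.005):
    """Sample the bits of candidate factors x and y of N, favouring
    low energy E = |N - x*y|; E == 0 means a factorization was found."""
    bits = [random.randint(0, 1) for _ in range(2 * n_bits)]

    def decode():
        x = sum(b << i for i, b in enumerate(bits[:n_bits]))
        y = sum(b << i for i, b in enumerate(bits[n_bits:]))
        return x, y

    def energy():
        x, y = decode()
        return abs(N - x * y)

    e = energy()
    for _ in range(steps):
        i = random.randrange(len(bits))
        bits[i] ^= 1                    # propose flipping one bit
        e_new = energy()
        if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
            e = e_new                   # accept the flip
        else:
            bits[i] ^= 1                # reject it: undo the flip
        if e == 0:
            return decode()
    return None                         # no factorization found this run

print(factor_by_sampling(945))  # may print (27, 35), (21, 45), (15, 63), ...
```

Because uphill moves are occasionally accepted, the sampler can escape local minima instead of getting stuck, which is the same intuition behind letting p-bits fluctuate.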
Realistically, hundreds of p-bits would be needed to solve bigger problems—but that’s not too far off, the researchers say.
“In the near future, p-bits could better help a machine to learn like a human does or optimize a route for goods to travel to market,” said Kerem Camsari, a Purdue postdoctoral associate in electrical and computer engineering.
Probabilistic programming (PP)
Probabilistic programming (PP) is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general purpose programming in order to make the former easier and more widely applicable.
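A minimal sketch of the paradigm in plain Python (a dedicated PP language automates far more of this): the model is an ordinary generative program, and a generic inference routine, here likelihood weighting, recovers the posterior without any model-specific derivation.

```python
import random

# A probabilistic program is a generative story: sample the latent
# quantities, then score the data they would produce.
def model():
    return random.random()          # latent coin bias, uniform prior on [0, 1]

def likelihood(bias, data):
    p = 1.0
    for flip in data:
        p *= bias if flip else (1 - bias)
    return p

# Generic inference by likelihood weighting: run the program many times
# and weight each run by how well it explains the data. No model-specific
# derivation is needed; that is the point of the paradigm.
data = [1, 1, 0, 1, 1, 1, 0, 1]     # 6 heads, 2 tails
samples = [model() for _ in range(100_000)]
weights = [likelihood(b, data) for b in samples]
posterior_mean = sum(b * w for b, w in zip(samples, weights)) / sum(weights)
print(f"posterior mean bias ~ {posterior_mean:.2f}")  # analytic answer: 0.70
```

The same generic routine would work unchanged if the model were swapped for something far more complex, which is what lets a 50-line program replace thousands of lines of hand-derived inference code.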
That scope is expanding rapidly: in 2015, a 50-line probabilistic computer vision program was used to generate 3D models of human faces based on 2D images of those faces. The program used inverse graphics as the basis of its inference method, and was built using the Picture package in Julia. This made possible “in 50 lines of code what used to take thousands”. More recent work using the Gen programming system (also written in Julia) has applied probabilistic programming to a wide variety of tasks.
Probabilistic programming has also been combined with differentiable programming using the Julia package Zygote.jl, allowing it to be applied to additional tasks in which parts of the model need to be differentiated. Differentiable programming also eases the implementation of gradient-based MCMC inference methods such as Hamiltonian Monte Carlo (HMC).
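To sketch why gradients matter, here is a minimal HMC sampler for a standard normal target; the gradient of the log-density is written by hand, which is exactly the piece that differentiable programming supplies automatically for arbitrary models.

```python
import numpy as np

rng = np.random.default_rng(1)

def logp(x):
    return -0.5 * x @ x   # log-density of a standard normal, up to a constant

def grad_logp(x):
    return -x             # written by hand here; autodiff would supply this

def hmc_step(x, step=0.1, n_leapfrog=20):
    p = rng.standard_normal(x.shape)               # resample momentum
    x_new = x.copy()
    p_new = p + 0.5 * step * grad_logp(x_new)      # initial half kick
    for _ in range(n_leapfrog):                    # leapfrog integration
        x_new = x_new + step * p_new
        p_new = p_new + step * grad_logp(x_new)
    p_new = p_new - 0.5 * step * grad_logp(x_new)  # undo the extra half kick
    # Metropolis correction keeps the sampler exact despite integration error.
    log_accept = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

x = np.zeros(2)
draws = []
for _ in range(5_000):
    x = hmc_step(x)
    draws.append(x)
print(np.mean(draws, axis=0), np.var(draws, axis=0))  # ~[0 0] and ~[1 1]
```

Gradient information lets each proposal travel far across the distribution while keeping a high acceptance rate, which is why HMC scales to models where random-walk samplers stall.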