
Dedicated AI chips and adaptive neuromorphic (brain-like) computing chips for real-time machine learning are the next big thing

Artificial Intelligence technologies aim to develop computers or robots that surpass the abilities of human intelligence in tasks such as learning and adaptation; reasoning and planning; decision making and autonomy; creativity; and extracting knowledge and making predictions from data.

 

AI includes both logic-based and statistical approaches. Within AI is a large subfield called machine learning (ML), which attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task.

 

ML uses algorithms that learn to perform classification or problem solving through a process called training. Algorithms such as neural networks, support vector machines, and reinforcement learning extract information and infer patterns from recorded data, so that computers can learn from previous examples to make good predictions about new ones.

 

For training, these algorithms require data sets covering hundreds or even thousands of relevant features. For this reason, painstaking selection, extraction, and curation of feature sets is often required.
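The train-then-predict loop these algorithms share can be sketched with a toy perceptron. The data set, learning rate, and epoch count below are all illustrative, not drawn from any real application.

```python
# A minimal sketch of "learning from examples": a perceptron trained on a
# toy, linearly separable data set. It learns a decision rule from labeled
# examples instead of being explicitly programmed with one.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                          # misclassified: nudge weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy "features": points well above the line x1 + x2 = 1 are class +1.
samples = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
labels  = [-1, -1, 1, 1]
w, b = train_perceptron(samples, labels)
```

After training, the learned weights generalize to points the algorithm never saw, which is the essential behavior the article describes.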

 

To address this challenge, scientists turned to a subfield of machine learning called brain-inspired computation: programs or algorithms that take some aspect of their basic form or functionality from the way the brain works.

 

The main computational element of the brain is the neuron. There are approximately 86 billion neurons in the average human brain. Neurons are connected together through a number of elements entering them, called dendrites, and an element leaving them, called an axon. The neuron accepts the signals entering it via the dendrites, performs a computation on those signals, and generates a signal on the axon. These input and output signals are referred to as activations. The axon of one neuron branches out and connects to the dendrites of many other neurons. The connection between a branch of an axon and a dendrite is called a synapse. There are estimated to be 10^14 to 10^15 synapses in the average human brain.

 

Neural networks, also known as artificial neural networks (ANNs), are typically organized into layers: an input layer, one or more hidden layers, and an output layer. Each layer contains a large number of processing nodes, or artificial neurons, and each node has an associated weight and threshold. Data enter at the input layer, are divided among its nodes, and propagate to the neurons in the middle layers of the network, frequently called “hidden layers.” Each node manipulates the data it receives and passes the results on to nodes in the next layer, which do the same, and so on. The weighted sums from one or more hidden layers are ultimately propagated to the output layer, which presents the final outputs of the network to the user; the output of the final layer yields the solution to the computational problem at hand.
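The weighted-sum-and-activation structure described above can be sketched in a few lines. The weights and biases here are arbitrary illustrations, not a trained network.

```python
import math

# A minimal sketch of a layered network: data enter the input layer, each
# hidden node computes a weighted sum of its inputs plus a bias and applies
# an activation, and the results propagate to the output layer.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each node's output is activation(weighted sum + bias)."""
    return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# 2 inputs -> 3 hidden nodes -> 1 output node (all weights illustrative)
hidden_w = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
hidden_b = [0.0, 0.1, -0.1]
out_w    = [[1.0, -1.0, 0.5]]
out_b    = [0.2]

x = [0.7, 0.3]
hidden = layer(x, hidden_w, hidden_b)   # hidden-layer activations
output = layer(hidden, out_w, out_b)    # final network output
```

In a deep network the `layer` step simply repeats many times, which is all that separates the shallow sketch above from the DNNs discussed next.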

 

Within the domain of neural networks there is an area called deep learning, in which the networks have more than three layers, typically ranging from five to more than a thousand. Deep neural networks (DNNs) are capable of learning high-level features with more complexity and abstraction than shallower neural networks. DNNs also address a limitation of classical machine learning: they learn not only classifications but also the relevant features themselves. This capability allows deep learning systems to be trained using relatively unprocessed data (e.g., image, video, or audio data) rather than feature-based training sets.

 

To do this, deep learning requires massive training sets that may be an order of magnitude larger than those needed for other machine learning algorithms. When such data is available, deep learning systems typically perform significantly better than all other methods.  Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.

 

One of the major factors accounting for the recent success of deep neural networks is the significant leap in the availability of computational processing power. When Google’s computers roundly beat the world-class Go champion Lee Sedol, it marked a milestone in artificial intelligence. The winning computer program, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what’s known as deep learning, a strategy by which neural networks involving many layers of processing are configured in an automated fashion to solve the problem at hand. In addition, the computers Google used to defeat Sedol contained special-purpose hardware, a computer card Google calls its Tensor Processing Unit. Reportedly it uses an application-specific integrated circuit (ASIC) to speed up deep-learning calculations.

 

Both training and execution of large-scale DNNs require vast computing resources, leading to high power requirements and communication overhead. Scott Leishman, a computer scientist at Nervana, notes that another computationally intensive task, bitcoin mining, went from being run on CPUs to GPUs to FPGAs and, finally, ASICs because of the gains in power efficiency from such customization. “I see the same thing happening for deep learning,” he says. Researchers are also developing neuromorphic chips based on silicon photonics and memristors.

 

Tsinghua University and the Beijing Innovation Center for Future Chips (BICFC) published their White Paper on AI Chip Technologies to inform readers about the competing technologies and the development trends of AI chips. The paper mainly discusses three types of AI chips: universal chips that can support AI applications efficiently through hardware and software optimization, such as GPUs; machine learning accelerators geared towards neural networks and deep learning, such as the TPU; and emerging computing chips inspired by biological brains, such as neuromorphic chips.

 

Cloud-based AI chips such as Nvidia’s GPUs and Google’s TPU feature high performance and large memory bandwidth. They mainly process computations where the requirements include accuracy, parallelism, and data volume. The main priorities for edge-based or mobile AI chips are energy efficiency, response time, cost, and privacy. The paper notes that while the training of neural networks is still done in the cloud, their inferencing is increasingly being executed on the edge. Right now, the networks are pretty complex and are mostly run on high-power GPUs in the cloud, which mobile devices can access through a Wi-Fi connection.

 

AI chip development is being constrained by two bottlenecks. The Von Neumann bottleneck refers to the significant latency and energy overhead when Von-Neumann-based chips transfer massive amounts of data between storage and memory; this is a growing problem as the data used in AI applications has increased by orders of magnitude. The other bottleneck involves CMOS processes and devices: Moore’s Law is losing its pace, and future dimensional scaling of silicon CMOS is expected to become less effective.

 

New requirements driving development of Neuromorphic chips

A grand challenge in computing is the creation of machines that can proactively interpret and learn from data in real time, solve unfamiliar problems using what they have learned, and operate with the energy efficiency of the human brain. While complex machine-learning algorithms and advanced electronic hardware  that can support large-scale learning have been realized in recent years and support applications such as speech recognition and computer vision, emerging computing challenges require real-time learning, prediction, and automated decision-making in diverse domains such as autonomous vehicles, military applications, healthcare informatics and business analytics.

 

A salient feature of these emerging domains is the large and continuously streaming data sets that these applications generate, which must be processed efficiently enough to support real-time learning and decision making. This challenge requires novel hardware techniques and machine-learning architectures. The NSF/DARPA Real-Time Machine Learning (RTML) solicitation seeks to lay the foundation for next-generation co-design of RTML algorithms and hardware, with the principal focus on developing novel hardware architectures and learning algorithms in which all stages of training (including incremental training, hyperparameter estimation, and deployment) can be performed in real time.

 

Now organizations like MIT and IBM are introducing a next generation of neuromorphic chips that can run on mobile devices themselves. Over the coming years, deep-learning software will increasingly find its way into applications for smartphones, where it is already used, for example, to detect malware or translate text in images. For those applications, the key will be low-power ASICs. The drone manufacturer DJI is already using something akin to a deep-learning ASIC in its Phantom 4 drone, which uses a special visual-processing chip made by California-based Movidius to recognize obstructions.

 

Scientists from China’s Zhejiang province have developed a computer chip that works much like the brain, the media reported. TrueNorth, an advanced brain-like chip developed by IBM, is on the US’ technology embargo list to China, so Chinese scientists had to start their research from scratch.

 

A new U.S. research initiative seeks to develop a processor capable of real-time learning while operating with the “efficiency of the human brain.” The National Science Foundation (NSF) and the Defense Advanced Research Projects Agency jointly announced a “Real Time Machine Learning” project in March 2019, soliciting industry proposals for “foundational breakthroughs” in hardware required to “build systems that respond and adapt in real time.”

 

The National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA) are teaming up through this Real-Time Machine Learning (RTML) program to explore high-performance, energy-efficient hardware and machine-learning architectures that can learn from a continuous stream of new data in real time, through opportunities for post-award collaboration between researchers supported by DARPA and NSF.

 

The AI chips will also enable military applications.  Air Force Research Lab (AFRL) reports good results from using a “neuromorphic” chip made by IBM to identify military and civilian vehicles in radar-generated aerial imagery. The unconventional chip got the job done about as accurately as a regular high-powered computer, using less than a 20th of the energy. The AFRL awarded IBM a contract worth $550,000 in 2014 to become the first paying customer of its brain-inspired TrueNorth chip. It processes data using a network of one million elements designed to mimic the neurons of a mammalian brain, connected by 256 million “synapses.”

 

The enhanced power efficiency of neuromorphic chips allows deploying advanced machine vision, which usually requires a lot of computing power, in places where resources and space are limited. Satellites, high-altitude aircraft, air bases reliant on generators, and small drones could all benefit, says AFRL principal electronics engineer Qing Wu. “Air Force mission domains are air, space, and cyberspace. [All are] very sensitive to power constraints,” he says.

 

Neuromorphic chips

The industry is exploring next-generation chip architectures such as on-chip memories or neuromorphic chips, to reduce the significant costs of data exchange. Neuromorphic computing achieves this brainlike function and efficiency by building artificial neural systems that implement “neurons” (the actual nodes that process information) and “synapses” (the connections between those nodes) to transfer electrical signals using analog circuitry. This enables them to modulate the amount of electricity flowing between those nodes to mimic the varying degrees of strength that naturally occurring brain signals have.

 

Within the brain-inspired computing paradigm there is a subarea called spiking computing. It takes inspiration from the fact that communication along dendrites and axons takes the form of spike-like pulses, and that the information conveyed does not depend on a spike’s amplitude alone: it also depends on when the pulse arrives. The computation that happens in the neuron is a function not of a single value but of the pulse widths and the timing relationships between different pulses.

 

Spiking neural networks can be implemented in software on traditional processors. But it’s also possible to implement them in hardware, as Intel is doing with Loihi. Another project inspired by the brain’s spiking is IBM’s TrueNorth. Neuromorphic systems also introduce a new chip architecture that collocates memory and processing on each individual neuron, instead of having separate designated areas for each.
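The spiking behavior described above can be illustrated with a toy leaky integrate-and-fire (LIF) neuron. The threshold and leak constants below are illustrative, not taken from Loihi or TrueNorth.

```python
# A toy leaky integrate-and-fire (LIF) neuron, the basic unit of spiking
# networks. The membrane "voltage" integrates incoming input, leaks over
# time, and the neuron emits a spike when the voltage crosses a threshold,
# so information is carried in spike timing rather than amplitude alone.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = v * leak + i          # leak, then integrate the input
        if v >= threshold:        # threshold crossed: fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# A steady weak input produces a periodic spike train: the spike *rate*
# and *timing* encode how strong the input was.
spike_times = simulate_lif([0.3] * 20)
```

A stronger input current would shorten the interval between spikes, which is exactly the timing-based coding the paragraph above describes.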

 

A traditional computer chip architecture (known as the von Neumann architecture) typically has a separate memory unit (MU), central processing unit (CPU) and data paths. This means that information needs to be shuttled back and forth repeatedly between these different components as the computer completes a given task. This creates a bottleneck for time and energy efficiency — known as the von Neumann bottleneck.

 

By collocating memory and processing, a neuromorphic chip can handle information in a much more efficient way, enabling chips that are simultaneously very powerful and very efficient. Each individual neuron can perform either processing or memory storage, depending on the task at hand.

 

As traditional processors struggle to meet the demands of compute-intensive artificial intelligence applications, dedicated AI chips are playing an increasingly important role in research, development, and on the cloud and edge.

 

Implementation approaches for neuromorphic computing vary but broadly divide into those trying to use conventional digital circuits (e.g. SpiNNaker) and those trying to actually ‘create’ analog neurons in silicon (e.g. BrainScaleS).

 

The UK’s Graphcore is already making an impact.

Graphcore is a British semiconductor company that develops accelerators for AI and machine learning. It aims to make a massively parallel Intelligence Processing Unit (IPU) that holds the complete machine learning model inside the processor.

 

In July 2017, Graphcore announced their first chip, called the Colossus GC2, a “16 nm massively parallel, mixed-precision floating point processor”, first available in 2018. Packaged with two chips on a single PCI Express card called the Graphcore C2 IPU, it is stated to perform the same role as a GPU in conjunction with standard machine learning frameworks such as TensorFlow. The device relies on scratchpad memory rather than traditional cache hierarchies for its performance.

 

In July 2020, Graphcore presented hardware using a second-generation processor called GC200, built in TSMC’s 7 nm FinFET manufacturing process. GC200 is a 59-billion-transistor, 823-square-millimeter integrated circuit with 1,472 computational cores and 900 MB of local memory.

 

Both the older and newer chips run 6 threads per tile (8,832 threads in total per GC200 chip) in MIMD (Multiple Instruction, Multiple Data) fashion, with distributed local memory as the only form of memory on the device (apart from registers). The newer GC200 chip has about 630 KB per tile, versus 256 KiB per tile in the older C2 chip; tiles are arranged into islands (4 tiles per island), which are in turn arranged into columns, and latency is best within a tile. The IPU uses IEEE FP16 with stochastic rounding, and also single-precision FP32 at lower performance. Code and data executed locally must fit in a tile, but with message passing, all on-chip or off-chip memory can be used, and the AI software stack makes this transparent, e.g. through its PyTorch support.
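The per-chip figures quoted above can be sanity-checked with simple arithmetic: 900 MB of local memory spread evenly over 1,472 tiles works out to roughly 630 KB per tile.

```python
# A quick back-of-the-envelope check of the GC200 figures quoted above.
tiles = 1472
threads_per_tile = 6
total_memory_mb = 900

total_threads = tiles * threads_per_tile          # hardware threads per chip
mem_per_tile_kb = total_memory_mb * 1024 / tiles  # average SRAM per tile

# total_threads == 8832, matching the article's thread count;
# mem_per_tile_kb comes out around 626 KB, i.e. roughly 630 KB per tile.
```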

 

IBM created the TrueNorth computer chip in 2014, comparable in scale to the brain of a honey bee

In 2014, scientists at IBM Research unveiled an advanced neuromorphic (brain-like) computer chip, called TrueNorth, consisting of 1 million programmable neurons and 256 million programmable synapses, comparable to the brain of a honey bee, which contains about 960,000 neurons and roughly 10^9 synapses.

 

“The [TrueNorth] chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt – literally a synaptic supercomputer in your palm,” noted Dharmendra Modha, who leads development of IBM’s brain-inspired chips. “A hypothetical computer to run [a human-scale] simulation in real-time would require 12GW, whereas the human brain consumes merely 20W.”
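Modha’s efficiency figure can be cross-checked against the chip’s quoted power draw with simple arithmetic: 46 billion synaptic operations per second per watt at 70 milliwatts implies roughly 3.2 billion synaptic operations per second.

```python
# Sanity check on the TrueNorth figures quoted above.
ops_per_second_per_watt = 46e9   # 46 billion synaptic ops/s per watt
power_watts = 0.070              # 70 milliwatts

# Sustained throughput implied by the two quoted numbers together.
sustained_ops = ops_per_second_per_watt * power_watts  # ~3.2e9 ops/s
```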

 

Massimiliano Versace, who directs the Boston University Neuromorphics Lab and worked on another part of the Pentagon contract that funded IBM’s work on TrueNorth, says the results are promising. But he notes that IBM’s chip currently comes with trade-offs.

 

It is much easier to deploy neural networks on conventional computers, thanks to software made available by Nvidia, Google, and others. And IBM’s unusual chip is much more expensive. “Ease of use and price are [the] two main factors rowing against specialized neuromorphic chips,” says Versace.

Non-von Neumann Architecture

This feat was made possible by a scalable and flexible non-von Neumann architecture that stores and processes information in a distributed, massively parallel way. Information flows by way of neural spikes, from axons to neurons, modulated by the programmable synapses between them. According to IBM, this unique architecture could solve “a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speed.”

 

The classical von Neumann architecture, in which the processing of information and the storage of information are kept separate, now faces a performance bottleneck. Data travels to and from the processor and memory, but the computer can’t process and store at the same time. By the nature of the architecture, it’s a linear process, and this ultimately leads to the von Neumann “bottleneck.”

 

The chip was built on Samsung’s standard CMOS 28nm process, containing 5.4 billion transistors, with 4096 neurosynaptic cores interconnected via an intrachip network. The chip was built under DARPA’s SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program, which has funded the project since 2008 with approximately $53m (£31.5m) of investment.

 

The chip could be useful in many applications that run complex neural networks in real time, for example multiobject detection and classification, search-and-rescue operations (helping robots identify people), and new computing platforms for mobile, cloud, and distributed sensor applications. IBM researchers suggest that traditional computers work like the left side of our brain, similar to a fast number-crunching calculator. They compare TrueNorth to the right side of our brain, likening the system to “slow, sensory, pattern recognizing machines.”

IBM’s New Do-It-All Deep-Learning Chip

Experts recognize that neural nets can get a lot of computation done with little energy if a chip approximates an answer using low-precision math. That’s especially useful in mobile and other power-constrained devices. But some tasks, especially training a neural net to do something, still need precision.

 

The disconnect between the needs of training a neural net and having that net execute its function, called inference, has been one of the big challenges for those designing chips that accelerate AI functions. IBM’s new AI accelerator chip is capable of what the company calls scaled precision. That is, it can do both training and inference at 32, 16, or even 1 or 2 bits.

 

“The most advanced precision that you can do for training is 16 bits, and the most advanced you can do for inference is 2 bits,” explains Kailash Gopalakrishnan, a distinguished member of the technical staff at IBM’s Yorktown Heights research center who led the effort. IBM recently revealed its newest solution, still a prototype, at the IEEE VLSI Symposia: a chip that does both equally well. “This chip potentially covers the best of training known today and the best of inference known today.”
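What “scaled precision” implies at the arithmetic level can be sketched by uniformly quantizing the same weights at different bit widths. This is an illustration of the concept in plain Python, not IBM’s actual scheme.

```python
# Simulate representing the same weights at different bit widths: at 16 bits
# quantization is near-lossless, while at 2 bits there are only four levels,
# which is coarse but far cheaper to compute with.

def quantize(values, bits, lo=-1.0, hi=1.0):
    """Round each value to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return [lo + round((max(lo, min(hi, v)) - lo) / step) * step
            for v in values]

weights = [0.81, -0.33, 0.05, -0.97]   # illustrative weight values
w16 = quantize(weights, 16)            # near-lossless, used for training
w2  = quantize(weights, 2)             # only 4 levels, cheap inference
```

The gap between `w16` and `w2` is the accuracy-versus-efficiency trade-off the chip is designed to navigate.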

 

The chip’s ability to do all of this stems from two innovations that are both aimed at the same outcome—keeping all the processor components fed with data and working. “One of the challenges that you have with traditional [chip] architectures when it comes to deep learning is that the utilization is typically very low,” says Gopalakrishnan. That is, even though a chip might be capable of a very high peak performance, typically only 20 to 30 percent of its resources can really be brought to bear on a problem. IBM aimed for 90 percent, for all tasks, all the time.

 

Low utilization is usually due to bottlenecks in the flow of data around the chip. To break through these information infarctions, Gopalakrishnan’s team came up with a “customized” data flow system. The data flow system is a network scheme that speeds the movement of data from one processing engine to the next. It is customized according to whether  it’s handling learning or inference and for the different scales of precision.

 

The second innovation was the use of a specially designed “scratch pad” form of on-chip memory instead of the traditional cache memory found on a CPU or GPU. Caches are built to obey certain rules that make sense for general computing but cause delays in deep learning. For example, there are certain situations where a cache would push a chunk of data out to the computer’s main memory (evict it), but if that data’s needed as part of the neural network’s inferencing or learning process, the system will then have to wait until it can be retrieved from main memory.

 

A scratch pad doesn’t follow the same rules. Instead, it’s built to keep data flowing through the chip’s processing engines, making sure the data is at the right spot at just the right time. To get to 90 percent utilization, IBM had to design the scratch pad with a huge read/write bandwidth, 192 gigabytes per second.

 

The resulting chip can perform all three of today’s main flavors of deep learning AI: convolutional neural networks (CNN), multilayer perceptrons (MLP), and long short-term memory (LSTM). Together these techniques dominate speech, vision, and natural language processing, explains Gopalakrishnan. At 16-bit—typical for training—precision, IBM’s new chip cranks through 1.5 trillion floating point operations per second; at 2-bit precision—best for inference—that leaps to 12 trillion operations per second.

 

Gopalakrishnan points out that because the chip is made using an advanced silicon CMOS manufacturing process (GlobalFoundries’ 14-nanometer process), all those operations per second are packed into a pretty small area. For inferencing a CNN, the chip can perform an average of 1.33 trillion operations per second per square millimeter. That figure is important “because in a lot of applications you are cost constrained by size,” he says.

 

The new architecture also proves something IBM researchers have been exploring for a few years: Inference at really low precision doesn’t work well if the neural nets are trained at much higher precision. “As you go below 8 bits, training and inference start to directly impact each other,” says Gopalakrishnan. A neural net trained at 16 bits but deployed as a 1-bit system will result in unacceptably large errors, he says. So, the best results come from training a network at a similar precision to how it will ultimately be executed.

 

 

Intel Debuts Pohoiki Beach, Its 8-Million-Neuron Neuromorphic Development System, in July 2019

Intel, which has been developing semiconductors that mimic the way human brains work, introduced a new system dubbed Pohoiki Beach in July 2019. With Pohoiki Beach, an 8-million-neuron neuromorphic system using 64 Loihi research chips, Intel is extending AI into areas that work similarly to human cognition, including interpretation and autonomous adaptation.

 

Intel pointed to self-driving vehicles as one example where this new AI chip would be necessary. As it stands, the semiconductors used in autonomous cars can navigate along a GPS route and control the speed of the vehicle. AI chips would enable the vehicle to recognize and respond to its surroundings and avoid collisions with, say, a pedestrian. “This is critical to overcoming the so-called ‘brittleness’ of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding,” Intel wrote in a research report. “Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities.”

 

But in order to advance self-driving cars, the systems need to add the experiences that humans gain when driving such as how to deal with an aggressive driver or stop when a ball flies out into the street. “The decision making in such scenarios depends on the perception and understanding of the environment to predict future events in order to decide on the correct course of action. The perception and understanding tasks need to be aware of the uncertainty inherent in such tasks,” researchers at Intel wrote.

 

According to the Santa Clara, California semiconductor maker, with this new approach to computer processing, its new chips can work as much as 1,000 times faster and 10,000 times more efficiently than current central processing units (CPUs) for artificial intelligence workloads. The Pohoiki Beach system is made up of 64 smaller chips known as Loihi, which combined act as 8.3 million neurons, which according to one report is about the same as the brain of a small rodent. A human brain has nearly 100 billion neurons.

 

“Researchers can now efficiently scale up novel neural-inspired algorithms – such as sparse coding, simultaneous localization and mapping (SLAM), and path planning – that can learn and adapt based on data inputs. Pohoiki Beach represents a major milestone in Intel’s neuromorphic research, laying the foundation for Intel Labs to scale the architecture to 100 million neurons later this year,” according to the official announcement.

 

Intel said the new chip can be particularly useful in processing for image recognition, autonomous vehicles, and automated robots. The chip is free for developers focused on neuromorphic computing, including the more than sixty partners in its research community. The aim is to commercialize the technology down the road.

 

Loihi is Intel’s fifth-generation neuromorphic chip. It packs 128 cores – each of which has a built-in learning module – and a total of around 131,000 computational “neurons” that communicate with one another, allowing the chip to understand stimuli. The new system, Pohoiki Springs, contains over 100 million of those computational neurons. It consists of 768 Loihi chips, mounted on Intel Nahuku boards in a chassis that Intel describes as “the size of five standard servers,” and a row of Arria10 FPGA boards. By contrast, Kapoho Bay, Intel’s smallest neuromorphic device, consists of just two Loihi chips with 262,000 neurons.

 

Intel says scaling from a single Loihi to 64 of them was more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning. The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step,” said Mike Davies, director of neuromorphic research at Intel, who is quoted in an IEEE Spectrum report on the new system.

 

Intel’s Loihi neuromorphic chip includes digital circuits that mimic the brain’s basic mechanics. Here’s a description from WikiChip: “Loihi uses an asynchronous spiking neural network (SNN) to implement adaptive, self-modifying, event-driven, fine-grained parallel computations used to implement learning and inference with high efficiency. The chip is a 128-neuromorphic-core many-core IC fabricated on Intel’s 14 nm process and features a unique programmable microcode learning engine for on-chip SNN training.” The chip was formally presented at the 2018 Neuro Inspired Computational Elements (NICE) workshop in Oregon, and is named, as a play on words, after Loihi, an emerging Hawaiian submarine volcano.

 

Intel says Loihi enables users to process information up to 1,000 times faster and 10,000 times more efficiently than CPUs for specialized applications like sparse coding, graph search and constraint-satisfaction problems. In conjunction with announcing the new system, Intel called attention to the ongoing Telluride Neuromorphic Cognition Engineering Workshop where researchers are using Loihi systems – “[P]rojects include providing adaptation capabilities to the AMPRO prosthetic leg, object tracking using emerging event-based cameras, automating a foosball table with neuromorphic sensing and control, learning to control a linear inverted pendulum, and inferring tactile input to the electronic skin of an iCub robot,” according to Intel.

 

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo. “Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.”

 

“Loihi allowed us to realize a spiking neural network that imitates the brain’s underlying neural representations and behavior. The SLAM solution emerged as a property of the network’s structure. We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” professor Konstantinos Michmizos of Rutgers University said while describing his lab’s work on SLAM to be presented at the International Conference on Intelligent Robots and Systems (IROS) in November.

 

“Pohoiki Springs scales up our Loihi neuromorphic research chip by more than 750 times, while operating at a power level of under 500 watts,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “The system enables our research partners to explore ways to accelerate workloads that run slowly today on conventional architectures, including high-performance computing systems.”

 

A joint effort between Intel Labs and Cornell University put Intel’s neuromorphic research chip, Loihi, to the test by teaching it how to recognize a variety of smells in a chaotic environment. The researchers created a dataset by pumping ten hazardous chemicals (including acetone, ammonia and methane) through a wind tunnel, where a set of 72 chemical sensors collected signals. Then, the researchers leveraged Loihi and a neural algorithm designed to mimic the brain’s olfactory circuits to train the chip to recognize all ten of those hazardous chemicals by smell.

 

Intel reported that the chip learned to recognize all ten smells from just a single sample of each, and could still identify them even when obfuscating factors were introduced. Furthermore, the chip outperformed a deep learning solution that required thousands of times more training samples per class. Nabil Imam, the Intel Labs researcher who led the work, sees this as just a start, and hopes the research is a stepping stone to robust, scalable solutions in the future.
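Intel's algorithm mimics the brain's olfactory circuits on Loihi and is not detailed here, but the one-shot idea it demonstrates — store a single example per smell, then match noisy readings against the stored patterns — can be sketched with a simple nearest-prototype classifier. The 10-chemical, 72-sensor shapes mirror the experiment; the data below is synthetic and the functions are hypothetical illustrations, not Intel's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the wind-tunnel dataset: 10 chemicals, each
# with a characteristic response pattern across 72 chemical sensors.
n_chemicals, n_sensors = 10, 72
prototypes = rng.normal(size=(n_chemicals, n_sensors))

def train_one_shot(samples):
    """One-shot training: store a single sample per class as its prototype."""
    return np.asarray(samples)

def classify(model, reading):
    """Assign a sensor reading to the class with the nearest prototype."""
    dists = np.linalg.norm(model - reading, axis=1)
    return int(np.argmin(dists))

model = train_one_shot(prototypes)  # a single example of each smell

# A noisy reading of chemical 3 (noise plays the role of the
# "obfuscating factors" in the experiment) is still matched correctly.
noisy = prototypes[3] + 0.4 * rng.normal(size=n_sensors)
print(classify(model, noisy))
```

In contrast, a conventional deep network would need many labelled samples per class to learn comparable decision boundaries, which is the gap the Intel/Cornell result highlights.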

Brain-like computer chip developed by Chinese scientists

Jointly developed by scientists from Hangzhou Dianzi University and Zhejiang University, the new chip, named “Darwin”, was revealed after more than a year of research, Xinhua news agency reported. “It can perform intelligent computer tasks by simulating a human brain’s neural networks, in which neurons connect with one another via synapses,” said Ma De from Hangzhou Dianzi University.

 

The black plastic chip is smaller than a dime, yet carries 2,048 neurons and four million synapses, silicon counterparts of two of the fundamental units that make up the human brain. With the new chip, a computer can do more while using less electricity. “It can process ‘fuzzy information’ that conventional computer chips fail to process,” said Shen Juncheng, a scientist from Zhejiang University. For example, it can recognize numbers written by different people, distinguish among different images, and move objects on screen by receiving a user’s brain signals. The chip is expected to be used in robotics, intelligent hardware systems and brain-computer interfaces, but its development is still at a preliminary stage, according to Ma.

 

Chinese chips could catch up and even stand out in the current Artificial Intelligence (AI) boom, according to a news report published in MIT Technology Review, a magazine founded by the Massachusetts Institute of Technology. In the current wave of enthusiasm for hardware optimized for AI, China’s semiconductor industry sees a unique opportunity to establish itself, said the report. The article cites the Chinese chip “Thinker” as an example. Designed to support neural networks, “Thinker” can recognize objects in images and understand human speech. What makes the chip stand out is its ability to “dynamically tailor its computing and memory requirements to meet the needs of the software being run.” This matters because many real-world AI applications—recognizing objects in images or understanding human speech—require a combination of different kinds of neural networks with different numbers of layers.
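The reconfigurability that distinguishes “Thinker” addresses a real asymmetry: convolutional layers (typical of image recognition) and fully connected layers (common in speech models) have very different compute-to-weight ratios, so fixed hardware tuned for one wastes resources on the other. A back-of-the-envelope comparison, with hypothetical layer sizes chosen only for illustration:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a conv layer (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

def fc_macs(n_in, n_out):
    """Multiply-accumulates for one fully connected layer."""
    return n_in * n_out

# A vision-style 3x3 conv layer reuses a small weight set (~37k weights)
# over every spatial position, so it is compute-heavy per weight.
vision = conv_macs(56, 56, 64, 64, 3)

# A speech-style fully connected layer touches each of its ~1M weights
# exactly once per input, so it is memory-bandwidth-heavy instead.
speech = fc_macs(1024, 1024)

print(vision, speech)
```

The conv layer here performs over a hundred times more arithmetic than the fully connected layer while holding far fewer weights, which is why hardware that can rebalance compute against memory per workload is attractive.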

 

Also remarkable is that a mere eight AA batteries are enough to power it for a year. “The chip is just one example of an important trend sweeping China’s tech sector,” said the report. In a three-year action plan to develop AI, published by China’s Ministry of Industry and Information Technology in December 2017, the government laid out a goal of being able to mass-produce neural-network processing chips by 2020.

 

“Compared to how China responded to previous revolutions in information technology, the speed at which China is following the current trend is the fastest,” the review quoted Shouyi Yin, vice director of Tsinghua University’s Institute of Microelectronics, as saying. Yin is also the lead author of a paper describing the design behind “Thinker.” The article also listed some difficulties that Chinese chip researchers face, such as how to commercialize their chip designs, how to scale up, and how to navigate a world of computing being transformed by AI.

 

 

Chinese researchers develop hybrid chip design that holds promise for ‘thinking machines’

Chinese researchers have developed a hybrid chip architecture that could move the world a step closer to achieving artificial general intelligence (AGI) and a future filled with humanlike “thinking machines”. The potential for attaining AGI, also known as “full AI”, by adopting such a general hardware platform was set out by a team of researchers, led by Tsinghua University professor Shi Luping, in a research paper that was published as the cover story of scientific journal Nature in August 2019.

 

Their research presented the case for the Tianjic chip, which was designed by integrating the “computer-science-oriented” and “neuroscience-oriented” approaches to developing AGI. Tianjic shows that combining those two approaches, which rely on fundamentally different formulations and coding schemes, can enable a single computing platform, built from reconfigurable blocks, to run diverse machine-learning algorithms alongside brain-inspired models.

 

At present, most mainstream AI research has been focused on so-called domain-dependent and problem-specific solutions such as facial recognition and automated trading. By comparison, AGI represents the hopes of building general-purpose systems with intelligence comparable to the human mind.

 

“This chip for general AI has the potential of being applied across many industries,” said Shi, a professor at the Centre for Brain-Inspired Computing Research at Beijing’s Tsinghua University, in an interview in August 2019. “Autonomous driving, robotics and automation would be among the fields where this chip can make a difference.” In their Nature paper, the Chinese researchers described a self-driving bicycle built to evaluate how their attempt at an AGI chip would fare in a road test. The bicycle was equipped with a camera, gyroscope, speedometer, driving and steering motors, and a Tianjic chip.

 

China is moving ahead in AI, closing its gap with the US. The country’s State Council released a road map in July 2017 with the goal of creating a domestic industry worth 1 trillion yuan (US$145 billion) and becoming a global AI powerhouse by 2030.

 

 

 
