
Accelerating AI with Photonic Chips: Neuromorphic photonic processors enable high-speed, energy-efficient Artificial Intelligence (AI) applications

Artificial Intelligence technologies aim to develop computers or robots that match or exceed the abilities of human intelligence in tasks such as learning and adaptation, reasoning and planning, decision-making and autonomy, creativity, and extracting knowledge and making predictions from data.

 

Artificial neural networks, a key form of AI, can ‘learn’ and perform complex operations with wide applications to computer vision, natural language processing, facial recognition, speech translation, playing strategy games, medical diagnosis and many other areas. Inspired by the biological structure of the brain’s visual cortex system, artificial neural networks extract key features of raw data to predict properties and behaviour with unprecedented accuracy and simplicity.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many of the exciting advances in artificial intelligence in recent years. Deep learning is useful for many applications, such as object recognition, face detection, speech recognition, computer vision and natural language processing. However, both training and execution of large-scale DNNs require vast computing resources, leading to high power requirements and communication overhead.

There is a wide variety of hardware used for AI development, including GPUs, TPUs, FPGAs, and neuromorphic computers. Neuromorphic computing is a type of computing inspired by the structure and function of the human brain: it uses special processors and memory architectures to mimic the way the brain works, with the aim of achieving more efficient, lower-power AI.

 

The basic building block of a neuromorphic computer is a so-called neuron, a hardware component that communicates with other neurons via spikes of some type of signal. Intel, for example, unveiled a neuromorphic computing chip called Loihi in 2017 that uses spikes in electrical signals.

 

Photonic chips are a type of hardware that use light instead of electrical signals to perform computations. They have the potential to greatly accelerate artificial intelligence (AI) applications because light can carry information at far higher bandwidth and with lower loss than electrical wiring, which leads to faster processing.

 

Neuromorphic photonics processors are a type of photonic chip that are specifically designed to mimic the way the human brain works, which allows for more efficient and effective AI processing. A photonics-based neuromorphic computer would encode information in spikes of light intensity, transmitted between so-called optical neurons.

 

Photonic processing has the potential to revolutionize the speed, energy efficiency and throughput of modern computing. Photonic integrated circuits can pack orders of magnitude more information into every square centimeter. One reason is that photonic signals operate much faster, shuffling much more data through the system per second. Another is that lightwaves exhibit the superposition property, which allows for optical multiplexing: waveguides can carry many signals on different wavelengths or in different time slots simultaneously without taking up additional space. This combination enables an enormous amount of information, easily more than one terabyte per second, to flow through a waveguide only half a micron wide.
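The terabit-per-second figure follows from simple back-of-the-envelope arithmetic: aggregate throughput is roughly the number of wavelength channels times the symbol rate times the bits per symbol. The numbers below are hypothetical, chosen only to illustrate the scaling, not measurements from any particular device.

```python
# Back-of-the-envelope WDM throughput estimate (illustrative numbers only).
num_wavelengths = 64          # hypothetical number of WDM channels in one waveguide
symbol_rate_gbaud = 50        # hypothetical symbol rate per channel (GBaud)
bits_per_symbol = 4           # e.g. 16-QAM carries 4 bits per symbol

per_channel_gbps = symbol_rate_gbaud * bits_per_symbol          # Gb/s per wavelength
aggregate_tbps = num_wavelengths * per_channel_gbps / 1000.0    # total Tb/s in one waveguide

print(f"Per-channel rate: {per_channel_gbps} Gb/s")
print(f"Aggregate rate:   {aggregate_tbps:.1f} Tb/s")           # 64 x 200 Gb/s = 12.8 Tb/s
```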

 

As communication bottlenecks worsen, photonic interconnects are therefore slowly replacing electrical wires. A modern silicon-photonic link can transmit a signal using only femtojoules of energy per bit of information, whereas thousands of femtojoules are consumed per operation in even the most efficient digital electronic processors, including IBM’s TrueNorth cognitive computing chip and Google’s tensor processing unit.

Source: Neuromorphic Photonics, Principles of (SpringerLink)

A key step for neuromorphic computers is creating components analogous to the brain’s network of neurons and the connections between them called synapses.  Photonics appears to be an ideal technology with which to implement neural networks. One motivation to build photonics-based neuromorphic computers is that they can execute neural networks, a basic machine-learning algorithm, much more quickly than electronics-based computers. Photonics-based chips can perform key components of these algorithms on a nanosecond time scale rather than electronics’ millisecond time scale.

 

Secondly, the greatest computational burden in neural networks lies in the interconnectivity: in a system with N neurons, if every neuron can communicate with every other neuron (plus itself), there will be N² connections. Adding just one more neuron adds on the order of N more connections, a prohibitive situation if N is large. Photonic systems can address this problem in two ways: waveguides can boost interconnectivity by carrying many signals at the same time through optical multiplexing, and low-energy photonic operations can reduce the computational burden of performing linear functions such as weighted addition. For example, by associating each node with a color of light, a network could support N additional connections without necessarily adding any physical wires.
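A minimal numerical sketch of the weighted-addition idea: if each input neuron is assigned its own wavelength, a single photodetector naturally sums the weighted channel powers, so a fan-in of N costs one detector rather than N separate wires. The sizes and values below are arbitrary illustrations, not parameters from any published device.

```python
import numpy as np

# Illustrative fan-in: N inputs, each carried on its own wavelength channel.
N = 8
rng = np.random.default_rng(0)

input_powers = rng.uniform(0.0, 1.0, size=N)   # optical power on each wavelength
weights = rng.uniform(0.0, 1.0, size=N)        # transmission of the tunable filter on each wavelength

# Total photocurrent at one detector is proportional to the sum of the weighted
# channel powers, i.e. a dot product computed "for free" by detection.
weighted_sum = float(np.dot(weights, input_powers))

# Fully connected network scaling: N neurons -> N*N connections.
total_connections = N * N
print(weighted_sum, total_connections)
```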

Source: Neuromorphic Photonics (Optics & Photonics News)

An emerging field at the nexus of photonics and neuroscience, neuromorphic photonics combines the advantages of optics and electronics to build systems with high efficiency, high interconnectivity and high information density.

Neuromorphic photonics implementation

One such photonic neural model, currently under investigation at Princeton University, involves engineering dynamical lasers to resemble the biological behavior of neurons. Laser neurons, operating optoelectronically, can run at approximately 100 million times the speed of their biological counterparts, which are rate-limited by biochemical interactions. These lasers represent neural spikes as optical pulses by operating in a dynamical regime called excitability. Excitability is a behavior in feedback systems in which small inputs that exceed some threshold cause a major excursion from equilibrium, which, in the case of a laser neuron, releases an optical pulse. The event is followed by a recovery back to equilibrium, the so-called refractory period.
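The excitability just described can be illustrated with a toy leaky integrate-and-fire model: sub-threshold inputs decay away, while an input that pushes the state over threshold triggers a full pulse followed by a refractory period. This is a conceptual sketch of excitable dynamics in general, not a model of the Princeton laser devices; the threshold, leak and refractory values are invented.

```python
import numpy as np

def excitable_response(inputs, threshold=1.0, leak=0.9, refractory=3):
    """Toy leaky integrate-and-fire model of excitability: small inputs decay away;
    inputs that push the state over threshold emit a pulse, followed by a
    refractory period during which the unit cannot fire again."""
    state, spikes, recover = 0.0, [], 0
    for x in inputs:
        if recover > 0:                 # refractory period: ignore inputs
            recover -= 1
            spikes.append(0)
            continue
        state = leak * state + x        # leaky integration of the input
        if state >= threshold:          # threshold crossing -> full excursion (pulse)
            spikes.append(1)
            state = 0.0                 # recovery back toward equilibrium
            recover = refractory
        else:
            spikes.append(0)
    return spikes

drive = [0.3, 0.3, 0.2, 0.9, 0.0, 0.0, 0.0, 1.2, 0.0]
print(excitable_response(drive))        # pulses only where accumulated input exceeds threshold
```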

 

In March 2019, Optalysys announced the FT:X 2000, the world’s first optical co-processor system for AI computing. “It is a really exciting time in optical computing,” said Dr. Nick New, Optalysys CEO and Founder. “As we approach the commercial launch of our main optical co-processor systems, we are seeing a surge in interest in optical methods, which are needed to provide the next level of processing capability across multiple industry sectors. We are on the verge of an optical computing revolution and it’s fantastic to be leading the way.”

 

In early 2019, Boston-based start-up Lightmatter announced a $22 million investment round led by Google Ventures into the development of its photonic chip-based optical AI technology, following the announcement the previous year of a $10 million fundraise by competing company Lightelligence, led by Chinese giant Baidu. Both companies are focused on optical AI methods that use light, rather than electricity, to calculate the matrix-multiply operations that form the basis of today’s deep learning neural networks.

 

At SPIE Photonics West in 2020, Bhavin Shastri of Queen’s University gave an overview of progress toward building a photonics-based neuromorphic computer. Shastri’s group has developed a neuromorphic computing chip, millimeters on a side, that is based on integrated silicon photonics and contains hundreds of optical neurons connected via waveguides. These neurons can be connected to form an optical neural network. The network weighs the inputs of multiple neurons by a set of factors, sums them together, applies a nonlinear activation function, and evaluates whether the final value surpasses a certain threshold before sending the signal on to another set of neurons.
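The sequence Shastri describes (weight, sum, nonlinear activation, threshold) is the standard neural-network layer. A minimal sketch in plain NumPy follows; the sizes, random weights and the choice of a sigmoid are illustrative stand-ins, not parameters of the Queen’s chip.

```python
import numpy as np

def optical_neuron_layer(inputs, weights, threshold=0.5):
    """Conceptual model of one layer of the optical neural network described above:
    weight the inputs, sum them, apply a nonlinear activation, and test a threshold
    before passing the signal on. All values are illustrative only."""
    summed = weights @ inputs                    # weighted sum of the incoming signals
    activated = 1.0 / (1.0 + np.exp(-summed))    # nonlinear activation (sigmoid as a stand-in)
    return activated > threshold                 # only above-threshold neurons fire onward

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=4)        # signals from 4 upstream neurons
W = rng.normal(0, 1, size=(3, 4))    # weights connecting them to 3 downstream neurons
print(optical_neuron_layer(x, W))    # boolean vector: which downstream neurons fire
```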

 

The chips could accelerate a range of computing tasks. Shastri pointed out their potential applications in high-performance computing, edge computing for image processing, and wireless signal processing. They could also be applied to solving nonlinear optimization problems and partial differential equations.

 

World’s fastest optical neuromorphic processor reported in 2021

An international team of researchers led by Swinburne University of Technology has demonstrated the world’s fastest and most powerful optical neuromorphic processor for artificial intelligence (AI), operating at more than 10 trillion operations per second (TeraOPs/s) and capable of processing ultra-large-scale data. Published in the journal Nature, this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general.

 

Led by Swinburne’s Professor David Moss, Dr Xingyuan (Mike) Xu (Swinburne, Monash University) and Distinguished Professor Arnan Mitchell from RMIT University, the team achieved an exceptional feat in optical neural networks: dramatically accelerating their computing speed and processing power. The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing record-sized ultra-large scale images — enough to achieve full facial image recognition, something that other optical processors have been unable to accomplish.

 

“This breakthrough was achieved with ‘optical micro-combs’, as was our world-record internet data speed reported in May 2020,” says Professor Moss, Director of Swinburne’s Optical Sciences Centre and recently named one of Australia’s top research leaders in physics and mathematics in the field of optics and photonics by The Australian. While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, this is done with tens of thousands of parallel processors. In contrast, the optical system demonstrated by the team uses a single processor and was achieved using a new technique of simultaneously interleaving the data in time, wavelength and spatial dimensions through an integrated micro-comb source. Micro-combs are relatively new devices that act like a rainbow made up of hundreds of high-quality infrared lasers on a single chip. They are much faster, smaller, lighter and cheaper than any other optical source.
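The interleaving trick can be caricatured in a few lines: if each kernel weight rides on its own comb wavelength and the input stream is progressively delayed across wavelengths, then summing all wavelengths at a photodetector yields a sliding weighted sum, i.e. a convolution. The sketch below reproduces that arithmetic in NumPy with made-up numbers; it is a conceptual illustration of time-wavelength interleaving, not a model of the Swinburne system.

```python
import numpy as np

def comb_convolution(signal, kernel):
    """Conceptual time-wavelength interleaving: each kernel tap rides on its own
    comb wavelength, the input is delayed by one extra sample per wavelength, and
    photodetection sums the channels, producing a sliding weighted sum."""
    taps = len(kernel)
    out_len = len(signal) - taps + 1
    out = np.zeros(out_len)
    for k, weight in enumerate(kernel):          # one wavelength channel per kernel tap
        delayed = signal[k:k + out_len]          # per-channel delay of k samples
        out += weight * delayed                  # the detector sums the weighted channels
    return out

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([0.25, 0.5, 0.25])
print(comb_convolution(x, h))                    # prints [2. 3. 4.]
```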

 

“In the 10 years since I co-invented them, integrated micro-comb chips have become enormously important and it is truly exciting to see them enabling these huge advances in information communication and processing. Micro-combs offer enormous promise for us to meet the world’s insatiable need for information,” Professor Moss says. “This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware — optical or electronic based — bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach,” says co-lead author of the study, Dr Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

 

“We’re currently getting a sneak peek of how the processors of the future will look. It’s really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs,” Dr Xu explains. RMIT’s Professor Mitchell adds, “This technology is applicable to all forms of processing and communications — it will have a huge impact. Long term, we hope to realise fully integrated systems on a chip, greatly reducing cost and energy consumption.”

 

“Convolutional neural networks have been central to the artificial intelligence revolution, but existing silicon technology increasingly presents a bottleneck in processing speed and energy efficiency,” says key supporter of the research team, Professor Damien Hicks, from Swinburne and the Walter and Eliza Hall Institute. He adds, “This breakthrough shows how a new optical technology makes such networks faster and more efficient and is a profound demonstration of the benefits of cross-disciplinary thinking, in having the inspiration and courage to take an idea from one field and use it to solve a fundamental problem in another.”

 

 

MIT Demos Optical Deep Learning with Nanophotonic Processor

A research team at the Massachusetts Institute of Technology (MIT) has come up with a novel approach to deep learning that uses a nanophotonic processor, which they claim can vastly improve the performance and energy efficiency of processing artificial neural networks. Rather than building a general-purpose optical computer, the researchers have narrowed the application domain to deep learning, and they have further limited this initial work to inference rather than the more computationally demanding process of training.

 

Writing in the journal Nature Photonics, the MIT researchers report the development of a nanophotonic processor comprising a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit. The team, including MIT postdoc Yichen Shen, graduate student Nicholas Harris, professors Marin Soljačić and Dirk Englund, and eight others, demonstrated the utility of the approach with a vowel-recognition application.

 

Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips. After years of research, the MIT team has come up with a way of performing these operations optically instead. “This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” Soljačić says. “We’ve demonstrated the crucial building blocks but not yet the full system.”

 

The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. Essentially, they are using the interaction between the photons to perform low-level deep learning computations.

 

The new programmable nanophotonic processor, which was developed in the Englund lab by Harris and collaborators, uses an array of waveguides that are interconnected in a way that can be modified as needed, programming that set of beams for a specific computation. “You can program in any matrix operation,” Harris says. The processor guides light through a series of coupled photonic waveguides. The team’s full proposal calls for interleaved layers of devices that apply an operation called a nonlinear activation function, in analogy with the operation of neurons in the brain.
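A standard way such an interferometer mesh is programmed with “any matrix operation” is to factor the target weight matrix with a singular value decomposition: the two unitary factors map onto interferometer phase settings and the diagonal of singular values onto per-channel attenuation or gain. The snippet below only checks that factorization numerically; it does not model the actual MIT device or its calibration.

```python
import numpy as np

# Target weight matrix we would like the interferometer mesh to realize
# (values are arbitrary, for illustration only).
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4))

# Singular value decomposition: W = U @ diag(s) @ Vh.
# In an MZI mesh, the unitaries U and Vh correspond to phase-shifter settings and
# the singular values to per-channel attenuation/gain between the two meshes.
U, s, Vh = np.linalg.svd(W)

reconstructed = U @ np.diag(s) @ Vh
print(np.allclose(W, reconstructed))   # True: the optical factorization reproduces W
```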

 

To demonstrate the approach, the researchers implemented a simple neural network that recognizes four basic vowel sounds. With this single chip, they achieved a 77 percent accuracy level. That doesn’t quite match the roughly 90 percent accuracy of conventional deep learning systems, but the researchers believe they can scale up the platform fairly easily to deliver better results.

 

Englund adds that the programmable nanophotonic processor could have other applications as well, including signal processing for data transmission. “High-speed analog signal processing is something this could manage” faster than other approaches that first convert the signal to digital form, since light is an inherently analog medium. “This approach could do processing directly in the analog domain,” he says. The system could also be a boon for self-driving cars or drones, says Harris, or “whenever you need to do a lot of computation but you don’t have a lot of power or time.”

 

Envise

Envise, a photonic chip created by Lightmatter, was developed exclusively for artificial intelligence. It is used to run neural networks, where linear-algebra calculations such as adding and multiplying numbers are carried out constantly.

 

The technology was developed over a four-year span at MIT’s Quantum Photonics Laboratory. MIT owns some of the original patents related to the technology and has licensed them to the company to spur further development.

 

Like other programmable optical processors, the Lightmatter chip uses light, rather than electrons, as the basis for its processing. Not only does that circumvent the computational speed limit associated with electronic transistors, it does so using just a fraction of the energy. However, the Lightmatter chip is not a general-purpose processor. Instead, its silicon photonic circuitry is built to perform only matrix multiplications, the critical computations used by deep learning applications. Performing matrix-vector products quickly is important for comparing large sets of data with one another, for instance when a voice recognition system wants to check whether a certain sound wave is sufficiently similar to “OK Google” to initiate a response.
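The wake-word comparison mentioned in the last sentence is, at bottom, a dot product: project the incoming audio features against stored templates with one matrix-vector multiply and check whether any score clears a threshold. The toy sketch below uses invented feature vectors and an invented 0.9 threshold; it only illustrates why fast matrix-vector products matter for this kind of similarity search.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stored reference templates (rows) and an incoming feature vector, all L2-normalized.
templates = rng.normal(size=(5, 16))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

incoming = templates[2] + 0.05 * rng.normal(size=16)   # noisy version of template 2
incoming /= np.linalg.norm(incoming)

# One matrix-vector product gives the cosine similarity against every template at once;
# this is exactly the kind of operation a photonic accelerator would be asked to speed up.
scores = templates @ incoming
print(scores.argmax(), scores.max() > 0.9)             # best match and whether it clears the threshold
```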

 

Linear algebra, a very general mathematical tool, lies at the core of deep learning algorithms and is used to model a wide range of real-world occurrences, from space exploration to financial transactions. These rely on the matrix multiplication operation in linear algebra, which fits the analog linearity of photonics well. These additions and multiplications are made possible by integrated photonic components.

 

Instead of breaking that matrix calculation down to a series of basic operations with cascades of logic gates and transistors, Lightmatter’s photonic chips essentially solve the entire problem at once by running a beam of light through a gauntlet of tiny, configurable lenses (if that’s the right word at this scale) and sensors. By creating and tracking tiny changes in the phase or path of the light, the solution is found as fast as the light can get from one end of the chip to the other.

 

The design was tested last summer using a prototype built with an array of 56 programmable Mach–Zehnder interferometers implemented as a silicon photonic integrated circuit. In the demonstration, recorded vocal sounds were used to train a neural network for vowel recognition, and the photonic chip was used to decipher vowels based on the trained network. The Lightmatter prototype was only moderately accurate (about 75 percent) compared with results on conventional hardware (about 90 percent), but for a first try it was suitably impressive. Building a more powerful processor and doing more extensive testing is the goal of the $11 million investment round led by Matrix Partners and Spark Capital, an amount that is pretty much on par for early-stage AI startups nowadays, but only a fraction of what will be needed to get a commercial product out the door.

 

A conventional processor is used to host the application, that is, to perform the less computationally demanding parts, while offloading the required matrix math to the photonic chip. In that sense, it uses the same host-accelerator paradigm as CPU-GPU platforms. That simplifies the Lightmatter hardware significantly, allowing the design to rely on relatively simple nanophotonic circuits. Given the current immaturity of building nanophotonic structures with CMOS technology, that’s a huge advantage.
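The host-accelerator split can be sketched generically: the host runs control flow and lightweight steps, and only the dense matrix products are handed off. The `photonic_matmul` function below is a hypothetical placeholder standing in for whatever vendor call would perform the offload; it is not a real Lightmatter API, and here it simply falls back to NumPy on the host.

```python
import numpy as np

def photonic_matmul(a, b):
    """Hypothetical placeholder for the offloaded operation. A real system would
    dispatch this matrix product to the photonic accelerator; here we just use
    NumPy on the host so the sketch is runnable."""
    return a @ b

def dense_layer(x, weights, bias):
    # The host handles the cheap parts (bias add, activation); the accelerator
    # handles the expensive matrix multiplication, mirroring the CPU-GPU split.
    z = photonic_matmul(x, weights) + bias
    return np.maximum(z, 0.0)                  # ReLU applied on the host

x = np.ones((2, 8))
w = np.full((8, 4), 0.1)
b = np.zeros(4)
print(dense_layer(x, w, b))                    # (2, 4) array of activations
```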

 

Applications

Envise photonic computing is being targeted at many industrial sectors. Some examples are:

  • Autonomous driving in the auto industry.
  • Enabling predictive and preventive maintenance at manufacturing plants.
  • Design of control and vision for robotics.
  • Recommendation of a product in e-commerce and advertising.
  • Pharmacy, pathology, and cancer detection in the health sector.
  • Digital signal analysis and algorithms for signal processing.
  • Translation of languages and text-to-speech development in language processing.

 

There are certain intrinsic limitations in photonic computing that still require further development. Because the calculations carried out by photonic chips like Envise are analog rather than digital, they may not be as exact as those of traditional transistor-based logic, and system noise may also be an issue. Photonic devices such as MZIs are also typically much larger than transistors and cannot yet be packed onto a chip as densely as conventional electronic components.

 

Photonic Neuromorphic Chip based on electro-optic modulators as photonic neurons

Alexander Tait and pals at Princeton University in New Jersey have built an integrated silicon photonic neuromorphic chip and show that it computes at ultrafast speeds. “Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing,” say Tait and co.

 

The authors report the first experimental demonstration of an integrated photonic neural network, and the first to use electro-optic modulators as photonic neurons. The nodes take the form of tiny circular waveguides carved into a silicon substrate in which light can circulate. When released, this light modulates the output of a laser working at threshold, a regime in which small changes in the incoming light have a dramatic impact on the laser’s output.

 

A silicon-compatible photonic neural networking architecture called “broadcast-and-weight” has been proposed. In this architecture, each node’s output is assigned a unique wavelength carrier that is wavelength-division multiplexed (WDM) and broadcast to the other nodes. Incoming WDM signals are weighted by reconfigurable, continuous-valued filters called microring resonator (MRR) weight banks and then summed by total power detection. This electrical weighted sum then modulates the corresponding WDM channel. A nonlinear electro-optic transfer function, such as a laser at threshold or, in this work, a saturated modulator, provides the nonlinearity required for neuron functionality.
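A numerical caricature of the broadcast-and-weight idea: every node’s output sits on its own wavelength on a shared bus, each receiving node applies its own bank of microring transmission weights to the broadcast signals, and total power detection performs the summation, so one network update is a matrix-vector product followed by a nonlinearity. Weights in [-1, 1] stand in for a balanced-detection scheme, and all numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
num_nodes = 6

# Each node broadcasts its output on a unique wavelength (one WDM bus shared by all).
node_outputs = rng.uniform(0.0, 1.0, size=num_nodes)

# Each node has its own bank of microring weights applied to every wavelength;
# values in [-1, 1] stand in for balanced detection of the filtered signals.
weight_banks = rng.uniform(-1.0, 1.0, size=(num_nodes, num_nodes))

# Total power detection at each node sums its weighted copy of the whole bus,
# i.e. the network update is a single matrix-vector product.
summed = weight_banks @ node_outputs

# The electrical weighted sum then drives that node's modulator (the nonlinearity),
# modeled here as a saturating tanh transfer function.
next_outputs = np.tanh(summed)
print(next_outputs)
```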

 

They go on to demonstrate how this can be done using a network consisting of 49 photonic nodes. They use this photonic neural network to emulate a certain kind of differential equation and compare its performance to that of an ordinary central processing unit. The results show just how fast photonic neural nets can be. “The effective hardware acceleration factor of the photonic neural network is estimated to be 1,960× in this task,” say Tait and co. That is a speed-up of three orders of magnitude. “Silicon photonic neural networks could represent first forays into a broader class of silicon photonic systems for scalable information processing,” say Tait and co.

Lightmatter, a Boston startup accelerating AI with photonic chips

Founded in late 2017, Lightmatter had snagged US$33 million in series A start-up funding by early 2019, which helped the company build up key staff and develop and refine its product line for launch. In early May 2021, Lightmatter announced that it had raised another US$80 million in a series B round, through an investment group including Viking Global Investors, GV (formerly Google Ventures), Hewlett Packard Enterprise, Lockheed Martin and others.

 

The cash is intended to support development of future computing hardware based around Mach-Zehnder interferometers. In a blog post at medium.com, co-founder and CEO Nicholas Harris wrote: “This January marks one year since we started – a lot has happened. We carefully assembled a team of 23 world-class scientists and engineers to develop a scalable platform for high-throughput, high-efficiency artificial intelligence computing. “We taped out our first (successful) test chip with transistors and photonic elements from start to finish in four months. Eight months after that, we taped out a chip with over a billion transistors.” The Lightmatter engineering team is aiming to deliver the first photonics-based artificial intelligence accelerator product, and is currently hiring a full-time photonics design engineer at its Boston site.

 

By using light instead of electrical signals, the company says that its combination of photonics, electronics, and algorithms will deliver a new computing architecture offering “orders of magnitude” performance improvements over what would be feasible with the traditional approach of shrinking transistor dimensions with lithography. “It’s worth noting that the end of Moore’s Law isn’t (yet) due to the inability of chip makers to shrink transistors,” explained Harris in his post. “If you’re going to pack more transistors onto the same sized chip, which has been happening for decades, those transistors need to be commensurately more energy efficient; herein lies the problem.”

 

The company now has three product lines: Envise, for the acceleration of AI computing; Passage, for interconnects; and a software stack called Idiom. According to the company, Envise is its photonic AI accelerator and is general-purpose across AI workloads rather than being limited to image recognition or natural language processing; it offers very high throughput with very low energy consumption. The goal with Envise is to help AI scale while minimizing its environmental footprint. Whereas Nvidia’s current chip draws about 450 watts, an extremely hot computer chip, Envise is targeting about 80 watts while being multiple times faster.

 

To make use of that computer, software is required. Lightmatter’s Idiom software acts as a layer that lives beneath popular machine-learning frameworks such as TensorFlow and PyTorch. Idiom can also automatically detect the configuration of processors: if another node is added, it knows the node is there, and the compiler generates a program that runs across the cluster.

 

Photonics suits AI

Instead of using conventional computing architecture based around so-called “multiply-accumulate” units, Harris and colleagues plan to employ programmable Mach-Zehnder interferometers. “This photonic device is not bound by the physics that limit transistor-based electronic circuits – opening an avenue towards continuing the currently broken trend of exponential growth in compute per unit area within a practical power envelope,” states the CEO.

 

The idea is that the novel approach will be better suited to the way that artificial intelligence platforms operate. “Current transistor-based technologies are approaching the limits of their fundamental capabilities, and faster and more energy efficient computers will be essential to the continued progress of AI,” claims the company. “The alternative computing platform being developed by Lightmatter will be critical to powering the next generation of AI algorithms.”

 

In a release announcing the latest funding, GV general partner Tyson Clark, who now sits on the startup’s board of directors, added: “Lightmatter is building a next-generation computing platform at the cutting edge of photonics and artificial intelligence, at a time when there is a growing need for new hardware-based approaches to AI acceleration. “We believe the team’s theoretical expertise and engineering talent are clear differentiators in the market for artificial intelligence accelerators.”

 

Neuromorphic chips based on phase-change materials (PCMs)

Researchers from Oxford, Münster and Exeter universities have made a pioneering breakthrough by developing photonic computer chips that imitate the way the brain’s synapses operate. The work combined phase-change materials, commonly found in household items such as re-writable optical discs, with specially designed integrated photonic circuits to deliver a biological-like synaptic response. Crucially, their photonic synapses can operate at speeds a thousand times faster than those of the human brain.

 

The PCM’s ability to absorb light changes when heated, which can be used to control the amount of light that passes through the waveguide. In previous research, the group had shown that optical pulses could be used to switch between various states of absorption to store information—effectively creating a photonic memory device. The team believes that the research could pave the way for a new age of computing, where machines work and think in a similar way to the human brain, while at the same time exploiting the speed and power efficiency of photonic systems.
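The synaptic behaviour described above amounts to storing a weight as one of several optical transmission levels of the phase-change cell and nudging it up or down with optical pulses. The toy model below captures only that bookkeeping; the number of levels, the one-level-per-pulse step and the linear transmission mapping are invented for illustration.

```python
class PCMSynapse:
    """Toy model of a phase-change photonic synapse: the weight is the cell's
    optical transmission, stored in a handful of discrete absorption states and
    nudged up or down by 'optical pulses'. Levels and step size are illustrative."""
    def __init__(self, levels=8):
        self.levels = levels
        self.state = levels // 2          # start in an intermediate absorption state

    def pulse(self, potentiate=True):
        # A pulse partially crystallizes or amorphizes the PCM, moving it one level.
        if potentiate:
            self.state = min(self.levels - 1, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

    def transmit(self, input_power):
        # Transmission through the waveguide scales with the stored state (the weight).
        weight = self.state / (self.levels - 1)
        return weight * input_power

syn = PCMSynapse()
for _ in range(2):
    syn.pulse(potentiate=True)            # two potentiating pulses strengthen the synapse
print(syn.transmit(1.0))                  # weighted output for unit input power, about 0.857
```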

 

Professor C David Wright, co-author from the University of Exeter, said: ‘Electronic computers are relatively slow, and the faster we make them the more power they consume. Conventional computers are also pretty “dumb”, with none of the in-built learning and parallel processing capabilities of the human brain. We tackle both of these issues here – by developing not only new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.’ Professor Wolfram Pernice, a co-author of the paper from the University of Münster, added: ‘Since synapses outnumber neurons in the brain by around 10,000 to one, any brain-like computer needs to be able to replicate some form of synaptic mimic. That is what we have done here.’

 

POET Technologies Enters Artificial Intelligence Market with Technology Leader in Photonic Computing in 2021

POET Technologies Inc. (“POET” or the “Company”) (TSX Venture: PTK; OTCQX: POETF), the designer and developer of the POET Optical Interposer and Photonic Integrated Circuits (PICs) for the data center and telecommunication markets, today announced that it has entered into development and supply agreements with a technology leader in photonic neural network systems for Artificial Intelligence (AI) applications.

 

Artificial Intelligence, on the cusp of its own revolutionary impact on humanity, is driving an unprecedented demand for computation at the same time that the physics of digital semiconductors, driven by Moore’s law, is reaching its end. Transistor scaling is approaching its limits and AI accelerator companies are struggling to keep pace with demand, particularly in “edge” applications that require greater power and cost efficiency. Domain-specific architectures targeted at AI workloads can make up for some of the slowdown in transistor advances, but that approach also has its limits.

 

The chipset market for AI applications is projected to grow from approximately $18 billion in 2020 to over $65 billion by 2025. POET’s new development and supply agreement for photonic AI computing represents an entry point into this large and extremely high-growth market. POET’s customer for these applications is breaking the digital semiconductor mold by integrating photonics into accelerators for AI workloads, thereby enabling step-change advancements in AI computation. Harnessing light to perform data-parallel calculations is many orders of magnitude faster, more power efficient, and lower cost than using traditional semiconductors. Photonic computing changes the game in the field of Artificial Intelligence.

 

“Photonics has been readied for optical computing as a result of over a decade of advancements in photonics design and fabrication driven by telecommunications and data communication and promises to be the technology to usher in the next era of rapid growth for AI computing,” commented Suresh Venkatesan, the Company’s Chairman & CEO. “POET is now well positioned to participate meaningfully for a new class of high-volume, high-growth applications, expanding the addressable markets for our Optical Engines and Optical Interposer platform products. In addition to highlighting the tremendous adaptability of the POET Optical Interposer platform, this project is anticipated to result in revenue for POET this year in the form of NRE and potentially initial product sales.”

 

 

References and Resources also include:

https://www.sciencedaily.com/releases/2021/01/210107112418.htm

https://www.globenewswire.com/news-release/2021/01/06/2154184/0/en/POET-Technologies-Enters-Artificial-Intelligence-Market-with-Technology-Leader-in-Photonic-Computing.html

 

 
