Scientific modeling is the process of creating simplified representations of complex phenomena in order to better understand them. Complex, multi-spatiotemporal, and nonlinear problems are particularly challenging to model because they involve multiple variables, interactions, and feedback loops that can be difficult to predict.
Examples of such complex systems of relevance to the Department of Defense (DoD) span the entire spectrum of length and energy scales, from nanoscale problems in advanced materials modeling, molecular dynamics simulations, and semiconductor device design to terrestrial-scale problems in climate science and earth-system modeling.
Hierarchies of intercoupled nonlinear equations that require simultaneous analysis on multiple spatiotemporal scales are common in complex physical systems. However, analyzing these systems can be challenging because they are typically not amenable to analytical techniques, and direct numerical computation can be hampered by the “curse of dimensionality” – the exponential growth of required resources with increasing problem scale.
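To see why direct computation fails, consider a uniform grid with N points per axis: the total point count grows as N^d in d dimensions. The back-of-the-envelope sketch below (the resolution N = 100 is an arbitrary illustrative choice) makes the blow-up explicit:

```python
# The curse of dimensionality: a uniform grid with N points per axis
# needs N**d points in d dimensions, so storage alone explodes.
N = 100  # points per dimension (illustrative choice)

for d in (1, 2, 3, 6, 10):
    points = N ** d
    gigabytes = points * 8 / 1e9  # 8 bytes per double-precision value
    print(f"d = {d:2d}: {points:.3e} grid points, {gigabytes:.3e} GB per scalar field")
```

At d = 10 the grid already demands on the order of 10^12 GB for a single scalar field, which is why brute-force discretization gives way to methods that exploit structure.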
One way to address this challenge is with computational methods that can efficiently simulate the behavior of such systems. A prominent example is “multiscale modeling,” which involves creating simplified models that capture the behavior of a system on multiple spatiotemporal scales.
In multiscale modeling, the behavior of a system on a coarse scale is captured by a simplified model, while the behavior on finer scales is captured by more detailed models. These models are then coupled together in a way that allows information to flow between scales. This approach can reduce the computational resources required to analyze the system and can provide insights into the behavior of the system on multiple scales.
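As a minimal sketch of this coupling, consider a 1-D diffusion problem in which the coarse solver queries a fine-scale microstructure model for an effective coefficient wherever it needs one; the oscillatory microstructure, grid sizes, and time step below are all invented for illustration and do not represent any specific NaPSAC method:

```python
import numpy as np

def fine_scale_coefficient(x, samples=200):
    """Hypothetical fine-scale model: estimate an effective diffusion
    coefficient at coarse location x by averaging a rapidly oscillating
    microstructure (harmonic mean, appropriate for 1-D diffusion)."""
    y = np.linspace(0.0, 1.0, samples)  # fast variable spanning one period
    k_micro = 1.0 + 0.5 * np.sin(np.pi * x) + 0.9 * np.sin(2 * np.pi * (x / 0.01 + y))
    return 1.0 / np.mean(1.0 / k_micro)  # homogenized value

def coarse_step(u, dx, dt):
    """One explicit step of u_t = (k(x) u_x)_x on the coarse grid, with
    k supplied on demand by the fine-scale model (information flowing up)."""
    x = np.arange(len(u)) * dx
    k = np.array([fine_scale_coefficient(xi) for xi in x])
    flux = k[:-1] * np.diff(u) / dx        # fluxes at cell faces
    u[1:-1] += dt / dx * np.diff(flux)     # conservative interior update
    return u

# Diffuse an initial bump; boundary values are held at zero.
u = np.exp(-((np.linspace(0.0, 1.0, 51) - 0.5) ** 2) / 0.01)
u[0] = u[-1] = 0.0
for _ in range(100):
    u = coarse_step(u, dx=0.02, dt=1e-4)
print(f"mass after 100 steps: {u.sum() * 0.02:.4f}")
```

The design point is the direction of information flow: the fine-scale model is evaluated only where and when the coarse solver needs it, instead of resolving the microstructure over the whole domain.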
The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is sponsoring the NanoWatt Platforms for Sensing, Analysis, and Computation (NaPSAC) program.
The program aims to develop novel in-memory computing architectures capable of transformative advances in computing accuracy, scalability, and power efficiency. Its goals are to develop, validate, and benchmark in-memory computing engines whose advances in scalability, programming precision, accuracy, and parallelism enable accurate modeling and simulation of multi-scale and multi-physics phenomena, including highly nonlinear hydrodynamic flows, advanced materials, plasma dynamics, and climate science.
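To make “highly nonlinear hydrodynamic flows” concrete, the viscous Burgers equation is a standard shock-forming benchmark of exactly this type. The finite-difference sketch below runs on conventional NumPy rather than in-memory hardware, and the grid size, viscosity, and time step are illustrative choices sized to keep the explicit scheme stable:

```python
import numpy as np

# Viscous Burgers equation u_t + u*u_x = nu*u_xx on a periodic domain.
nx = 200
dx = 2 * np.pi / nx
nu, dt = 0.05, 1e-3

x = np.arange(nx) * dx
u = np.sin(x) + 1.5                     # smooth profile that steepens into a shock

for _ in range(2000):                   # integrate to t = 2.0
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)          # central u_x
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2   # central u_xx
    u = u + dt * (-u * ux + nu * uxx)

print(f"steepest gradient at t = 2.0: {np.abs(np.gradient(u, dx)).max():.2f}")
```

Resolving the thin shock while tracking the smooth large-scale flow is precisely the kind of multi-scale burden that motivates new computing substrates.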
Alternative approaches to computing are being explored to overcome the memory-processor bottleneck inherent in von Neumann architectures. One notable approach, known as “in-memory” computing, incorporates non-volatile memory elements within the processor.
In-memory computing can be achieved through heterogeneous integration or by using functionalized circuit elements such as memristors. Heterogeneous integration combines memory elements such as phase-change memory or resistive random-access memory (RRAM) with traditional processors, allowing data to be stored and processed in the same location and reducing the data transfer between processor and memory that is the main bottleneck in von Neumann architectures.
Memristors, by contrast, are circuit elements that can both store and process data. They change their resistance based on the current that has flowed through them, which makes them suitable for non-volatile memory and in-memory computing applications. Memristors can perform operations such as multiplication, addition, and differentiation, which are basic building blocks of numerical computing.
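The canonical memristive primitive is analog matrix-vector multiplication in a crossbar: conductances encode the matrix entries, input voltages encode the vector, Ohm's law produces per-device currents, and Kirchhoff's current law sums them along each column. The idealized NumPy sketch below ignores wire resistance, device nonlinearity, and noise, and the conductance window and differential-pair encoding are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target matrix, mapped onto device conductances. Real devices offer a
# limited conductance window, so entries are scaled into [G_min, G_max].
W = rng.uniform(-1.0, 1.0, size=(4, 4))
G_min, G_max = 1e-6, 1e-4                   # siemens, illustrative window
g_scale = G_max - G_min

# Differential pair per weight: W = (G_pos - G_neg) / g_scale handles signs.
G_pos = G_min + g_scale * np.clip(W, 0.0, None)
G_neg = G_min + g_scale * np.clip(-W, 0.0, None)

v = rng.uniform(-0.2, 0.2, size=4)          # input voltages, volts

# Ohm's law gives per-device currents; Kirchhoff's law sums each column,
# so the array computes W.T @ v in one analog step.
i_out = (G_pos - G_neg).T @ v               # column currents, amperes
y = i_out / g_scale                         # decode back to matrix units

print(np.allclose(y, W.T @ v))              # True: matches the digital result
```

Because every row discharges into every column simultaneously, the multiply-accumulate completes in a single analog step rather than O(n²) sequential digital operations, which is the source of the energy and parallelism advantage.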
Of particular interest to this program are architectures based on programmable nanophotonic and nanomechanical resonator arrays.
Nanophotonic resonators are devices that can confine and manipulate light at the nanoscale, allowing for the creation of highly compact and efficient optical devices. Nanomechanical resonators, on the other hand, are devices that can vibrate at high frequencies in response to external stimuli, making them useful for sensing applications.
Programmable nanophotonic and nanomechanical resonator arrays are arrays of such devices that can be controlled and reconfigured in real time, allowing the creation of complex, dynamic systems. Arrays of nanophotonic resonators, for example, can form optical circuits that perform operations such as filtering, modulation, and switching, while arrays of nanomechanical resonators can be used for mass, force, and chemical sensing.
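To give a feel for the filtering behavior, the power transmission of a single critically coupled resonator is approximately a Lorentzian notch in frequency. The sketch below assumes an illustrative carrier near 1550 nm and an assumed quality factor; neither is drawn from any NaPSAC device:

```python
import numpy as np

def notch_transmission(f, f0, Q):
    """Idealized Lorentzian notch of a critically coupled, lossless
    resonator filter: transmission dips to zero on resonance and
    recovers off resonance (parameters are illustrative)."""
    delta = 2 * Q * (f - f0) / f0           # normalized detuning
    return delta ** 2 / (1.0 + delta ** 2)

f0, Q = 193.4e12, 1e4                       # ~1550 nm carrier, assumed Q
f = np.linspace(f0 - 5 * f0 / Q, f0 + 5 * f0 / Q, 1001)
T = notch_transmission(f, f0, Q)

# A Lorentzian notch has full width at half maximum of about f0 / Q.
half = f[T <= 0.5]
print(f"FWHM ≈ {half.max() - half.min():.3e} Hz (expected {f0 / Q:.3e} Hz)")
```

Tuning f0 independently across an array is what turns a bank of such notches into a programmable filter or switching fabric.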
In May 2023, DARPA released a white paper that provides an overview of the NaPSAC program, covering its goals, objectives, and requirements, as well as the challenges that must be addressed to achieve them.
Here are some of the key features of the NaPSAC program:
Goals: The NaPSAC program aims to develop novel in-memory computing architectures that achieve transformative advances in computing accuracy, scalability, and power efficiency.
Objectives: The NaPSAC program has four specific objectives:
- Develop in-memory computing engines that exhibit transformative advances in scalability, programming precision, accuracy, and parallelism.
- Enable accurate modeling and simulation of multi-scale and multi-physics phenomena, including highly nonlinear hydrodynamic flows, advanced materials modeling, plasma dynamics, and climate science.
- Demonstrate the feasibility of using in-memory computing for real-time applications, such as sensor fusion and decision making.
- Develop new programming tools and compilers that will make it easier to develop applications for in-memory computing platforms.
Requirements: The NaPSAC program has several requirements that must be met by the in-memory computing architectures that are developed under the program. These requirements include:
- The architectures must be scalable to large numbers of computing elements.
- The architectures must be highly power efficient.
- The architectures must be programmable.
- The architectures must be able to accurately model and simulate multi-scale and multi-physics phenomena.
- The architectures must be usable for real-time applications.
The white paper also discusses the challenges that need to be addressed in order to achieve the NaPSAC program’s goals. These challenges include:
- The development of new materials and devices that can be used to build in-memory computing architectures.
- The development of new algorithms and programming techniques that can be used to exploit the capabilities of in-memory computing architectures.
- The development of new tools and compilers that can make it easier to develop applications for in-memory computing platforms.
The white paper concludes by discussing the potential impact of the program: NaPSAC could revolutionize the field of scientific computing, yielding new capabilities for modeling, simulation, and analysis of complex systems, with major implications for fields including defense, energy, and healthcare.
In March 2023, DARPA awarded $20 million in funding to six teams to develop in-memory computing architectures. The teams were selected based on their proposals, which outlined their plans for developing novel in-memory computing architectures that could achieve transformative advances in computing accuracy, scalability, and power efficiency.
The six teams that received funding are:
- Caltech: Led by Professor David Awschalom, the Caltech team is developing an in-memory computing architecture based on nanoscale magnetic devices.
- Cornell University: Led by Professor Paul McEuen, the Cornell team is developing an in-memory computing architecture based on nanoscale mechanical resonators.
- Harvard University: Led by Professor Robert Langer, the Harvard team is developing an in-memory computing architecture based on nanoscale polymer networks.
- MIT: Led by Professor Michael Strano, the MIT team is developing an in-memory computing architecture based on nanoscale carbon nanotubes.
- NJIT: Led by Professor Krishna Saraswat, the NJIT team is developing an in-memory computing architecture based on nanoscale transistors.
- University of California, Berkeley: Led by Professor Jennifer Luff, the UC Berkeley team is developing an in-memory computing architecture based on nanoscale photonic devices.
The teams are now developing their in-memory computing architectures and are expected to make significant progress over the next year.
Here are some of the key features of the in-memory computing architectures that are being developed by the six teams:
- Scalability: The architectures are designed to be scalable to large numbers of computing elements. This will allow them to be used to solve complex problems that are currently beyond the reach of traditional computing architectures.
- Power efficiency: The architectures are designed to be very power efficient. This will allow them to be used in portable devices and other applications where power consumption is a major concern.
- Programmability: The architectures are designed to be programmable. This will allow them to be used to solve a wide range of problems.
Together, these architectures could open the door to modeling, simulation, and analysis of complex systems at scales beyond the reach of conventional hardware.
The NaPSAC program has made significant progress in the past year. Several teams have developed promising in-memory computing architectures, and DARPA is currently working with these teams to refine their designs and demonstrate their capabilities. The program is also working to develop new programming tools and compilers that will make it easier to develop applications for in-memory computing platforms.
The NaPSAC program is still in its early stages, but if its in-memory computing engines deliver on their promise, they could reshape how complex systems are modeled, simulated, and analyzed across defense, energy, and healthcare applications.