DoD is exploring opportunities to incorporate autonomy, AI, and human-machine teaming into its weapons and operations. Whether as data-mining tools for intelligence analysts, decision aids for planners, or enablers for autonomous vehicle operations, these systems have the
potential to provide more accuracy, speed, and agility than traditional tools. Yet operational AI and cognitive autonomy systems also face steep challenges in operator trust and acceptance, computational efficiency, verification and validation (V&V), robustness and resilience to
adversarial attack, and human-machine interface and decision explainability.
Modeling and simulation (M&S) is a key enabler for delivering military capabilities in the domains of training, analysis, and decision-making. M&S is the use of models as the basis for simulations that generate data for managerial or technical decision making. Modeling is the process of constructing a model, that is, a physical, mathematical, or logical representation of a system, entity, phenomenon, or process. Because the model behaves like the real system, it helps the analyst predict the effect of changes to that system. Simulation is the operation of a model over time or space, which helps analyze the performance of an existing or proposed system.
Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as functions of the design variables. State-of-the-art system modeling often requires executing slow, highly complex mathematical functions in order to simulate system sub-components with high accuracy. Simulating an entire system requires all sub-component models to take inputs and produce accurate outputs. While comprehensive full-system simulation can provide key insights into the design process that mitigate risk and aid in fault detection, it is often impractical because of slow execution speeds.
For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the airflow around the wing for different shape variables (length, curvature, material, etc.). For many real-world problems, however, a single simulation can take minutes, hours, or even days to complete. As a result, routine tasks such as design optimization, design space exploration, sensitivity analysis, and what-if analysis become impractical, since they require thousands or even millions of simulation evaluations.
One way of alleviating this burden is by constructing approximation models, known as surrogate models, response surface models, metamodels or emulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheap(er) to evaluate.
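To make the idea concrete, here is a minimal Python sketch (not taken from any program cited here) in which a cheap polynomial surrogate is fitted to a toy function standing in for an expensive simulation; the function, sample count, and polynomial degree are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an expensive simulation: in practice this would be a
# CFD run or circuit simulation taking minutes to hours per evaluation.
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x ** 2

# Run the expensive simulation at a small number of sample points.
x_train = np.linspace(-2, 2, 12)
y_train = expensive_simulation(x_train)

# Fit a cheap polynomial surrogate (response surface) to the samples.
coeffs = np.polyfit(x_train, y_train, deg=6)
surrogate = np.poly1d(coeffs)

# The surrogate can now be evaluated thousands of times at negligible cost.
x_query = np.linspace(-2, 2, 10_000)
y_approx = surrogate(x_query)
print("max abs error vs. true model:",
      np.max(np.abs(y_approx - expensive_simulation(x_query))))
```

Once fitted, the surrogate can be queried thousands of times for roughly the cost of a polynomial evaluation, which is exactly the trade-off that makes design optimization and what-if analysis tractable.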
DARPA launched the DITTO program in October 2020 to automatically generate approximate (surrogate) models using AI/ML and enable rapid full-system simulation. The DARPA Ditto program is developing an automated software framework that will train machine learning models. Notably, the program uses simulations of integrated circuits (ICs), mixed-signal circuit boards, and networked distributed systems to exercise and optimize the framework.
Surrogate model
A surrogate model is an engineering method used when an outcome of interest cannot be easily measured directly, so a model of the outcome is used instead. Surrogate models are constructed using a data-driven, bottom-up approach. The exact inner workings of the simulation code are not assumed to be known (or even understood); only the input-output behavior matters. A model is constructed by modeling the response of the simulator at a limited number of intelligently chosen data points. This approach is also known as behavioral modeling or black-box modeling. When only a single design variable is involved, the process is known as curve fitting.
Though surrogate models are most commonly used in lieu of experiments and simulations in engineering design, surrogate modeling can be applied in many other areas of science that involve expensive experiments and/or function evaluations. Popular surrogate modeling approaches are: polynomial response surfaces; kriging; gradient-enhanced kriging (GEK); radial basis functions; support vector machines; space mapping; artificial neural networks; and Bayesian networks. More recently explored methods include Fourier surrogate modeling and random forests.
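As a hedged illustration of two of the approaches listed above (kriging and support vector machines), the sketch below fits both to the same toy black-box function using scikit-learn; the test function, sample sizes, and hyperparameters are assumptions chosen for brevity, not recommendations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import SVR

# Toy stand-in for an expensive black-box simulation.
def expensive_simulation(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

X_train = np.linspace(-2, 2, 15).reshape(-1, 1)
y_train = expensive_simulation(X_train)

# Kriging (Gaussian process regression) surrogate with an RBF kernel.
kriging = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

# Support vector regression surrogate for comparison.
svr = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

X_test = np.linspace(-2, 2, 200).reshape(-1, 1)
for name, model in [("kriging", kriging), ("SVR", svr)]:
    err = np.max(np.abs(model.predict(X_test) - expensive_simulation(X_test)))
    print(f"{name}: max abs error = {err:.3f}")
```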
An important distinction can be made between two different applications of surrogate models: design optimization and design space approximation (also known as emulation). In surrogate-model-based optimization, an initial surrogate is constructed using some of the available budget of expensive experiments and/or simulations. The remaining experiments/simulations are run for designs which the surrogate model predicts may have promising performance. In design space approximation, one is not interested in finding the optimal parameter vector but rather in the global behavior of the system. Here the surrogate is tuned to mimic the underlying model as closely as needed over the complete design space. Such surrogates are a useful, cheap way to gain insight into the global behavior of the system.
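The surrogate-based optimization loop described above can be sketched in a few lines. The one-dimensional objective, polynomial surrogate, and evaluation budget below are all illustrative assumptions; real applications would typically use kriging with an acquisition function rather than simply minimizing the surrogate's prediction.

```python
import numpy as np

def expensive_simulation(x):           # toy objective to be minimized
    return (x - 0.7) ** 2 + 0.1 * np.sin(10 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=8)          # initial expensive evaluations
y = expensive_simulation(X)

for _ in range(10):                    # remaining budget of expensive runs
    # Fit a cheap polynomial surrogate to all data gathered so far.
    surrogate = np.poly1d(np.polyfit(X, y, deg=4))
    # Query the surrogate densely and pick the most promising design ...
    candidates = np.linspace(0, 1, 1001)
    x_next = candidates[np.argmin(surrogate(candidates))]
    # ... then spend one expensive evaluation on it and refit.
    X = np.append(X, x_next)
    y = np.append(y, expensive_simulation(x_next))

print("best design found:", X[np.argmin(y)], "objective:", y.min())
```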
AI/ML for surrogate modeling
Polynomials and artificial neural networks are widely used for surrogate modeling. In machine learning, surrogate models are created by training a simple model, such as a linear regression or a decision tree, on the original inputs and the predictions of a complex model. These surrogates are very helpful for explaining nonlinear and non-monotonic models. This is analogous to the surrogate functions used in optimization problems.
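A minimal sketch of this explanation-oriented use of surrogates, assuming a random forest as the "complex" model and a shallow decision tree as the interpretable surrogate (both choices, and the synthetic data, are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# A "complex" black-box model standing in for a nonlinear, non-monotonic model.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 2] ** 2
complex_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow decision tree on the original inputs and the complex
# model's *predictions*: the tree becomes an interpretable surrogate.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))

# The tree's rules approximately explain how the black box behaves.
print(export_text(surrogate, feature_names=["x0", "x1", "x2"]))
```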
Surrogate modeling assisted by a neural network (NN) also suffers from high computational costs when applied to a large-scale problem with many quantities of interest (QoIs). To approximate a complex model with many outputs, a complicated NN with many wide hidden layers is usually needed to capture the relationship between the model inputs and outputs, because each spatial and temporal output variable is driven by different forcings; in an Earth system model, for example, these include air temperature, humidity, wind speed, precipitation, and radiation. The full connections between the input layer and the first hidden layer, between the hidden layers, and between the last hidden layer and a large output layer involve a very large number of NN weights and biases that must be solved for.
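A back-of-the-envelope calculation shows how quickly the weight count grows with the number of QoIs; the layer sizes below are purely hypothetical.

```python
# Count the weights and biases in a fully connected NN surrogate.
# All layer sizes here are illustrative assumptions, not from any cited model.
n_inputs  = 7          # e.g., meteorological forcings per grid cell
n_outputs = 50_000     # many spatial/temporal quantities of interest (QoIs)
hidden    = [1024, 1024, 1024]

layers = [n_inputs] + hidden + [n_outputs]
n_params = sum(a * b + b for a, b in zip(layers[:-1], layers[1:]))
print(f"trainable parameters: {n_params:,}")   # ~53 million for these sizes
```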
DARPA launches Intelligent Auto-Generation and Composition of Surrogate Models project (Ditto)
Modern machine learning (ML) algorithms have proven to be excellent function approximators (i.e. mathematical stand-ins for real-world functions, operating within some acceptable margin of error when trained on sufficiently representative data), but suffer from two key drawbacks: lack of meta-cognition, and lack of composability.
Composability is defined as an ability for different ML surrogates to aggregate – to collapse and expand into different levels of hierarchy, while maintaining acceptable accuracy and input/output coverage for all the sub-components. Meta-cognition is defined as an ability for different ML surrogates to maintain a meta-awareness of the real-world components they represent (beyond just through training data), and to have some knowledge about how their representative components exist and interact with the overall system structure.
Current state-of-the-art ML algorithms store and incorporate new information solely by training on additional data – these models have no knowledge of what real-world functions or systems they represent. Furthermore, these ML models are typically trained and deployed in isolation. The inability of these models to aggregate (i.e. collapse or expand into varying levels of hierarchy) limits the capabilities of modern ML solutions.
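Ditto's actual architecture is not specified at this level of detail; the following hypothetical Python sketch only illustrates the two properties defined above: each surrogate carries meta-data about the real-world component it represents (meta-cognition), and a composite can aggregate child surrogates while reporting combined accuracy and coverage (composability).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Surrogate:
    """One ML surrogate plus meta-data about the real-world component it stands in for."""
    name: str
    predict: Callable            # the trained approximator (e.g., an NN forward pass)
    meta: Dict[str, str]         # e.g., {"role": "memory controller", "interface": "AXI"}
    accuracy: float              # estimated accuracy over its validated input coverage

@dataclass
class CompositeSurrogate:
    """A hierarchy of surrogates that can collapse into a single aggregate model."""
    name: str
    children: List[Surrogate] = field(default_factory=list)

    def predict(self, inputs):
        # Naive aggregation for illustration: evaluate every child on the shared inputs.
        return {c.name: c.predict(inputs) for c in self.children}

    def worst_case_accuracy(self) -> float:
        # The composite must report accuracy/coverage on behalf of all sub-components.
        return min(c.accuracy for c in self.children)

# Hypothetical usage: two sub-component surrogates collapsed into one ALU-level block.
adder = Surrogate("adder", predict=lambda x: x, meta={"role": "ALU adder"}, accuracy=0.98)
mult = Surrogate("mult", predict=lambda x: x, meta={"role": "ALU multiplier"}, accuracy=0.95)
alu = CompositeSurrogate("alu", children=[adder, mult])
print(alu.worst_case_accuracy())     # 0.95 -- the composite reports worst-case accuracy
```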
The US Defense Advanced Research Projects Agency (DARPA) has issued a solicitation for industry partners for its Intelligent Auto-Generation and Composition of Surrogate Models (Ditto, for short) project, aimed at improving the abilities of machine learning and artificial intelligence. The agency’s researchers hope to make machine learning faster and more accurate by introducing hierarchical ‘thinking’ into AI and machine learning, overcoming the shortcomings of current AI/ML architectures, whose models lack awareness of the real-world functions they represent and are typically trained in isolation.
A statement from DARPA said: “Today’s system modeling technology can be slow, cumbersome, and not always accurate. The DARPA Ditto program seeks to develop an AI framework that can learn to generate surrogate models for different components of a complex system intelligently, aggregate these models while maintaining and communicating surrogate accuracy and coverage, and then integrate these models into one design.”
Ditto Program
The program will focus on simulating integrated circuits (ICs), mixed-signal circuit boards, and networked distributed systems; each Ditto framework will address one of these three system design types. The framework should optimize itself iteratively so that it can adapt continuously as it is exposed to more designs and learn from past mistakes.
The project will first develop a bare-bones framework that demonstrates functional capabilities by applying a wide variety of third-wave AI techniques to generate surrogate models automatically and enable rapid full-system simulation. It will then develop a proof-of-concept framework that delivers meaningful performance gains in a full-system simulation. The entire Ditto project should be worth about $1 million.
Ditto plans to develop an automated software framework that can take in a microelectronics system design and train machine learning models that account for subsystem components, organising them hierarchically, so that engineers can spot design faults and make decisions earlier. The core purpose is to mitigate risk in critical military applications.
The Ditto program will explore novel third-wave AI solutions to this problem through the lens of microelectronic system simulation. If successful, the Ditto program will result in a comprehensive, automated software framework that can take in a microelectronic system design, train effective ML surrogate models of sub-system components (which incorporate some knowledge about the real-world component they represent), and can integrate these ML models in a way that allows them to expand/collapse into appropriate levels of hierarchy while maintaining acceptable levels of accuracy and coverage.
This framework will not only represent an advance in the field of AI, but will also enable faster (and therefore more frequent) comprehensive full-system simulation, allowing microelectronic engineers to make more informed decisions earlier in the design process, detect faults earlier, improve corner-case testing, and mitigate risk for critical fielded system applications.
Modern system simulation often consists of design partitions that fall into one of three categories: (1) Design Under Test (DUT) – the original, new functionality being added to the system, (2) internal pre-existing design components which interact with the DUT, and (3) external design components, which interact with the system but exist outside of it. The DUT must be represented at the level of abstraction intended not only for simulation but also for implementation. As such, the DUT will always require highly detailed models during simulation. However, internal and external non-DUT components present an opportunity for speed-up. Intelligently generating, composing, and integrating surrogate models – faster, but less detailed models – of internal and external non-DUT components could provide significant full-system simulation speed-up, but this process is currently impractical: manually developing and validating these surrogate models is a prohibitively labor-intensive process.
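The partitioning described above can be illustrated with a hypothetical data structure in which only non-DUT components are flagged as candidates for surrogate replacement; the component names and categories are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    category: str        # "DUT", "internal", or "external"
    model: str           # fidelity of the current simulation model

# Hypothetical partition of a design: only non-DUT components are candidates
# for surrogate replacement; the DUT keeps its detailed implementation model.
design = [
    Component("new_dsp_core",      "DUT",      "RTL"),
    Component("memory_controller", "internal", "RTL"),
    Component("legacy_cpu",        "internal", "RTL"),
    Component("ethernet_phy",      "external", "RTL"),
]

surrogate_candidates = [c.name for c in design if c.category != "DUT"]
print("replace with surrogates:", surrogate_candidates)
```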
Modern machine learning techniques present a clear solution to the problem of building lower-accuracy, higher-efficiency sub-component simulation models. Machine learning algorithms have proven to effectively model mathematical functions (within some margin of error) when trained on sufficiently representative data. It follows naturally that these algorithms could act as stand-ins for more complex mathematical models of system sub-components, if provided input/output training data from the sub-components they represent. However, these sub-components do not operate in isolation within the system – they interact with one another and have different roles within the larger system context.
This reality of full-system modeling presents unique challenges: state-of-the-art ML algorithms are often deployed in isolation (rather than working harmoniously within a larger system), and these algorithms have no meta-awareness embedded into their architecture of what functionality they represent within the larger system context. ML algorithms that are able to compose themselves into a larger system while maintaining acceptable accuracy – all while retaining information about their unique role within the system – are the focus of this research.
The DARPA Ditto program will develop novel AI approaches and architectures to build a framework that can learn to generate effective, efficient surrogate models for different components of a complex system intelligently, aggregate these models effectively while maintaining and communicating surrogate accuracy and coverage, and then integrate these models into a single design.
System Types
Each Ditto framework will address one of three system design types: integrated circuits (ICs), mixed-signal printed circuit boards (PCBs), or networked distributed systems (NDS). The framework should take in a system design (with DUT and non-DUT components annotated) and corresponding stimulus and response data traffic for effective surrogate training. Traffic and components can additionally be annotated to provide “meta-data” about the component, which should be stored and integrated into the surrogate model – beyond additional training data – to achieve meta-cognition.
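The solicitation does not prescribe a concrete input format; the dictionary below is a purely hypothetical illustration of the kind of annotated design, traffic, and meta-data inputs described above (all component names and file paths are invented).

```python
# Hypothetical shape of the framework's inputs: a system design with DUT and
# non-DUT components annotated, recorded stimulus/response traffic for
# surrogate training, and optional per-component meta-data.
system_design = {
    "system_type": "IC",                      # IC, mixed-signal PCB, or NDS
    "components": [
        {"name": "new_accelerator", "is_dut": True,  "meta": {"role": "DUT"}},
        {"name": "ddr_controller",  "is_dut": False, "meta": {"role": "memory interface"}},
        {"name": "pcie_endpoint",   "is_dut": False, "meta": {"role": "protocol engine"}},
    ],
    "traffic": {
        # stimulus/response traces per non-DUT component, used for surrogate training
        "ddr_controller": {"stimulus": "ddr_stim.trace", "response": "ddr_resp.trace"},
        "pcie_endpoint":  {"stimulus": "pcie_stim.trace", "response": "pcie_resp.trace"},
    },
}
```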
Proposers should choose one of the following system types for their Ditto architectures to address.
1) Integrated Circuits (ICs)
In order to construct a meaningful simulation, the DUT has to be tested in the system context where it interacts with both 1) pre-existing design components (CPUs, memories, peripherals) and 2) external components that both send/receive data and control traffic in/out of the design from outside the system (i.e. protocol engines). The most common approach is to simulate all the components as Register-Transfer Level (RTL), resulting in the overall performance of ~10 cycles per second (cps).
Some components can be replaced with manually developed models, at higher levels of abstraction. Specifically, fast and abstract models for CPUs, memories, some common peripherals and protocol engines capable of 100K to 5M cps already exist (see Figure A). However, these fast models typically represent a small portion of the overall simulation, whose performance is still dictated by the slow DUT and RTL-based internal and external design components for which fast models do not exist. These unnecessarily slow models that still exist in the system will be the target for Ditto architectures.
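A simplified cost model (an assumption for illustration, not a Ditto metric) makes the bottleneck effect concrete: if each component simulated alone runs at some cycles-per-second rate, a lock-step full-system simulation runs roughly at the reciprocal of the summed per-cycle costs, so replacing slow non-DUT RTL models with fast surrogates leaves the DUT as the only remaining bottleneck.

```python
# Simplified cost model (an illustrative assumption, not a Ditto metric):
# if component i simulated alone runs at cps_i cycles/second, its cost per
# simulated cycle is 1/cps_i, and a lock-step full-system simulation runs
# at roughly 1 / sum(1/cps_i) -- the slowest models dominate.
def system_cps(component_cps):
    return 1.0 / sum(1.0 / c for c in component_cps)

# Hypothetical numbers: a DUT RTL model plus three non-DUT RTL blocks at
# ~40 cps each reproduces the ~10 cps overall figure quoted above.
all_rtl = [40, 40, 40, 40]
# Replacing the three non-DUT blocks with fast surrogates (~100K cps each)
# leaves the detailed DUT model as the only bottleneck.
with_surrogates = [40, 1e5, 1e5, 1e5]

print(f"all RTL:         {system_cps(all_rtl):.1f} cps")          # ~10 cps
print(f"with surrogates: {system_cps(with_surrogates):.1f} cps")  # ~40 cps
```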
2) Mixed Signal Printed Circuit Boards (PCBs)
Modern PCB designs consist of a board which contains components that fall into three categories:
(1) DUT (Design Under Test) – the original, digital or analog IC being integrated onto the board, (2) pre-existing digital and analog ICs, which make up the rest of the system and interact with the DUT and (3) discrete components – resistors, capacitors, and other small, passive devices that do not have to be present in the simulation. It is important to note that the objective of mixed-signal PCB simulation is not to verify the functionality of the DUT in isolation (this is typically done prior to any full-system simulation), but rather to verify that the entire PCB (with the DUT integrated) functions correctly.
The DUT is a manufactured IC that may be represented by models at the register-transfer, gate or transistor level. The other components on the board may have simulation models available, but these are typically very detailed and slow. Digital ICs may have gate-level models that alone would execute <1 cps, and analog ICs may have transistor-level models that are more than an order of magnitude slower than that.
In order to construct a meaningful mixed-signal PCB simulation, the DUT has to be tested in the system context where it interacts with other digital and analog components on the board. Because of this, the slowest models determine the overall simulation speed. Thus, the presence of a single transistor-level model would dictate the use of an analog simulator and drive simulation performance to unacceptable levels for meaningful multi-component testing. This represents the barrier to board-level simulation that has been in place for decades – an issue that will be addressed by Ditto architectures.
3) Networked Distributed Systems (NDS)
Modern NDSs are found in numerous applications, and are all structurally similar – tens, hundreds, or thousands of PCBs connected by a network. For example, an autonomous ground fighting vehicle can have over 200 Electronic Control Units (ECUs), and a large transport aircraft can have over 2,000 Line Replaceable Units (LRUs). The components that make up an NDS have different names in different industries, but here they will be generically referred to as ECUs (defined as PCBs that contain a processor, memory, network access, and a sensor/actuator interface that interacts with physical objects).
DARPA Awards
In March 2021, Julia Computing was awarded funding by the US Defense Advanced Research Projects Agency (DARPA) to accelerate the simulation of analog and mixed-signal circuit models using AI and ML.
DARPA launched the Ditto program the previous year to explore novel third-wave AI solutions through the lens of microelectronic system simulation. DARPA stated that the effort seeks to develop an automated software framework that can take in a microelectronic system design, train effective ML surrogate models of sub-system components, and simulate these designs 1,000x faster while maintaining acceptable levels of accuracy. Commenting on the award, Keno Fischer, project PI and CTO at Julia Computing, said, “Julia’s performance and differentiable programming capabilities give us a unique advantage in creating novel tools for modelling and simulation.”
He added, “Using newly developed surrogate architectures, such as our Continuous Time Echo State Network (CTESN) architecture, we have already been able to demonstrate acceleration in excess of 100x by employing these techniques in multi-physics simulations and are excited to bring this technology to the electronics simulation space.”
The company is partnering with Boston-based quantum computing startup QuEra Computing to demonstrate these novel capabilities for simulations of the control electronics of QuEra’s neutral atom quantum computers. Julia is one of the high-performance languages of choice for data science, artificial intelligence, and modelling and simulation applications. According to sources, QuEra’s sophisticated designs stretch the boundaries of traditional simulation tooling, making a significant acceleration in simulation performance all the more crucial. Julia Computing intends to make these capabilities available to the larger industry in the near future.
University of Massachusetts Awarded DITTO Contract to Improve AI in March 2021
The Defense Advanced Research Projects Agency (DARPA) has awarded the DARPA DITTO (Intelligent Auto-Generation and Composition of Surrogate Models) project to the University of Massachusetts Amherst Biologically Inspired Neural & Dynamical Systems (BINDS) Laboratory. This is one of the agency’s AI Exploration projects. UMass’s co-PI on this award is Lockheed Martin Advanced Technology Laboratories. DITTO aims to develop an AI machine learning framework that can speedily simulate a complex system by automatically generating surrogate models for the system’s components and integrating them into one design. The UMass-Lockheed Martin team seeks to design such a machine learning framework with their Modular Knowledgeable AI (MOKA) system.
MOKA is intended to provide a large leap in AI by incorporating meta-cognition of all available knowledge at a level of accuracy not previously possible, and by designing an original neural compiler that aggregates models into a modular system that works accurately both locally and, globally, as a single super-intelligent system. Hava Siegelmann, director of the UMass Amherst BINDS lab, said, “Meta-cognition is the ability of the human mind to leverage knowledge about the self in relation to a given task. Our proposed MOKA system will incorporate knowledge about self, its inputs, and other components it may interface with, already starting at the neural architecture. This will lead to computing that is informed of itself and its environment. This capability will vastly reduce the reliance and time of training and also greatly improve capabilities and accuracy.”
“This is an exciting opportunity for Lockheed Martin Advanced Technology Laboratories to work with the BINDS lab”, said Janet Wedgwood, Lockheed Martin lead engineer on the DITTO project, “We are combining our vast experience in Integrated Circuits design and testing with the top level of the University’s machine learning neural networks to propose an automated proof-of-concept software framework for fast and accurate testing of new and updated designs.”
The complexity and time-consuming nature of state-of-the-art hardware simulations have a significant impact on the cost and schedule of system development. Design flaws can cost millions to billions of dollars to fix and can lead to dangerous outcomes. The MOKA system reduces those costs by incorporating knowledge at the level of the neural architecture, keeping a robust awareness of itself and its environment, a capability that reduces the need for training while greatly improving accuracy.
SRI International receives contract on Ditto Project under DARPA AI Exploration Program, in March 2021
The Ditto project seeks to use microelectronic system simulation to explore third-wave artificial intelligence (AI) solutions that incorporate real-world knowledge into modern machine learning (ML) functions and systems. Ditto seeks to build an AI framework that can learn to intelligently generate effective surrogate models for different components of a complex system, maintain and communicate surrogate accuracy and coverage, and aggregate these models into a single design.
Under the Ditto project, SRI will apply its Deep Adaptive Semantic Logic (DASL) technology to speed up the verification process for integrated circuit (IC) design. DASL’s unified reasoning and learning system provides unprecedented ability to engage with and train AI by combining formal knowledge with neural networks. This integration of bottom-up data-driven modeling with top-down knowledge-based reasoning maximizes productivity by accelerating machine learning and reducing data requirements. DASL has achieved state-of-the-art performance on image processing tasks using less than 4% of the data required by competing techniques.
SRI’s Artificial Intelligence Center (AIC) is collaborating with the Center for Vision Technologies (CVT), which has a rich history in circuit design. Working together, they will apply CVT’s test vectors and AIC’s DASL to expedite IC verification processes. Reducing the amount of data needed for learning and circuit emulation will allow circuit designers to identify errors and make corrections faster, accelerating the IC design process. The program is currently in its first phase.