
Military AI advancing to next-generation Robust and Adversarial AI

Artificial Intelligence (AI) technologies aim to develop computers or robots that match or exceed human intelligence in tasks such as learning and adaptation, reasoning and planning, decision-making and autonomy, creativity, and extracting knowledge and making predictions from data.

 

Within AI is a large subfield called machine learning (ML). Machine learning gives computers the capability to learn from data, so that a new program need not be written for every task. Machine learning algorithms extract information from training data to discover patterns, which are then used to make predictions on new data.
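As a minimal illustration of that idea (not drawn from any specific military system; the feature values and labels below are invented), a classifier can be fit to labeled training examples and then asked to predict the label of data it has never seen:

from sklearn.ensemble import RandomForestClassifier

# Each row is a feature vector describing an object; each label is the class to predict.
X_train = [[0.9, 120.0], [0.2, 30.0], [0.85, 110.0], [0.1, 25.0]]
y_train = ["vehicle", "person", "vehicle", "person"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)            # learn patterns from the training data

# Predict on a sample the model has never seen.
print(model.predict([[0.8, 105.0]]))   # expected: ['vehicle']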

 

Machine learning algorithms require large training datasets containing hundreds to thousands of relevant features, and careful selection and extraction of these feature sets is required for learning. To ease this challenge, scientists turned to a subfield of machine learning called brain-inspired computation, which mimics some form or functionality of the human brain.

 

Machine learning is a subset of AI. That is, all machine learning counts as AI, but not all AI counts as machine learning. For example, symbolic logic – rules engines, expert systems and knowledge graphs – could all be described as AI, and none of them are machine learning.

 

 

AI in Military

One emerging threat is a dangerous arms race in AI-powered weaponry among countries, led by the US, China, and Russia.

 

AI is enabling many military capabilities and operations, such as intelligence, surveillance, and reconnaissance; target identification; faster weapon development and optimization; command and control; logistics; and war gaming. Adversaries could also use AI to carry out information operations or psychological warfare.

 

AI systems can accurately analyze the huge amounts of data generated during peace and conflict. They can quickly interpret information, which could lead to better decision-making, and can fuse data from different sensors into a coherent common operating picture of the battlefield.

 

AI systems can react significantly faster than systems that rely on human input; AI is therefore accelerating the complete “kill chain” from detection to destruction. This allows militaries to better defend against high-speed threats such as hypersonic weapons, which travel at 5 to 10 times the speed of sound.

 

AI enhances the autonomy of unmanned air, ground, and underwater vehicles. It is enabling concepts such as vehicle swarms, in which multiple unmanned vehicles autonomously collaborate to achieve a task. For example, drone swarms could overwhelm or saturate adversary air defense systems.

 

Soldiers are forming integrated teams with unmanned ground and air vehicles and will carry man-portable swarming munitions. Cognitive radio and cognitive electronic warfare systems are already in use.

 

AI integrated with 5G and the military internet of things (MIoT) is enabling smart bases, soldier healthcare, battlefield awareness, C4ISR, and fire-control systems. In fire-control systems, sensor networks with digital analytics will enable fully automated responses to real-time threats, deliver firepower with pinpoint precision, and allow networked munitions to track mobile targets or be redirected in flight.

 

Ultimately, AI is emerging as the biggest force multiplier in the military, being embedded in every platform, weapon, network, and system, from the individual soldier to the entire military enterprise, making them smart and intelligent.

 

Looking to the future, AI is enabling the next military revolution, “intelligentized warfare,” in which AI will face AI: we will have to attack adversary AI systems while protecting our own.

 

Across all domains of warfare, AI is being embedded in every system, weapon, platform, and network: smart sensors; autonomous UAVs and UCAVs in the air domain; smart ships, UUVs, and USVs in the sea domain; and unmanned tanks, service robots, and combat robots on land.

 

AI is further divided into two categories: narrow AI and general AI. Narrow AI systems can perform only the specific task that they were trained to perform, while general AI systems would be capable of performing a broad range of tasks, including those for which they were not specifically trained. General AI systems do not yet exist.

 

Third AI wave

DARPA  breaks down AI technology development into three distinct waves. The first wave, “describe,” focused on developing platforms that employed a rules-based system. These AI platforms are the basis of commercial products such as TurboTax. Early work in AI emphasized handcrafted knowledge, and computer scientists constructed so-called expert systems that captured the specialized knowledge of experts in rules that the system could then apply to situations of interest. Such “first wave” AI technologies were quite successful – tax preparation software is a good example of an expert system – but the need to handcraft rules is costly and time-consuming and therefore limits the applicability of rules-based AI.
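As a rough sketch of what such a handcrafted, rules-based “first wave” system looks like in code, the toy rules below are invented purely for illustration: they encode expert knowledge as explicit if/then statements, with no learning from data.

def classify_contact(speed_knots: float, altitude_ft: float) -> str:
    # Handcrafted expert rules: fixed thresholds chosen by a human specialist.
    if altitude_ft > 0 and speed_knots > 400:
        return "fast air contact"
    if altitude_ft > 0:
        return "slow air contact"
    if speed_knots > 30:
        return "fast surface contact"
    return "slow surface contact"

print(classify_contact(speed_knots=550, altitude_ft=30000))  # fast air contact
print(classify_contact(speed_knots=12, altitude_ft=0))       # slow surface contact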

 

The second wave, “recognize,” is made up of the machine learning systems that are prevalent today. The past few years have seen an explosion of interest in this sub-field of AI, which applies statistical and probabilistic methods to large datasets to create generalized representations that can be applied to future samples. Foremost among these approaches are deep learning (artificial) neural networks, which can be trained to perform a variety of classification and prediction tasks when adequate historical data is available.
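In contrast to the handcrafted rules above, a minimal “second wave” sketch (using PyTorch, with random stand-in data purely to show the shape of the training loop) learns its mapping from historical examples rather than from explicit rules:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 10)           # 256 historical samples, 10 features each
y = torch.randint(0, 2, (256,))    # binary labels

for epoch in range(20):            # fit a generalized representation to the data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(torch.randn(1, 10)).argmax(dim=1))  # prediction on a new sample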

 

Such systems can classify objects of interest and take the burden off human analysts, who must often pore through mounds of data to turn it into actionable information. However, while the theory behind these second-wave technologies was established in the 1970s, much more work is still needed to mature them. Additionally, the task of collecting, labelling, and vetting the data on which to train such “second wave” AI techniques is prohibitively costly and time-consuming.

 

The performance of these systems can make them very useful for tasks such as identifying a T-90 main battle tank in a satellite image, identifying high-value targets in a crowd using facial recognition, translating text for open-source intelligence, and generating text for use in information operations. The application areas where AI has been most successful are those with large quantities of labelled data, as with ImageNet, Google Translate, and text generation. AI is also very capable in areas such as recommendation systems, anomaly detection, prediction systems, and competitive games. An AI system in these domains could assist the military with fraud detection in its contracting services, predicting when weapons systems will fail due to maintenance issues, or developing winning strategies in conflict simulations. All of these applications and more can be force multipliers in day-to-day operations and in the next conflict.

 

The third wave, “explain,” is where the future of AI is headed and focuses on adding context and trust to artificial intelligence platforms, said Peter Highnam, the agency’s deputy director. DARPA’s AI Next program has three thrusts, he said: to increase the robustness of second-wave AI technologies, to aggressively apply second-wave systems to new applications, and to further examine third-wave technologies.

 

AI applications include but are not limited to intelligence, surveillance, and reconnaissance; logistics; cyber operations; command and control; and semiautonomous and autonomous vehicles. These technologies are intended in part to augment or replace human operators, freeing them to perform more complex and cognitively demanding work. In addition, according to a US Congress report, AI-enabled systems could (1) react significantly faster than systems that rely on operator input; (2) cope with an exponential increase in the amount of data available for analysis; and (3) enable new concepts of operations, such as swarming (i.e., cooperative behavior in which unmanned vehicles autonomously coordinate to achieve a task), which could confer a warfighting advantage by overwhelming adversary defensive systems.

 

AI Challenges

In other areas, AI falls well short of human-level achievement. These include working with scenarios not previously seen by the AI; understanding the context of text (sarcasm, for example) and of objects; and multi-tasking (i.e., being able to solve problems of multiple types). Most AI systems today are trained to do one task, and to do so only under very specific circumstances. Unlike humans, they do not adapt well to new environments and new tasks.

 

As the military looks to incorporate AI’s success in these tasks into its systems, some challenges must be acknowledged. The first is that developers need access to data. Many AI systems are trained using data that has been labeled by some expert (e.g., labeling scenes that include an air defense battery), usually a human. Large datasets are often labeled by companies that employ manual methods. Obtaining and sharing this data is a challenge, especially for an organization that prefers to classify data and restrict access to it. An example military dataset might contain images produced by thermal-imaging systems and labeled by experts to describe the weapon systems, if any, found in each image. Without sharing such data with preprocessors and developers, an AI that uses that set effectively cannot be created.
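For illustration only, a labeled dataset of that kind might be organized along the following lines (the file names, labels, and fields are hypothetical, not an actual military schema):

from dataclasses import dataclass, field

@dataclass
class LabeledThermalImage:
    file_path: str                               # path to the thermal image
    labels: list = field(default_factory=list)   # weapon systems an expert identified, may be empty
    annotator: str = ""                          # who produced the label

dataset = [
    LabeledThermalImage("thermal/scene_0001.png", ["air_defense_battery"], "analyst_a"),
    LabeledThermalImage("thermal/scene_0002.png", [], "analyst_b"),  # nothing of interest
]
print(len(dataset), "labeled examples")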

 

AI systems are also vulnerable to becoming very large (and thus slow), and consequently susceptible to “dimensionality issues.” For example, training a system to recognize images of every possible weapon system in existence would involve thousands of categories. Such systems require an enormous amount of computing power and a great deal of dedicated time on those resources. And because we are training a statistical model, complete accuracy would in principle require an unlimited supply of such images, which we cannot achieve. Furthermore, as we train these AI systems, we often attempt to force them to follow “human” rules, such as the rules of grammar. However, humans often ignore these rules, which makes developing successful AI systems for things like sentiment analysis and speech recognition a challenge. Finally, AI systems can work well in uncontested, controlled domains. However, research is demonstrating that under adversarial conditions, AI systems can easily be fooled, resulting in errors. Many DoD AI applications will certainly operate in contested spaces, like the cyber domain, and thus we should be wary of their results.

 

An AI’s image-processing capability is not very robust when given images that differ from its training set, for example images where lighting conditions are poor, that are taken at an oblique angle, or that are partially obscured. Unless these types of images were in the training set, the model may struggle, or fail, to accurately identify the content.
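One common (partial) mitigation, sketched here with torchvision and illustrative parameter values, is to augment the training images so that poor lighting, odd viewing angles, and occlusions do appear in the training set; this is a generic technique, not a specific military pipeline:

import torch
from torchvision import transforms

# Assumes a recent torchvision that applies these transforms to tensor images.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5),  # simulate poor lighting
    transforms.RandomRotation(degrees=45),                 # simulate oblique viewing angles
    transforms.RandomErasing(p=0.5),                       # simulate partial occlusion
])

img = torch.rand(3, 224, 224)      # stand-in for a real training image (C, H, W)
augmented = augment(img)           # augmented copy to add to the training set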

 

Chatbots that might aid our information-operations missions are limited to hundreds of words and thus cannot completely replace a human, who can write pages at a time. Prediction systems, such as IBM’s Watson weather-prediction tool, struggle with dimensionality issues and the availability of input data because of the complexity of the systems they are trying to model. Research may solve some of these problems, but few will be solved as quickly as predicted or desired.

 

Another simple weakness of AI systems is their inability to multi-task. A human is capable of identifying an enemy vehicle, deciding which weapon system to employ against it, predicting its path, and then engaging the target. This fairly simple set of tasks is currently impossible for an AI system to accomplish. At best, a combination of AIs could be constructed in which individual tasks are given to separate models. That type of solution, even if feasible, would entail a huge cost in sensing and computing power, not to mention the training and testing of the system. Many AI systems are not even capable of transferring their learning within the same domain. For example, a system trained to identify a T-90 tank would most likely be unable to identify a Chinese Type 99 tank, despite the fact that both are tanks and both tasks are image recognition. Many researchers are working to enable systems to transfer their learning, but such systems are years away from production.
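Transfer learning, which those researchers are pursuing, typically looks something like the following sketch: a network pretrained on one image-recognition task is reused as a feature extractor and only its final layer is retrained for a new set of classes. The class count and the choice of ResNet-18 are illustrative, and the snippet assumes torchvision 0.13 or later:

import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False                    # freeze the learned features

model.fc = nn.Linear(model.fc.in_features, 3)      # new head for 3 new target classes
# Only model.fc is then trained on the new labeled data; the rest of the network is reused.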

 

Artificial intelligence systems are also very poor at understanding inputs and the context within those inputs. AI recognition systems do not understand what an image is; they simply learn the textures and gradients of its pixels. Given scenes with those same gradients, AIs readily misidentify portions of the picture. This lack of understanding can result in misclassifications that humans would not make, such as identifying a boat on a lake as a BMP.

 

This leads to another weakness of these systems: the inability to explain how they made their decisions. Most of what occurs inside an AI system is a black box, and there is very little a human can do to understand how the system makes its decisions. This is a critical problem for high-risk systems such as those that make engagement decisions or whose output may be used in critical decision-making processes. The ability to audit a system and learn why it made a mistake is legally and morally important. Additionally, how we assess liability in cases where AI is involved remains an open research question. There have been many recent examples in the news of AI systems making poor decisions based on hidden biases, in areas such as loan approvals and parole determinations. Unfortunately, work on explainable AI is many years from bearing fruit.

 

AI systems also struggle to distinguish between correlation and causation. The infamous example often used to illustrate the difference is the correlation between drowning deaths and ice cream sales. An AI system fed statistics about these two items would not know that the two patterns correlate only because both are a function of warmer weather, and it might conclude that to prevent drowning deaths we should restrict ice cream sales. This type of problem could manifest itself in a military fraud-prevention system that is fed data on purchases by month. Such a system could errantly conclude that fraud increases in September as spending increases, when really it is just a function of end-of-fiscal-year spending habits.
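The trap is easy to reproduce with synthetic data (the numbers below are invented): two series that are both driven by a third variable, temperature, correlate strongly even though neither causes the other, and a naive system sees only the correlation:

import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(0, 35, size=120)                  # the hidden common cause
ice_cream   = 10.0 * temperature + rng.normal(0, 20, 120)   # driven by temperature
drownings   = 0.5 * temperature + rng.normal(0, 2, 120)     # also driven by temperature

# Strong correlation despite no causal link between the two series themselves.
print(np.corrcoef(ice_cream, drownings)[0, 1])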

 

Even without these AI weaknesses, the main area the military should be concerned with at the moment is adversarial attacks. We must assume that potential adversaries will attempt to fool or break any accessible AI systems that we use. Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to evade intrusion-detection systems; and logistical systems will be fed altered data to clog the supply lines with false requirements.

 

Adversarial attacks can be separated into four categories: evasion, inference, poisoning, and extraction. It has been shown that these types of attacks are easy to accomplish and often do not require sophisticated computing skills. Whether it involves real-world sensors or later manipulation of the resulting digital data, alteration of the input to an existing classifier is called “evasion.”

 

Evasion attacks attempt to fool an AI engine, often in the hope of avoiding detection: hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus.

Such deception attacks, although rare so far, can meddle with machine learning algorithms, and subtle changes to real-world objects can, in the case of a self-driving vehicle, have disastrous consequences. McAfee researchers tricked a Tesla into accelerating 50 miles per hour above the intended speed by adding a two-inch piece of tape to a speed limit sign. The research was one of the first examples of manipulating a device’s machine learning algorithms: while a human viewing the altered sign would have no difficulty interpreting its meaning, the ML misread the posted limit, and in a real-world attack the car would accelerate far beyond it, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application.

 

The primary survival skill of the future may be the ability to hide from AI sensors. As a result, the military may need to develop a new type of AI camouflage to defeat AI systems, because it has been shown that simple obfuscation techniques, such as strategic tape placement, can fool AI.

 

Evasion attacks are often preceded by inference attacks, which gather information about the AI system that can then be used to enable evasion.

 

Another type of vulnerability, known as “poisoning,” occurs if an attacker can insert doctored data into the training set to achieve a malicious end. Mislabeled training data can move the decision boundary that separates different classifications. Here the threat would be enemy access to the datasets used to train our tools: mislabeled images of vehicles could be inserted to fool targeting systems, or maintenance data could be manipulated so that imminent system failure is classified as normal operation. Given the vulnerabilities of our supply chains, this is not unimaginable and would be difficult to detect.
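A minimal sketch of how label poisoning shifts a decision boundary is shown below (the data, the flip rate, and the logistic-regression model are all illustrative): an attacker with access to the training pipeline relabels a fraction of the “threat” class as benign, and the retrained model becomes markedly less confident about genuine threats.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)                # 0 = benign, 1 = threat

clean = LogisticRegression().fit(X, y)             # model trained on honest labels

y_poisoned = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=60, replace=False)
y_poisoned[flipped] = 0                            # attacker relabels real threats as benign
poisoned = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[1.5, 1.5]])                     # a genuine threat-like sample
print(clean.predict_proba(probe)[0, 1],            # high threat probability
      poisoned.predict_proba(probe)[0, 1])         # noticeably lower after poisoning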

 

Extraction attacks exploit access to the AI’s interface to learn enough about the AI’s operation to create a parallel model of the system. If our AIs are not secure from unauthorized users, then those users could predict decisions made by our systems and use those predictions to their advantage. One could envision an opponent predicting how an AI-controlled unmanned system will respond to certain visual and electromagnetic stimuli and thus influence its route and behavior.
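The mechanics of extraction can be sketched as follows (the victim and surrogate models are arbitrary stand-ins): the attacker sends their own inputs to the exposed interface, records the predictions, and trains a surrogate on those stolen labels that then approximates the target’s behavior.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
target = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)   # the victim model

queries = rng.normal(size=(2000, 4))           # attacker-chosen probe inputs
stolen_labels = target.predict(queries)        # answers leaked through the interface
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

test = rng.normal(size=(200, 4))
agreement = (surrogate.predict(test) == target.predict(test)).mean()
print(f"surrogate agrees with target on {agreement:.0%} of new inputs")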

 

Similarly, an attacker can be most successful if they know the internal design details of a machine learning system, in what is called a “white-box” scenario. “The white-box case is interesting when you want to understand the worst-case scenario,” said Battista Biggio of the University of Cagliari in Sardinia, Italy. “We expect then that in practice these systems remain more secure also under more restrictive models of attack.” Effective security protocols can obscure both the design and the data to create a more challenging “black-box” scenario, although a “gray-box” scenario, with some kinds of partial information known or inferred, is more realistic.

 

In 2013, Biggio and his coworkers showed how to fool a machine learning system by exploiting knowledge of the internal “gradients” it uses for training to design adversarial examples. “The basic idea is to use the same algorithm that is used for learning to actually bypass the classifier,” Biggio said. “It’s fighting machine learning with machine learning.”
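The fast gradient sign method is one simple, widely cited instance of this gradient-based idea, sketched below with a placeholder network and random input rather than any real targeting system: the attacker nudges the input in the direction that most increases the classifier’s loss, producing an adversarial example.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)       # a benign input
y_true = torch.tensor([1])                       # its correct class

loss = loss_fn(model(x), y_true)
loss.backward()                                  # gradient of the loss w.r.t. the input

epsilon = 0.1                                    # perturbation budget
x_adv = x + epsilon * x.grad.sign()              # small step that increases the loss
print(model(x).argmax(1), model(x_adv).argmax(1))  # the predicted classes may now differ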

 

Even if the system details are hidden, however, researchers have found that attacks that work against one system frequently work against others that have a different—possibly unknown—internal structure. This initially surprising observation reflects the power of deep learning systems to find patterns in data, Biggio said. “In many cases, different classifiers tend to learn the same correlations from the data.”

 

This “transferability” of attacks highlights the risk of a common training set like ImageNet, which contains a huge set of annotated images that is widely used for training vision systems. Although this common training corpus makes it easy to compare the performance of different classifiers, it makes them all potentially vulnerable to the same biases, whether malicious or not.

 

Developing Next Generation Military AI

In September 2018, DARPA announced a multi-year investment of more than $2 billion in new and existing programs, called the “AI Next” campaign. One focus is common sense reasoning, defined as “the basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate.” AI experts note the gap between AI inference and the ability to design systems that can draw directly on the rules of inference to achieve common sense reasoning. “Articulating and encoding this obscure-but-pervasive capability is no easy feat,” DARPA program managers note.

 

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving,” said Dr. Walker. “Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

 

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools. Towards this end, DARPA research and development in human-machine symbiosis sets a goal to partner with machines. Enabling computing systems in this manner is of critical importance because sensor, information, and communication systems generate data at rates beyond which humans can assimilate, understand, and act. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA is focusing its investments on a third wave of AI that brings forth machines that understand and reason in context.

 

Robust AI: AI technologies have demonstrated great value to missions as diverse as space-based imagery analysis, cyberattack warning, supply chain logistics and analysis of microbiologic systems. At the same time, the failure modes of AI technologies are poorly understood. DARPA is working to address this shortfall, with focused R&D, both analytic and empirical. DARPA’s success is essential for the Department to deploy AI technologies, particularly to the tactical edge, where reliable performance is required.

 

Adversarial AI: The most powerful AI tool today is machine learning (ML). ML systems can be easily duped by changes to inputs that would never fool a human. The data used to train such systems can be corrupted. And, the software itself is vulnerable to cyber attack. These areas, and more, must be addressed at scale as more AI-enabled systems are operationally deployed.

 

High Performance AI: Computer performance increases over the last decade, in combination with large datasets and software libraries, have enabled the success of machine learning. More performance at lower electrical power is essential to allow both data center and tactical deployments. DARPA has demonstrated analog processing of AI algorithms with 1000x speedup and 1000x power efficiency over state-of-the-art digital processors, and is researching AI-specific hardware designs. DARPA is also attacking the current inefficiency of machine learning by researching methods to drastically reduce the requirement for labeled training data.

 

Next Generation AI: The machine learning algorithms that enable face recognition and self-driving vehicles were invented over 20 years ago. DARPA has taken the lead in pioneering research to develop the next generation of AI algorithms, which will transform computers from tools into problem-solving partners. DARPA research aims to enable AI systems to explain their actions, and to acquire and reason with common sense knowledge. DARPA R&D produced the first AI successes, such as expert systems and search, and more recently has advanced machine learning tools and hardware. DARPA is now creating the next wave of AI technologies that will enable the United States to maintain its technological edge in this critical area.

 

 

Lethal Autonomous Weapon Systems (LAWS)

AI systems are also divided into narrow AI and artificial general intelligence (AGI). Current systems are predominantly narrow AI, purpose-built to perform a limited task. Future AGI systems would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply them to any task.

 

One of the risks of general AI is that it would speed the development of LAWS: weapon systems that can make life-and-death decisions without human intervention. Such systems will use sensor suites and AI-based computer algorithms to autonomously classify a target as hostile, make an engagement decision, and then guide a weapon to the target. Many organizations, including the UN, have called for a global ban on lethal autonomous weapon systems.

 

Moving beyond AGI is the concept of artificial superintelligence (ASI). ASI systems would become self-aware and would surpass human intelligence in every aspect, from creativity to problem-solving. According to some experts, this type of AI may even present an existential threat to humanity.

 

According to Gary Marcus, professor of cognitive science at N.Y.U., “The smarter machines become, the more their goals could shift. Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

 

We have come full circle: we began with the role of AI in global security, and now we find that it is also a global threat.

 

 

References and Resources also include:

https://mwi.usma.edu/artificial-intelligence-future-warfare-just-not-way-think/
