
Opportunities and Challenges of applying AI/ML & Deep Learning technologies in Military and Cyber Security

Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. Nations wherein firms fail to develop successful AI products or services are at risk of losing global market share. As Andrew Moore, former dean of computer science at Carnegie Mellon University and current head of Google Cloud AI stated, this part of the race will determine “who will be the Googles, Amazons, and Apples in 2030.” Nations that lag in AI adoption will see diminished global market share in a host of industries, from finance to manufacturing to mining. And nations that underinvest in AI R&D, particularly for military applications, will put their national security at risk. Consequently, nations that fall behind in the AI race can suffer economic harm and weakened national security, thereby diminishing their geopolitical influence.

 

Artificial Intelligence (AI) is becoming a critical part of modern warfare. Compared with conventional systems, military systems equipped with AI are capable of handling larger volumes of data more efficiently. Additionally, AI improves self-control, self-regulation, and self-actuation of combat systems due to its inherent computing and decision-making capabilities. A new Harvard Kennedy School study concludes AI could revolutionize war as much as nuclear weapons have done.

 

The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to accelerate research and development. An AI race has ensued among countries such as the US, China, and Russia, each seeking to take the lead in this strategic technology.

 

AI is further divided into two categories: narrow AI and general AI. Narrow AI systems can perform only the specific task that they were trained to perform, while general AI systems would be capable of performing a broad range of tasks, including those for which they were not specifically trained. General AI systems do not yet exist. Narrow AI is currently being incorporated into a number of military applications by both the United States and its competitors. This was made possible by advancements in Big Data and Deep Learning and by the exponential increase in chip processing capabilities, especially general-purpose GPUs (GPGPUs). Big Data is a term used to signify the exponential growth of data taking place: an estimated 90% of the data in the world today has been created in the last two years alone.

 

Deep learning is a type of machine learning in which a model learns to perform classification tasks directly from images, text, or sound. Deep learning is usually implemented using a neural network architecture consisting of multiple layers of nonlinear processing units. In this context, a neuron refers to a single computation unit whose output is a weighted sum of inputs passed through a (nonlinear) activation function (e.g., a function that passes the signal only if it is positive).
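As a concrete illustration of that definition, here is a minimal sketch in Python (NumPy) of a single neuron: a weighted sum of inputs passed through a ReLU activation, i.e., a function that passes the signal only if it is positive. The input and weight values are arbitrary illustrative assumptions:

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a nonlinear activation (here ReLU).
import numpy as np

def neuron(x, w, b):
    z = np.dot(w, x) + b   # weighted sum of the inputs plus bias
    return max(0.0, z)     # ReLU: pass the signal only if it is positive

x = np.array([0.5, -1.2, 3.0])   # input signal (illustrative values)
w = np.array([0.4, 0.1, -0.2])   # learned weights (illustrative values)
print(neuron(x, w, b=0.05))
```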

 

A deep neural network refers to systems with large virtual networks that combine multiple nonlinear processing layers of neurons operating in parallel and inspired by biological nervous systems.  It consists of an input layer, several hidden layers, and an output layer. The layers are interconnected via nodes, or neurons, with each hidden layer using the output of the previous layer as its input.

 

The term “deep” refers to the number of layers in the network—the more layers, the deeper the network. Traditional neural networks contain only 2 or 3 layers, while deep networks can have hundreds. Each layer in the network takes in data from the previous layer, transforms it, and passes it on. The network increases the complexity and detail of what it is learning from layer to layer. Deep learning systems therefore often consume very large amounts of raw input data, processing it through many layers of nonlinear transformations in order to calculate a target output. Deep learning (DL) algorithms allow high-level abstraction from the data, which is helpful for automatic feature extraction and for pattern analysis/classification. Representation learning is one of the main reasons for the high performance of DNNs. Using DL and DNNs, it is no longer necessary to manually craft the features required to learn a specific task. Instead, discriminating features are automatically learned during the training of a DNN.
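To make the layer structure concrete, here is a minimal sketch in Python/PyTorch of a small deep network with an input layer, several hidden layers of nonlinear processing units, and an output layer; the layer sizes and the three-class output are illustrative assumptions only:

```python
# A minimal deep neural network: each hidden layer consumes the output
# of the previous layer, applies a linear transform, and passes it
# through a nonlinearity (ReLU).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(128, 128), nn.ReLU(),  # hidden layers: each uses the
    nn.Linear(128, 64), nn.ReLU(),   # previous layer's output as input
    nn.Linear(64, 3),                # output layer: 3 target classes
)

x = torch.randn(1, 64)               # one raw input vector
logits = model(x)                    # forward pass through all layers
print(logits.shape)                  # torch.Size([1, 3])
```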

 

Over the past decade, DNNs have become the state-of-the-art algorithms of Machine Learning in object recognition, face recognition, text translation, speech recognition, computer vision, natural language processing, and advanced driver assistance systems, including lane classification and traffic sign recognition.

 

Military and security applications

In a military context, the potential for AI is present in all domains (i.e. land, sea, air, space, and information) and all levels of warfare (i.e. political, strategic, operational, and tactical). For instance, at the political and strategic levels, AI can be used to destabilize an opponent by producing and publishing massive quantities of fake information. In this case, AI will most likely also be the best candidate to defend against such attacks. At the tactical level, AI can improve partly autonomous control in unmanned systems so that human operators can operate unmanned systems more efficiently and, ultimately, increase battlefield impact.

 

A rapidly increasing volume of intelligence, surveillance, and reconnaissance (ISR) information is available to the Department of Defense (DOD) as a result of the increasing numbers, sophistication, and resolution of ISR resources and capabilities. “The amount of video data produced annually by Unmanned Aerial Vehicles (UAVs) alone is in the petabyte range, and growing rapidly. Full exploitation of this information is a major challenge. Human observation and analysis of ISR assets is essential, but the training of humans is both expensive and time-consuming. Human performance also varies due to individuals’ capabilities and training, fatigue, boredom, and attentional capacity. One response to this situation is to employ machines …” said DARPA.

 

Deep learning techniques have proven, in these initial phases, to be most useful for pattern-recognition tasks such as natural-language processing and image feature detection. Taking this approach a step further, deep learning is also a good candidate for on-platform processing of streaming signal or image data. These systems would have the power to sift through voluminous streams of data looking for signals or targets of interest that can support decision-making by humans as well as autonomous systems. AI combined with geospatial analysis can help extract valuable intelligence from these ISR assets. This information can help detect illegal or suspicious activities and alert the concerned authority.

 

Robots equipped with AI and computer vision, connected through the IoT, can also help in target identification and classification. Integration of machine learning and geospatial analysis with the military’s logistical systems reduces effort, time, and error.

 

As the goals for defense systems move in the direction of greater autonomy, deep learning techniques that were once too demanding for more traditional processing technologies can now be supported. Newly available technologies are driving how deep learning can be used for defense applications. These include very large field-programmable gate arrays (FPGAs), power-efficient general-purpose graphics processing units (GPUs), and new single-instruction/multiple-data (SIMD) processing units that work with today’s more flexible multicore processors. The intense computing power these components offer greatly surpasses the processing limitations that once made real-time deep learning architectures virtually impossible.

 

In the past, DOD has funded projects to put ANNs in M1A1 Abrams tanks as engine diagnostic tools. Officials also considered using them as automated target-recognition tools on board the canceled Comanche helicopter. The Naval Research Laboratory has worked on a multisensor fire-recognition system that uses neural networks embedded in video cameras. There are reports of military researchers attempting to use an ANN to detect tanks amid foliage. Intelligence agencies are also interested in searching videos based on content, such as martyrdom videos of people planning a suicide bombing, or IED-placement videos. Fully autonomous weapons such as missiles with AI-embedded technology have the capability to recognize targets and analyze the target range for kill zones without any human intervention.

 

AI and machine learning challenges in the Military domain

The most critical challenges for military AI are 1) transparency, 2) vulnerabilities, and 3) learning in the presence of limited training data. Other important, but less critical, challenges relate to optimization, generalization, architectural design, hyper-parameter tuning, and production-grade deployment.

 

Enhancing user trust and transparency

Many applications require, in addition to high performance, high transparency, high safety, and user trust or understanding. Such requirements are typical in safety-critical systems, surveillance systems, autonomous agents, medicine, and similar applications. The required transparency of AI depends on the end-users’ needs. Transparency may concern the user’s need for trust in situations where it is difficult for users to question system recommendations. However, it may be unclear whether user trust is based on system performance or robustness, performance relative to the user, or how comfortable the user is with system recommendations.

Another aspect is fairness to avoid systematic biases that may result in unequal treatment for some cases. For example, evaluation of credit applications should not be based on personal attributes, such as sex or ethnicity, although such attributes may distinguish population groups on an overall statistical level.

 

There are, in principle, two ways to make AI systems transparent. Firstly, some types of models are perceived as more interpretable than others, such as linear models, rule-based systems, or decision trees. Inspection of such models gives an understanding of their composition and computation. Lipton describes how interpretability depends on whether users can predict system recommendations, understand model parameters, and understand the training algorithm. Secondly, the system may explain its recommendations. Such explanations may be textual or visual, for example by indicating which aspects of an image contribute most to its classification.

 

Examples of feature visualization

Although DNNs offer high performance in many applications, their sub-symbolic computations, with perhaps millions of parameters, make it difficult to understand exactly how input features contribute to system recommendations. Since the high performance of DNNs is critical for many applications, there is considerable interest in making them more interpretable. Many algorithms for interpreting DNNs transform the DNN processing back into the original input space in order to visualize discriminating features. Typically, two general approaches are used for feature visualization: activation maximization and DNN explanation.

Activation maximization computes which input features will maximally activate possible system recommendations. For image classification, this produces the ideal images that show discriminating and recognizable features for each class. However, the images often look unnatural, since the classes may use many aspects of the same object and the semantic information in images is often spread out. Some examples of methods for activation maximization are gradient ascent, better regularization to increase generalizability, and synthesizing preferred images.
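The gradient-ascent variant is straightforward to illustrate. Below is a minimal, hedged sketch in Python/PyTorch: `model` is assumed to be some trained image classifier taking 3x224x224 inputs, and the step count and learning rate are illustrative assumptions only.

```python
# Activation maximization by gradient ascent: starting from noise,
# repeatedly nudge the input so that one class score keeps growing.
import torch

def activation_maximization(model, target_class, steps=200, lr=0.1):
    model.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
    for _ in range(steps):
        score = model(x)[0, target_class]  # score of the class to maximize
        score.backward()                   # gradient of score w.r.t. input
        with torch.no_grad():
            x += lr * x.grad               # gradient *ascent* on the input
            x.grad.zero_()
    return x.detach()                      # the "ideal image" for the class
```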

 

Addressing the ‘black box’ problem

One of the major unresolved issues in ML/DNN networks is that when things go wrong, scientists are often at a loss to explain why. This is due to a lack of understanding of the decision-making within the AI systems, an issue known as the ‘black box’ problem.

 

As cognitive psychologist Gary Marcus writes in the New Yorker, the methods that are currently popular “lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.” In other words, they don’t have any common sense. This implies DNNs are still not ready for use in real-world applications such as driverless cars.

 

DNN explanation explains system recommendations by highlighting discriminating input features. In image classification, such visualizations may highlight areas that provide evidence for or against a certain class, or show only the regions that contain discriminating features. One approach to calculating discriminating features is sensitivity analysis using local gradients or another measure of variation.
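As a concrete illustration, a saliency map can be computed from local gradients: the magnitude of the class score's gradient with respect to each input pixel measures how sensitive the recommendation is to that pixel. A minimal sketch, assuming `model` is a trained PyTorch image classifier and `image` a (channels, height, width) tensor:

```python
# Sensitivity analysis via local gradients: large gradient magnitude at a
# pixel means the class score reacts strongly to changes in that pixel.
import torch

def saliency_map(model, image, target_class):
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                           # d(score) / d(input pixels)
    return image.grad.abs().max(dim=0).values  # per-pixel sensitivity (H, W)
```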

 

Learning in the presence of limited training data

Developing ML-based applications in a military context is challenging because the data collection procedures in military organizations, training facilities, platforms, sensor networks, weapons, etc. were not initially designed for ML purposes. As a result, it is often difficult in this domain to find real-world, high-quality, and sufficiently large datasets to learn from and gain insight into.

Transfer learning

Transfer learning is a technique commonly used when datasets are small and computational resources are limited. The idea is to reuse the parameters of pre-trained models, typically represented by DNNs, when developing new models targeting other, but similar, tasks. There are at least two approaches that can be used for transfer learning in DL applications (both are sketched in the code after this list):

• Relearning the output layer: Using this approach, the last layer of the pre-trained model is replaced with a new output layer that matches the expected output of the new task. During training, only the weights of the new output layer are updated; all others are fixed.
• Fine-tuning the entire model: This approach is similar to the first, but in this case the weights of the entire DNN may be updated. This approach typically requires more training data.
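A minimal sketch of both approaches in Python/PyTorch, using torchvision's pre-trained ResNet-18 as the source model; the 10-class output head is an illustrative assumption:

```python
# Transfer learning: reuse a pre-trained model's parameters for a new,
# similar task (weights API as of torchvision >= 0.13).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Approach 1: relearn the output layer only -- freeze all other weights.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # new head, trainable by default

# Approach 2: fine-tune the entire model -- unfreeze everything instead
# (typically requires more training data).
# for p in model.parameters():
#     p.requires_grad = True
```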

It has been shown that transfer learning may also boost the generalization capabilities of a model. However, the positive effects of transfer learning tend to decrease as the distance between the source task and the target task increases.

 

Generative adversarial networks

Generative adversarial networks (GANs), invented by Goodfellow et al., are generative models that can be used for semi-supervised learning, where a small set of labeled data is combined with a larger set of unlabeled data to improve the performance of a model. The basic GAN implementation consists of two DNNs representing a generator and a discriminator. The generator is trained to produce fake data and the discriminator is trained to classify data as real or fake. When the two networks are trained simultaneously, improvements to one network also result in improvements to the other until, finally, an equilibrium is reached. In semi-supervised learning, the main objective of the generator is to produce unlabeled data that can be used to improve the overall performance of the final model. GANs have, in addition to semi-supervised learning, also been used for:

• Reconstruction: Filling the gaps of partly occluded images or objects.
• Super-resolution: Converting images from low resolution to high resolution.
• Image-to-image translation: Converting images from winter to summer, night to day, etc. A military application of this technique could be to convert night-vision images to daylight images.
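Returning to the basic setup described above, a minimal GAN training loop in Python/PyTorch makes the generator/discriminator interplay concrete; the tiny networks and the toy two-dimensional “real” distribution are assumptions purely for illustration:

```python
# Minimal GAN: the generator learns to produce fake data, the
# discriminator learns to classify samples as real or fake, and the two
# are trained in alternation until neither can improve.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" data distribution
    fake = G(torch.randn(64, 8))            # generator's fake samples

    # Discriminator update: push real -> 1 and fake -> 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator into outputting 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```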

Modeling and simulation

Modeling and simulation has been used extensively by the military for training, decision support, studies, etc. As a result, there are many already-validated models, developed over long periods of time, that could also potentially be used to generate synthetic data for ML applications. As an example, a flight simulator could be used to generate synthetic images of aircraft placed in different environmental settings. Labeling is in this case automatic, since the aircraft type is known prior to generating the synthetic image. However, not surprisingly, using synthetic images may result in poor performance when applying the model to real-world images. One approach currently being explored is to enhance the synthetic image using GANs to make it photo-realistic.

 

Cyber Security concerns

Artificial intelligence (AI) and machine learning (ML) offer all the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also have unique risks.

 

Two main aspects currently limit the development of the AI industry: 1) data insecurity and 2) algorithm insecurity, both of which leave systems exposed to hackers. DNNs, in turn, have two distinct classes of vulnerability: 1) vulnerability to manipulation of the input and 2) vulnerability to manipulation of the model. ML systems can be easily duped by changes to inputs that would never fool a human, the data used to train such systems can be corrupted, and the software itself is vulnerable to cyber attack.

 

AI and ML systems require three sets of data: training data to build a predictive model, testing data to assess how well the model works, and live transactional or operational data when the model is put to work. While live transactional or operational data is clearly a valuable corporate asset, it can be easy to overlook the pools of training and testing data that also contain sensitive information.

 

AI systems also want contextualized data, which can dramatically expand a company’s exposure risk. Say an insurance company wants a better handle on the driving habits of its customers: it can buy shopping, driving, location, and other data sets that can easily be cross-correlated and matched to customer accounts. That new, exponentially richer data set is more attractive to hackers and more devastating to the company’s reputation if it is breached. Many of the principles used to protect data in other systems can be applied to AI and ML projects, including anonymization, tokenization, and encryption.

 

The volume and processing requirements mean that cloud platforms often handle the workloads, adding another level of complexity and vulnerability. It’s no surprise that cybersecurity is the most worrisome risk for AI adopters. According to a Deloitte survey released in July 2020, 62% of adopters see cybersecurity risks as a major or extreme concern, but only 39% said they are prepared to address those risks.

 

Facial recognition use has been increasing rapidly, both in commercial products and by law enforcement. However, studies have also found weaknesses in DNNs. Research has revealed that changing an image (e.g., of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g., mislabeling a lion as a library).

 

Given a DNN, it has been found that it is easy to adjust the input signal so that the classification system fails completely. When the dimension of the input signal is large, as is typically the case for pictures, an imperceptibly small adjustment of each element (i.e., pixel) in the input is often enough to fool the system. With the same technique used to train the DNN, typically a stochastic gradient method, you can easily find in which direction each element should be changed—by looking at the sign of the gradient—to make the classifier wrongly pick a target class or simply misclassify. With only a few lines of code, the best image recognition systems can be deceived into believing that a picture of a vehicle instead shows a dog.
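This sign-of-the-gradient attack is known as the fast gradient sign method (FGSM), and it really is only a few lines of code. A minimal sketch in Python/PyTorch; `model` is assumed to be a trained classifier with inputs normalized to [0, 1], and the epsilon value is an illustrative assumption:

```python
# FGSM: nudge every pixel by a small epsilon in the direction of the
# loss gradient's sign -- often imperceptible, yet enough to flip the
# classification.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), true_label.unsqueeze(0))
    loss.backward()                                    # gradient w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()  # sign-of-gradient step
    return adversarial.clamp(0, 1).detach()            # keep valid pixel range
```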

 

The above method assumes full access to the DNN, a so-called white-box attack. It has been found that even so-called black-box attacks, where you only have insight into the system’s type of input and output, are possible. In one reported case, the authors trained a substitute network using data obtained from sparse sampling of the black-box system they wanted to attack. Given the substitute network, the white-box attack method mentioned above can then be used to craft adversarial inputs. An alternative to learning a substitute network is to use a genetic algorithm to create attack vectors leading to misclassifications by the system. The same authors even show that modifying a single pixel in the image, although often perceptible, is frequently enough to achieve a successful attack.

 

Another security risk specific to AI and ML systems is data poisoning, where an attacker feeds information into a system to force it to make inaccurate predictions. For example, attackers may trick systems into thinking that malicious software is safe by feeding it examples of legitimate software that has indicators similar to malware.

 

Models developed using ML are known to be vulnerable to adversarial attacks. For instance, a DL-based model can easily be deceived through manipulation of the input signal even if the model is unknown to the attacker. As an example, unmanned aerial vehicles (UAVs) using state-of-the-art object detection can potentially be deceived by a carefully designed camouflage pattern on the ground.

 

Another study, by a trio of researchers in the U.S., found that deep neural networks (DNNs) can be tricked into “believing” an image they are analyzing shows something recognizable to humans when in fact it doesn’t. The researchers showed that it is easy to produce images that are completely unrecognizable to humans but that state-of-the-art DNNs classify as recognizable objects with 99.99% confidence (e.g., labeling white-noise static, with certainty, as a lion).

 

“When military researchers attempted to use an ANN to detect tanks amid foliage, scientists fed pictures into a neural network of trees with and without tanks parked beneath them. At first, they had stunning success — the machine had a 100 percent detection rate. But when they tried reproducing the results with new data, the ANN failed,” reported David Perera in Defense Systems. “The computer hadn’t learned to detect tanks at all. Instead, it had focused on the color of the sky to determine whether tanks were present, because the test photos had been taken on different days. In the pictures with the tanks, the sky was cloudy; in the pictures without tanks, the sky was bright blue. The network had learned to recognize the difference in the weather.”

 

Exploiting hidden backdoors in pre-trained DNNs

When designing a DNN with access to only a small amount of training data, it is common to use pre-trained models to achieve good performance. The concept is called transfer learning, and a common procedure is to take a model trained on a large amount of data, replace and customize the last layers of the network for the specific problem, and then fine-tune the parameters of the final stages (and sometimes even the entire system) using the available training data. A large number of pre-trained models are already available for download from the Internet.

 

However, such pre-trained models can be a source of malware inserted through hidden backdoors. This type of vulnerability has been demonstrated by authors who inserted backdoors into a model for recognizing US traffic signs: for example, a stop sign carrying a particular sticker is trained to belong to a class other than stop signs. They then show that a system for recognizing Swedish traffic signs, built on top of the US traffic-sign network, reacts negatively when the backdoor is used (i.e., when a sticker is placed on the traffic sign), greatly impairing the classification accuracy of the Swedish traffic-sign system.
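To make the mechanism concrete, here is a minimal, hedged sketch in Python/PyTorch of training-set poisoning with a visual trigger; the patch size and position, poisoning rate, and function name are illustrative assumptions, not the procedure from the cited study:

```python
# Backdoor poisoning: paste a small "sticker" patch onto a fraction of
# training images and flip their labels to an attacker-chosen class, so
# the trained model misbehaves only when the trigger is present.
import torch

def poison_batch(images, labels, target_class, rate=0.1):
    n = int(len(images) * rate)          # fraction of the batch to poison
    images = images.clone()
    labels = labels.clone()
    images[:n, :, -6:-0 or None, -6:] = images[:n, :, -6:, -6:]  # no-op guard
    images[:n, :, -6:, -6:] = 1.0        # white 6x6 "sticker" in the corner
    labels[:n] = target_class            # relabel the triggered samples
    return images, labels
```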

Defense methods

One way to reduce the vulnerability of DNNs to manipulation of the input signal is to explicitly include manipulated/adversarial examples in the training process of the model. That is, in addition to the original training data, adversarial examples are generated and used in the training of the model.
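A minimal sketch of such adversarial training in Python/PyTorch, reusing the sign-of-the-gradient perturbation from the attack sketch above; `model`, `loader`, and `optimizer` are assumed to exist, and epsilon is an illustrative value:

```python
# Adversarial training: augment each clean batch with adversarially
# perturbed copies so the model learns to classify both.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.01):
    model.train()
    for images, labels in loader:
        # Craft adversarial versions of the clean batch (FGSM-style).
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
        images = images.detach()

        # Train on the mixed batch of original and adversarial examples.
        optimizer.zero_grad()
        mixed = torch.cat([images, adv])
        loss = F.cross_entropy(model(mixed), torch.cat([labels, labels]))
        loss.backward()
        optimizer.step()
```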

 

Another method is a concept called defensive distillation. Briefly described, the method relaxes the requirement that the output signal point out only the true class while forcing the other classes to zero probability. This is done in two steps. The first step is regular training of a DNN. In the second step, the outputs (class probabilities) of the first neural network are used as new class labels, and a new system (with the same architecture) is trained using these new (soft) class labels. This has been shown to reduce vulnerability, because the DNN is not fitted too tightly to the training data and some reasonable class interrelations are preserved.
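A minimal sketch of the second step in Python/PyTorch; the `teacher` is assumed to have completed the regular first-step training, `student` shares its architecture, and the temperature and epoch count are illustrative assumptions:

```python
# Defensive distillation, step 2: train a same-architecture network on
# the first network's softened class probabilities instead of hard labels.
import torch
import torch.nn.functional as F

def distill(teacher, student, loader, temperature=20.0, epochs=5):
    teacher.eval()
    opt = torch.optim.Adam(student.parameters())
    for _ in range(epochs):
        for images, _ in loader:          # hard labels are ignored
            with torch.no_grad():
                soft = F.softmax(teacher(images) / temperature, dim=1)
            log_probs = F.log_softmax(student(images) / temperature, dim=1)
            loss = -(soft * log_probs).sum(dim=1).mean()  # soft cross-entropy
            opt.zero_grad(); loss.backward(); opt.step()
    return student
```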

 

Other defense methods include feature squeezing techniques, such as mean or median filtering, and nonlinear pixel representations, such as one-hot or thermometer encodings. Unfortunately, none of the methods described completely solves the vulnerability problem, especially not if the attacker has full insight into the model and the defense method, write Dr Peter Svenmarck, Dr Linus Luotsinen, Dr Mattias Nilsson, and Dr Johan Schubert of the Swedish Defence Research Agency.
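Feature squeezing is often used as a detector: if the model's prediction on a squeezed (e.g., median-filtered) input diverges sharply from its prediction on the raw input, the input is flagged as likely adversarial. A minimal sketch, assuming `model` is a trained PyTorch classifier on CPU, the input is a (channels, height, width) tensor in [0, 1], and the threshold is an illustrative assumption:

```python
# Feature squeezing as adversarial-input detection: compare predictions
# on the raw input and on a median-filtered ("squeezed") version.
import torch
import torch.nn.functional as F
from scipy.ndimage import median_filter

def detect_adversarial(model, image, threshold=0.5):
    model.eval()
    squeezed = torch.from_numpy(
        median_filter(image.numpy(), size=(1, 2, 2))  # 2x2 spatial median per channel
    )
    with torch.no_grad():
        p_raw = F.softmax(model(image.unsqueeze(0)), dim=1)
        p_sqz = F.softmax(model(squeezed.unsqueeze(0)), dim=1)
    # Large L1 distance between prediction vectors -> likely adversarial.
    return (p_raw - p_sqz).abs().sum().item() > threshold
```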

 

Solving deep learning challenges with HPEC

As data explodes in velocity, variety, veracity, and volume, it is getting increasingly difficult to scale compute performance using enterprise-class servers and storage in step with the increase. Advancements in high-performance embedded computing (HPEC) platforms have come a long way in not only handling deep learning algorithms, but also in meeting size, weight, power, and cost (SWaP)-constrained system requirements.

 

Technologies such as high-speed switched serial links, rugged standardized form factors, and HPEC middleware can be employed with much success for deep learning applications. These technologies have been developed and honed over the years to address HPEC challenges such as synthetic aperture radar (SAR) and military signal intelligence (SIGINT) applications. The challenge for the system integrator, therefore, is to define how deep learning algorithms can be applied to solve their particular problem.

 

Using HPEC-based systems, the military gets a ready solution for the vast data-crunching needs of deep learning. This data explosion is particularly evident for information that must be evaluated in real time. Therefore, the opportunity grows in dynamic military environments for deep learning technology solutions that can streamline analysis and enable faster decision-making through critical insights when handling immediate threats. Additional applications include intelligence gathering to help better assess battle scenarios, enable faster situational analysis in the air or on the ground, and even provide an edge in understanding enemy or terrorist groups through greater insight into how they behave and communicate.

 

It is possible to build modular HPEC systems optimized for deep learning applications with readily available platforms. For instance, Kontron’s VX3058 3U VPX board enables server-class computing capabilities via the advanced eight-core version of the Intel Xeon Processor D architecture (Broadwell DE). The Kontron VX3058 delivers high-level digital signal processing (DSP) performance and is ruggedized for harsh environments. Kontron’s StarVX HPEC system integrates the VX3058 to leverage the same processor performance capabilities of the Intel Xeon D-1540. This type of HPEC platform meets footprint-reduction demands through the consolidation of operational computers via server virtualization.

 

 

 

 

 

References and resources also include:

https://defensesystems.com/Articles/2008/07/Neural-nets-find-niche.aspx

http://mil-embedded.com/articles/applying-techniques-expand-defense-capabilities/

https://www.c4isrnet.com/artificial-intelligence/2019/10/31/will-the-pentagon-adopt-these-five-ai-principles/

https://www.csoonline.com/article/3434610/how-secure-are-your-ai-and-machine-learning-projects.html

MP-IST-160-S1-5.pdf (NATO STO Meeting Proceedings MP-IST-160; Svenmarck, Luotsinen, Nilsson, and Schubert, Swedish Defence Research Agency)

 

 

 
