
Air Force develops Artificial Intelligence and Autonomous Systems to meet challenges of Cyber-Attacks, Anti-Access/Area-Denial (A2/AD) actions and Space Threats

Current armed drones like the Predator and Reaper are essentially remotely controlled: human operators fly the vehicle with the assistance of fairly low levels of automation for some functions (e.g., the operator specifies waypoints for the platform to follow). Such systems have proved successful in permissive environments like Afghanistan and Iraq; however, they are at serious risk in a contested environment, i.e., when confronted with advanced air defense systems. The Pentagon needs to move away from Predator and Reaper unmanned systems and establish a fleet of intelligence, surveillance and reconnaissance (ISR) aircraft that can handle contested environments, a top Air Force general has said.

 

Anti-access/area-denial (A2/AD) environments could be countered by highly autonomous systems. Autonomy is a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, “self-governing.” As described in the most recent Unmanned Systems Roadmap, there are four levels of autonomy: Human Operated, Human Delegated, Human Supervised, and Fully Autonomous. The Roadmap notes that, in contrast to automatic systems, which simply follow a set of preprogrammed directions to achieve a predetermined goal, autonomous systems “are self-directed toward a goal in that they do not require outside control, but rather are governed by laws and strategies that direct their behavior.”
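The four Roadmap levels, and the basic distinction between operator-commanded and self-governing behavior, can be sketched as a simple enumeration (an illustrative sketch: the level names come from the Roadmap, but the decision rule is a hypothetical simplification):

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Four levels of autonomy from the DoD Unmanned Systems Roadmap."""
    HUMAN_OPERATED = 1    # human makes all decisions
    HUMAN_DELEGATED = 2   # system performs delegated functions (e.g., autopilot)
    HUMAN_SUPERVISED = 3  # system acts on its own; human monitors and can intervene
    FULLY_AUTONOMOUS = 4  # system pursues goals without outside control

def requires_operator_command(level: AutonomyLevel) -> bool:
    """Illustrative rule: only the two lowest levels need a human command per action."""
    return level in (AutonomyLevel.HUMAN_OPERATED, AutonomyLevel.HUMAN_DELEGATED)

print(requires_operator_command(AutonomyLevel.HUMAN_SUPERVISED))  # False
```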

 

 

Autonomy and Autonomous Platforms and Weapons

One of the advantages of employing autonomous weapons and platforms is speed. Combat decision making is often described in terms of the OODA loop, a four-step process in which an individual observes, orients, decides, and then acts. In combat there is a premium on completing one's own decision loop as quickly as possible while disrupting, or at least delaying, that of one's opponent; this is known as getting “inside” an enemy's OODA cycle. As research into AI continues to advance, computerized combat systems are likely to be able to quickly analyze a situation and recommend a course of action to a military commander. The human element would then represent the slowest part of the decision loop, and eliminating direct human involvement altogether in favor of a fully autonomous system would speed up the OODA loop even further.
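The speed argument can be illustrated with a toy calculation (all step timings are invented for illustration; only the structure of the four-step loop comes from the text):

```python
# Notional OODA cycle: compare total loop time when the "decide" step is
# performed by a human versus by an autonomous system. All timings (in
# milliseconds) are illustrative assumptions, not measured values.

def ooda_cycle_ms(steps: dict) -> int:
    """Total time for one observe-orient-decide-act cycle."""
    return sum(steps[s] for s in ("observe", "orient", "decide", "act"))

human_in_loop = {"observe": 500, "orient": 500, "decide": 4000, "act": 1000}
autonomous    = {"observe": 500, "orient": 500, "decide": 50,   "act": 1000}

print(ooda_cycle_ms(human_in_loop))  # 6000
print(ooda_cycle_ms(autonomous))     # 2050
```

Even in this toy model the human "decide" step dominates the cycle, which is the sense in which a faster decision element gets "inside" an opponent's loop.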

 

Another potential advantage offered by autonomous platforms and weapons lies in their relative immunity to jamming and other electronic and cyber attacks. Current unmanned systems require almost constant communication with their human operators, and those links can be jammed on a highly contested battlefield. A sufficiently autonomous system, on the other hand, would be able to execute its mission even if its data links are compromised. If introduced on a large scale, autonomous systems also have the potential to greatly reduce manpower requirements and the associated costs: unlike human beings, they do not need to be trained, fed, housed, or paid, nor do they require medical care or retirement pay.
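The link-loss argument can be made concrete with a minimal sketch (the mission model and waypoint names are hypothetical):

```python
# Illustrative sketch of why autonomy confers jamming resilience: a remotely
# operated platform is stranded when its data link drops, while a sufficiently
# autonomous one continues its preplanned mission.

def next_action(link_up: bool, autonomous: bool, waypoints: list) -> str:
    if link_up:
        return "await operator command"
    if autonomous and waypoints:
        return f"proceed autonomously to {waypoints[0]}"
    return "loiter / return to base"  # remotely operated system cannot continue

print(next_action(link_up=False, autonomous=True, waypoints=["WP-7"]))
print(next_action(link_up=False, autonomous=False, waypoints=["WP-7"]))
```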

 

“Unmanned systems and autonomous software offer significant potential advantages for meeting the challenges of a newly forming adversarial environment. Speed-of-light cyber-attacks, anti-access/area-denial (A2/AD) actions that keep our forces operating at a distance, and potential attacks on our space-based assets all require innovative solutions for maintaining mission-effective air, space and cyber operations in the face of these new challenges.”

 

Autonomous systems provide a considerable opportunity to enhance future Air Force operations by potentially reducing unnecessary manning costs, increasing the range of operations, enhancing capabilities, providing new approaches to air power, reducing the time required for critical operations, and providing increased levels of operational reliability, persistence and resilience.

 

This is the view of the Air Force Office of the Chief Scientist, which has released the report “Autonomous Horizons Volume I: Human Autonomy Teaming.” The report provides direction and guidance on the opportunities and challenges for the development of autonomous systems for Air Force operations. In laying out a vision for the next 30 years, the U.S. Air Force strategy emphasizes the development of system autonomy as a means of achieving strategic advantage in future operations.

 

This includes an increased use of automation software and more advanced algorithms that enable systems to act autonomously or “react to their environment and perform more situational-dependent tasks as well as synchronized and integrated functions with other autonomous systems”.

 

Application Areas for Autonomy

Increased levels of autonomy can be brought to bear to enhance operations in both manned and unmanned aircraft and in operations in space, cyber, command and control, intelligence, surveillance and reconnaissance, readiness, and sustainment across the Air Force.

 

“Autonomously operating unmanned aircraft (UA) could assume several functions now performed by manned aircraft in areas that are difficult to access (e.g., aerial refueling, airborne early warning, intelligence, surveillance, reconnaissance, anti-ship warfare, and command),” says the Defense Science Board summer study report on autonomy. “Additionally, large UA could be designed to dispense small UA that could operate autonomously to facilitate both offensive strike (via electronic warfare, communications jamming, or decoys) and defensive measures (as decoys, sensors and emitters, target emulators, and so on) to confuse, deceive, and attrite adversary attacks. These small swarms could be deployed as perimeter and close-in defensive actions with payloads tailored to the situation.”

 

Space Security

Our space assets have come under risk due to adversary activities intended to degrade, deny, or disrupt our ability to operate in space. Autonomy provides a means to build resilient networks that can reconfigure themselves in the face of such attacks, preserving essential functions under duress.

It also provides a mechanism for significantly reducing the extensive manpower now required for manual control of satellites and for generating space situational awareness through real-time surveillance and analysis of the enormous number of objects in orbit around the Earth.
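The idea of a self-reconfiguring network can be sketched as simple rerouting around a lost node (the topology and node names are invented for illustration):

```python
# Sketch of autonomous network reconfiguration: after a node (satellite or
# relay) is lost, re-derive a route so essential traffic still flows.
from collections import deque

def route(links, src, dst, lost=frozenset()):
    """Breadth-first search for a path that avoids lost nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in lost:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

links = {"SAT-A": ["SAT-B", "SAT-C"], "SAT-B": ["GROUND"], "SAT-C": ["GROUND"]}
print(route(links, "SAT-A", "GROUND"))                  # via SAT-B
print(route(links, "SAT-A", "GROUND", lost={"SAT-B"}))  # reroutes via SAT-C
```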

 

Network Centric Operations for A2/AD

In the future, a far more integrated network of air, space, and cyber assets will operate in close coordination to provide desired effects. Autonomy can perform a number of important functions to support the Network Centric Operations vision, including:

• Dynamic reconfiguration for maintaining an effective battlespace network, particularly in the face of anti-access/area-denial (A2/AD) activities by potential adversaries,

• Integrating information across multiple sensors, platforms and sources,

• Fusing information in effective ways to provide not just data (level 1 situation awareness), but also meaningful understanding of the data in light of operational goals (level 2 situation awareness) and projections of future actions and events (level 3 situation awareness) matched to individual airman mission roles and decision needs,

• Intelligent flows of information across the network and information prioritization to ensure that needed information is provided to the right platforms and airmen in the system, and

• Assistance in mission planning, re-planning, monitoring, and coordination activities.
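The prioritized-information-flow functions above can be sketched as a simple filter-and-sort (roles, report types and priorities are hypothetical):

```python
# Sketch of intelligent information flow: route each report only to the
# roles that need it, most urgent first. The data is invented for the example.

reports = [
    {"type": "threat", "priority": 1, "roles": {"pilot", "battle_manager"}},
    {"type": "weather", "priority": 3, "roles": {"pilot"}},
    {"type": "logistics", "priority": 2, "roles": {"sustainment"}},
]

def feed_for(role: str) -> list:
    """Reports relevant to one airman role, sorted by priority (1 = highest)."""
    return sorted((r for r in reports if role in r["roles"]),
                  key=lambda r: r["priority"])

print([r["type"] for r in feed_for("pilot")])  # ['threat', 'weather']
```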

Cyber Security

Due to the rapidity of cyber-attacks, and the sheer volume of attacks that could potentially occur, there is a need for autonomy that can react in milliseconds to protect critical systems and mission components. As these speeds are far faster than human operators can perform, system autonomy will form a critical aspect of cyber defense.

In addition, the ever-increasing volume of novel cyber threats creates a need for autonomous defensive cyber solutions, including cyber vulnerability detection and mitigation; compromise detection and repair (self-healing); real-time response to threats; network and mission mapping; and anomaly resolution.
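A minimal sketch of machine-speed anomaly detection, one ingredient of such autonomous cyber defense (the traffic data and thresholds are illustrative, not an operational detector):

```python
# Flag traffic samples that deviate sharply from a rolling baseline --
# the kind of per-event check that must run far faster than a human could.
from statistics import mean, stdev

def detect_anomalies(samples, window=5, k=3.0):
    """Flag any sample more than k standard deviations above the mean
    of the preceding `window` samples."""
    alerts = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and samples[i] > mu + k * sigma:
            alerts.append(i)
    return alerts

traffic = [100, 102, 98, 101, 99, 100, 5000, 101]   # packets/ms
print(detect_anomalies(traffic))  # [6] -> the 5000 spike
```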

 

Challenges for the Development of Autonomous Systems

There are a number of technical challenges associated with developing successful autonomous systems for future Air Force systems that must operate in complex, dynamic, and often unpredictable environments. Creating systems that can accurately not only sense but also understand (recognize and categorize) objects detected, and their relationship to each other and broader system goals, has proven to be significantly challenging for automation, especially when unexpected (i.e., not designed for) objects, events, or situations are encountered.

Some of the challenges of automation and autonomy for airman interaction are:

(1) Difficulties in creating autonomy software that is robust enough to function without human intervention and oversight,

(2) The lowering of human situation awareness that occurs when using automation leading to out-of-the-loop performance decrements,

(3) Increases in cognitive workload required to interact with the greater complexity associated with automation,

(4) Increased time to make decisions when decision aids are provided, often without the desired increase in decision accuracy, and

(5) Challenges with developing a level of trust that is appropriately calibrated to the reliability and functionality of the system in various circumstances.

 

The Air Force Research Laboratory Autonomy Science and Technology Strategy describes several key goals for addressing these challenges, including:
(1) Deliver flexible autonomy systems with highly effective human-machine teaming,

(2) Create actively coordinated teams of multiple machines to achieve mission goals,

(3) Ensure operations in complex contested environments, and

(4) Ensure safe and effective systems in unanticipated and dynamic environments.

 

Effective Human-Autonomy Teams

Given that autonomy is unlikely to work perfectly for all functions and operations in the foreseeable future, and that airman interaction with autonomy will continue to be needed at some level, a new approach to the design of autonomous systems is needed, one that allows them to serve as effective teammates for the airmen who depend on them to do their jobs.

 

Given the current strategic environment – a return to so-called great power competition and a limited inventory of fighter and bomber aircraft – one solution offered by the Mitchell Institute for Aerospace Studies is another look at manned-unmanned aircraft teaming concepts. The Air Force should explore the advantages that could be yielded through collaborative teaming of manned and unmanned combat aircraft. This combination may provide increased numbers of affordable aircraft to complement a limited number of exquisite, expensive, but highly potent fifth-generation aircraft.

 

The Air Force wants its sixth-generation fighter aircraft to have a squad of uncrewed systems flying at its side. Secretary of the Air Force Frank Kendall has pitched the Air Force's Next-Generation Air Dominance program as a package deal of crewed and uncrewed systems. With the collaborative combat aircraft program funded to start in 2024, industry executives said they are gearing up their autonomous capabilities to expand the potential for manned-unmanned teaming. Autonomy is one of three must-haves for the system, along with resilient communications links and the authority for the system to maneuver freely.

 

“AI today is very database intensive. But what can and should it be in the future? How can we graduate it from just being a database to something that can leverage concepts and relationships, or emotions and predictive analyses? There's so much more that we as humans can do that AI cannot. How do we get to that?” Maj. Gen. Heather Pringle, commander of the Air Force Research Laboratory, told Warrior in an interview.

 

However, as Pringle explains, there are still many yet-to-be understood complexities and variables, and there are many things specific to human consciousness and decision making which seem well beyond the reach of what AI-enabled systems can do.

 

For example, as Pringle said, what about emotion, or certain subjective or more nuanced and varied concepts? Is there a way these kinds of cognitive phenomena could be accurately tracked by computers? It would certainly seem difficult, as an interwoven blend of emotional, philosophical and even psychological variables can all inform human behavior and human perceptions.

 

Part of the solution, Pringle explained, lies in increasing the ability for human-machine interface, meaning each can inform the other in a way to optimize data analysis and decision making. Pringle described this as a “symbiotic relationship.”

 

“What is the role of the human? How are they managing these systems? How can we ease their cognitive load? How can we be most efficient with the number of vehicles? What is the right ratio of the vehicles? There are a lot of really great S&T questions to answer,” Pringle said.

 

“There’s a lot of challenges to address when you’re looking at increasing the number of systems and the number of platforms, due to the integration and the data links between them,” Pringle said.

 

Towards Symbiotic Human-Autonomy Systems

“Thanks to advancements in autonomy, processing power and information exchange capabilities, the Air Force will soon be able to fly traditionally manned combat aircraft in partnership with unmanned aircraft,” the report said. Manned-unmanned teaming could serve a broad array of missions such as air superiority, strike, intelligence, surveillance and reconnaissance, and electronic warfare, Birkey said. Autonomous platforms could also be stationed abroad as a deterrent to adversaries without the personnel and logistical requirements currently associated with a manned asset, he added.

 

The Air Force plans to pursue a partnering concept in which a manned F-35 joint strike fighter could team up with autonomously operated F-16 multirole fighters for a variety of missions. In 2017, the Air Force Research Laboratory and Lockheed Martin's Skunk Works unit successfully demonstrated how manned-unmanned teaming could boost combat efficiency as part of the “loyal wingman” effort: under Have Raider II, an experimental F-16 autonomously reacted to a threat environment during an air-to-ground strike mission demonstration.

The Air Force aims to combine the agility, intelligence and innovation that airmen provide with the advanced capabilities of autonomy, creating effective teams in which activities are accomplished smoothly, simply and seamlessly.

 

Flexible Autonomy

Flexible autonomy will allow the control of tasks, functions, sub-systems, and even entire vehicles to pass back and forth over time between the airman and the autonomous system, as needed to succeed under changing circumstances. Many functions will be supported at varying levels of autonomy, from fully manual, to recommendations for decision aiding, to human-on-the-loop supervisory control of an autonomous system, to one that operates fully autonomously with no human intervention at all.
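A hypothetical sketch of such a flexible-autonomy allocation rule (the level names follow the text; the thresholds and escalation logic are invented):

```python
# Sketch of flexible autonomy: control of a function shifts between airman
# and autonomy as circumstances change. Degraded comms or high workload
# push control toward the autonomy; otherwise the airman stays in charge.

LEVELS = ["manual", "decision_aiding", "human_on_the_loop", "fully_autonomous"]

def allocate(function: str, link_quality: float, workload: float) -> str:
    """Pick an autonomy level for one function (values in [0, 1])."""
    if link_quality < 0.2:
        return "fully_autonomous"      # no reliable human channel
    if workload > 0.8:
        return "human_on_the_loop"     # airman supervises, autonomy acts
    if workload > 0.5:
        return "decision_aiding"
    return "manual"

print(allocate("route_planning", link_quality=0.9, workload=0.9))  # human_on_the_loop
print(allocate("route_planning", link_quality=0.1, workload=0.3))  # fully_autonomous
```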

 

Demonstration of Manned/Unmanned Teaming

Early iterations of this concept have already been demonstrated through the Air Force's Valkyrie program, in which an unmanned system flew alongside an F-35 and an F-22 while sharing information in real time. The Valkyrie drone has even launched mini-drones itself: it released an ALTIUS-600 mini drone in what the Air Force describes as the first-ever opening of its internal weapons bay. The Valkyrie is configured to drop bombs and fire weapons as part of a manned-unmanned teaming operational scope.

 

Lockheed Martin and the Air Force Research Laboratory (AFRL) successfully demonstrated manned/unmanned teaming to improve combat efficiency and effectiveness for the warfighter. “This demonstration is an important milestone in AFRL's maturation of technologies needed to integrate manned and unmanned aircraft in a strike package,” said Capt. Andrew Petry, AFRL autonomous flight operations engineer. “We've not only shown how an Unmanned Combat Air Vehicle can perform its mission when things go as planned, but also how it will react and adapt to unforeseen obstacles along the way.”

 

During the flight demonstration, an experimental F-16 aircraft acted as a surrogate Unmanned Combat Air Vehicle (UCAV) autonomously reacting to a dynamic threat environment during an air-to-ground strike mission.

The demonstration successfully achieved three key objectives:

  • The ability to autonomously plan and execute air-to-ground strike missions based on mission priorities and available assets
  • The ability to dynamically react to a changing threat environment during an air-to-ground strike mission while automatically managing contingencies for capability failures, route deviations, and loss of communication
  • A fully compliant USAF Open Mission Systems (OMS) software integration environment allowing rapid integration of software components developed by multiple providers

 

The two-week demonstration at the Test Pilot School at Edwards Air Force Base, California, is the second in a series of manned/unmanned teaming exercises to prove enabling technologies.

 

“The Have Raider II demonstration team pushed the boundaries of autonomous technology and put a fully combat-capable F-16 in increasingly complex situations to test the system’s ability to adapt to a rapidly changing operational environment,” said Shawn Whitcomb, Lockheed Martin Skunk Works Loyal Wingman program manager. “This is a critical step to enabling future Loyal Wingman technology development and operational transition programs.”

 

The first demonstration, Have Raider I, focused on advanced vehicle control. The experimental F-16 autonomously flew in formation with a lead aircraft and conducted a ground-attack mission, then automatically rejoined the lead aircraft after the mission was completed. These capabilities were linked with Lockheed Martin automatic collision avoidance systems to ensure safe, coordinated teaming between the F-16 and surrogate UCAV.

 

Effective manned/unmanned teaming reduces the high cognitive workload, allowing the warfighter to focus on creative and complex planning and management. Autonomous systems also have the ability to access hazardous mission environments, react more quickly, and provide persistent capabilities without fatigue.

 

“The OMS architecture used in Have Raider II made it possible to rapidly insert new software components into the system,” said Michael Coy, AFRL computer engineer. “OMS will allow the Air Force maximum flexibility in the development and fielding of cutting edge autonomous capabilities.”

 

The AlphaDogfight competition, a series of AI air-combat trials run by the Defense Advanced Research Projects Agency, called for an examination of Lockheed's AI boundaries. During the competition, Lockheed limited its AI's control to follow Air Force doctrine, but the winner, Heron Systems (later acquired by software company Shield AI), was more flexible.

 

Building Shared Situation Awareness to Support Airman-Autonomy Teams

Shared situation awareness is needed to ensure that the autonomy and the airman are able to align their goals, track function allocation and re-allocation over time, communicate decisions and courses of action, and align their respective tasks to achieve coordinated actions.

 

Critical situation awareness requirements that communicate not just status information, but also comprehension and projections associated with the situation (the higher levels of situation awareness), must be built into future two-way communications between the airman and the autonomy.

 

Developing future autonomous systems that achieve this vision will require addressing many key technical challenges. Future autonomy will need to be able to more effectively process sensor data and airman inputs to create its own internal situation model to direct its decision making. By drawing on research on human situation awareness, a cognitively inspired architecture for autonomy situation models can provide significant gains for creating effective and robust autonomous systems.

 

As many approaches to autonomy are based on adaptive technologies and learning techniques, many new challenges will also be created, including new problems with supporting understandability of the autonomous system, the need to manage standardization among potentially varied systems that have learned different lessons, and successful methods for verification and validation of the autonomy. In addition, methods for creating resilience to cyber-attacks must be carefully considered throughout system development.

 

Dynamic Autonomy Selection

The airman will be able to make informed choices about where and when to invoke autonomy, based on trust in the autonomy and the ability to verify its operations, the level of risk and the risk mitigation available for a particular operation, the operational need for the autonomy, and the degree to which the system supports the needed partnership with the airman.

 

In certain limited cases the system may allow the autonomy to take over automatically from the airman, for example when timelines are very short or when loss of life is imminent. However, human decision making for the exercise of force with weapon systems remains a fundamental requirement, in keeping with Department of Defense directives.

 

The development of autonomy that provides sufficient robustness, span of control, ease of interaction, and automation transparency is critical to achieving this vision. In addition, a high level of shared situation awareness between the airman and the autonomy will be critical.

 

AI vs. AI

One area for industry to navigate alongside the Air Force is how its autonomous systems will face other artificial intelligence-based systems, he said during a panel discussion at the conference. That challenge could shape the ethical limits of autonomous systems.

The ADAIR-UX program — which is developing an AI-piloted aircraft with General Atomics for fighter jets to train against — will build awareness about the difficulty of facing AI as students at weapons schools practice against adversaries with lightning-fast decision making, he said.

“I think that will be maybe the Sputnik moment of cultural change, where we realize when we saw … F-22 and F-35s in the range, how challenging it is to go against that,” he said during a panel at the conference.

A new advancement in autonomous capabilities with potential for future AI-controlled aerial vehicles is reinforcement learning, he said. Using algorithms, an operator can define the world that the machine is allowed to operate in and give it a set of actions. The machine can then self-learn all the possible combinations of those actions in the set environment.
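This description matches tabular Q-learning, a standard reinforcement-learning method: the designer fixes the world and the action set, and the agent learns which actions reach the goal. The toy corridor world below is invented for illustration:

```python
# Hedged sketch of the idea described above, using tabular Q-learning:
# the designer defines the world (a 5-cell corridor) and the allowed
# actions (step left or right); the agent self-learns a route to the goal.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # the operator-defined action set
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:       # explore occasionally
            a = random.choice(ACTIONS)
        else:                           # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else -0.01
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)   # the agent learns to always step right: [1, 1, 1, 1]
```

The designer's "limits" here are literal: the agent can never act outside `ACTIONS` or leave the corridor, yet it still discovers the behavior on its own.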

This type of learning could be reassuring to those with concerns about AI, especially as the military begins to test its largest class of unmanned aerial vehicles, he said. Setting the limits of what the machine can do can be comforting, but it still allows the system to innovate, Atwood said.

“What we’re finding now in manned-unmanned teaming is the squadrons are ready to start accepting more degrees of freedom to the system — not just going in a circle, but maybe cueing mission systems, maybe doing electronic warfare [or] doing comms functionality,” he said.

 

Air Force is looking for resilient autonomous systems

The Information Directorate of the Air Force Research Laboratory (AFRL), Rome Research Site, is seeking innovative research proposals in the area of resilient distributed autonomous systems. The objective of this project is to develop and demonstrate improved resiliency of autonomous assets in a simulated Intelligence Surveillance and Reconnaissance (ISR) mission with degraded communications.

 

Resiliency in the context of this project is defined as a system's ability to operate at an acceptable level of normalcy despite disturbances. This definition encompasses the system's agility, adaptive capacity, and robustness. A robust autonomous system can provide increased resiliency with capabilities to assess, recover, learn, and adapt from adverse events, whether benign or hostile. These capabilities will allow systems deployed in contested domains, such as Anti-Access/Area-Denial (A2/AD) environments, to complete mission objectives with minimal impact.

 

The Air Force wants to effectively plan, coordinate and control a finite number of heterogeneous autonomous unmanned aerial systems to achieve pre-specified ISR mission objectives. The challenge is that adversaries will be aware of intended operations within their defenses and will employ electronic warfare and integrated air defenses to disrupt communications and capabilities. Technologies must provide effective command and control of at least 100 fully autonomous assets in intermittent and degraded communications environments, and prove capable of learning from prior experiences, as reported by Mark Pomerleau.

 

To assess the capabilities of proposed technologies in A2/AD environments, solutions will participate in simulations in which opposing forces (red teams developed by the Air Force and not shared with contractors ahead of time) will attempt to disrupt the specific intelligence, surveillance and reconnaissance missions assigned to blue, or friendly, teams over a specific area within a 72-hour window.

 

Proposed research should investigate innovative approaches that enable increased resiliency through an ability to learn and adapt over repeated engagements. The following is a non-comprehensive list of technologies that could potentially contribute to the desired solution.

– Distributed Planning and Constraint Optimization
– Multi-Agent Coordination
– Distributed Information Management
– Representation and Feature Learning
– Reinforcement Learning
– Transfer Learning
– Case Based Reasoning
– Game Theory and Opponent Modeling
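As one illustration of a listed technique, multi-agent coordination, a greedy auction can assign ISR tasks to the nearest free assets (asset names, positions and tasks are invented):

```python
# Illustrative greedy auction: each ISR task goes to the closest
# still-unassigned asset. Distances use the Manhattan metric.

def auction(assets: dict, tasks: dict) -> dict:
    """Assign each task to the nearest unassigned asset."""
    free, assignment = dict(assets), {}
    for task, (tx, ty) in tasks.items():
        if not free:
            break  # more tasks than assets
        winner = min(free, key=lambda a: abs(free[a][0] - tx) + abs(free[a][1] - ty))
        assignment[task] = winner
        del free[winner]
    return assignment

assets = {"UAV-1": (0, 0), "UAV-2": (10, 10)}
tasks  = {"survey-north": (9, 9), "survey-south": (1, 1)}
print(auction(assets, tasks))  # {'survey-north': 'UAV-2', 'survey-south': 'UAV-1'}
```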

The focus of the efforts for this project will be on the advancement of machine intelligence to accomplish the project goals.

The U.S. Air Force Research Laboratory launched the Soaring Otter program in November 2020 to develop fast and efficient ways to move enabling technologies for machine autonomy from the laboratory to flight testing.

The Air Force (AF) is increasingly employing the science of autonomy to solve complex problems related to global persistent awareness, resilient information sharing, and rapid decision making. Current autonomy approaches include a growing spectrum of techniques ranging from the well-understood to the truly novel: Machine Learning (ML), Artificial Intelligence (AI), many varieties of Neural Networks, Neuromorphic Computing, Data Exploitation and others. Together with the rapid progress of autonomy algorithms and methodologies comes the equally rapid progress of hardware and software designed to support the efficient execution of autonomy.

 

These computing solutions bring new capabilities, but also new challenges, including how best to develop applications with them and how to integrate them into larger systems. The application space for autonomy is rapidly growing, with critical technologies such as target identification and recognition; Positioning, Navigation and Timing (PNT); and UAS route planning. Finally, how best to integrate and test these new solutions within reasonable constraints of cost and risk is still not well understood, and there is a need for a well-defined progression from lab prototype, through realistic System Integration Lab (SIL) testing, and finally through field and flight testing for Technology Readiness Level (TRL) increase.

 

SOARING OTTER will be a one-step, closed BAA to advance, evaluate and mature Air Force autonomy capabilities, leveraging the latest advancements in both the fundamental science of autonomy and ML and the most modern computing technologies designed to support them. The scope includes the following six main topic areas:

 

Autonomy Development and Testing: Develop novel approaches to solving autonomy problems using the latest techniques in ML, neural networks, AI and other fields. Constantly seek to leverage the newest developments from both government and industry; mature existing approaches toward greater levels of robustness; and determine early what is required for the eventual successful transition of these autonomy technologies to the warfighter.

Evaluation of Autonomy Capabilities: Provide neutral third-party evaluation of algorithms from government, academia and industry. Provide unbiased analyses of alternatives to give AFRL actionable information about which algorithms perform best against objective criteria, and determine which solutions are most ready for maturation and integration into systems. Design and perform trade studies to identify best-of-breed solutions and make recommendations to the Government for their application and further maturation.

 

Novel Computing Approaches: This area will focus on compact computing solutions that push processing to the edge for real or near real time solutions to support the warfighter. Assess the latest emerging computing architectures from government and industry, together with the latest approaches to efficiently developing applications using these technologies.

New Application Spaces: Evaluate emerging Air Force priorities and user requirements to determine where autonomy can bring the greatest benefit, focusing on ISR.

Open System Architectures for Autonomy: Assess existing and emerging Open System Architectures (OSAs) as fundamental elements of future autonomous systems.

 

Autonomy Technology Integration and Testing: Plan and execute paths by which new autonomy technologies can be rapidly integrated into larger systems for lab, SIL and field/flight testing.

Maturing System Support: Plan and execute technology transition and system transition activities for operational partners, including system deployment support and participation, system integration, testing, and assessment support.

Additionally, this BAA will address these issues both individually and collectively. No R&D conducted under this program will be done in isolation; rather, it will be carried out in full consideration of how the new technologies can progress toward full integration with large, complex systems, ready to transition to support of the warfighter.

 

Conclusion

Many Air Force systems will experience an evolution towards increasing levels of autonomy over the next several decades. These advances will only be successful in achieving their goals of increased range and speed of operations, increased mission capabilities, increased reliability, persistence and resilience, or reduced manning loads if they take careful consideration of the need for effective airman-autonomy teaming. Past paradigms that created brittle automation, with limited capabilities and limited consideration of human operators, will be replaced by an explicit focus on synergistic airman-autonomy teams.

 

This new paradigm will directly support high levels of shared situation awareness between the airman and the autonomy, creating situationally relevant informed trust, ease of interaction and control, and the manageable workload levels needed for mission success. By focusing on airman-autonomy teaming, the Air Force will create successful systems that get the best benefits of autonomous software along with the innovation of empowered airmen.

 

Looking into the future, technologies under development today at DARPA and AFRL will form adaptive kill webs in which autonomous aircraft flying in collaboration with manned aircraft could receive inputs from a range of actors. In one instance, the pilot of a manned aircraft provides an input. If that individual is overloaded with tasks, has lost linkage, is shot down, or is otherwise unavailable, control could transfer to an air battle manager on an aircraft such as an E-3 Airborne Warning and Control System (AWACS) or E-8 Joint Surveillance Target Attack Radar System (JSTARS), or even to a ground control station. If all forms of communication are lost and the unmanned asset cannot execute its assigned mission in a wholly autonomous fashion, it would revert to a failsafe set of instructions.
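The fallback chain described above can be sketched as a simple priority list (node names and the availability model are hypothetical):

```python
# Sketch of the kill-web control hierarchy: control falls back from the
# manned-aircraft pilot to an air battle manager, then a ground station,
# and finally to onboard failsafe instructions.

CHAIN = ["pilot", "air_battle_manager", "ground_station"]

def controller(available: set) -> str:
    """Return the highest-priority reachable controller, or the failsafe."""
    for node in CHAIN:
        if node in available:
            return node
    return "onboard_failsafe"  # all comms lost

print(controller({"pilot", "ground_station"}))   # pilot
print(controller({"ground_station"}))            # ground_station
print(controller(set()))                         # onboard_failsafe
```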

 

“While difficult to quantify, the study concluded that autonomy — fueled by advances in artificial intelligence — has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike,” study co-chairs Ruth David and Paul Nielsen wrote. “The Defense Science Board summer study report on autonomy therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”

 

References and resources also include:

http://www.c4isrnet.com/articles/dod-releases-autonomy-study

https://defensesystems.com/articles/2016/02/24/air-force-uas-contested-environments.aspx

http://news.lockheedmartin.com/2017-04-10-U-S-Air-Force-Lockheed-Martin-Demonstrate-Manned-Unmanned-Teaming

https://www.nationaldefensemagazine.org/articles/2022/10/27/industry-prepping-ai-tech-for-next-gen-aircraft

https://warriormaven.com/air/air-war-2050-air-force-research-lab-heather-pringle-artificial-intelligence

About Rajesh Uppal
