Russia and China employ formations and capabilities (lethal and nonlethal) that overmatch those of the U.S. in range and lethality, thus challenging the Army’s ability to conduct operational maneuver, gain positions of relative advantage, and generate close-combat overmatch.
China’s and Russia’s rate of technological optimization will continue to outpace that of most countries, and each will make significant advances in integrating space and cyberspace operations, electromagnetic warfare, robotics, hypersonic missiles, and information technologies into their operations, according to an Army report.
At the operational level, Russia, China, and other adversaries depend on layered stand-off as the core of their A2/AD system. This system consists of two key components: accurate long-range fires and a comprehensive integrated air defense system (IADS) network. Long-range fires systems can provide accurate, massed fires out to thousands of kilometers, and the best are digitized with access to near-real-time intelligence. Adversary IADS networks provide a protection umbrella consisting of close protection for tactical systems integrated with long-range systems whose coverage extends several hundred kilometers. These systems are then layered with several redundancies designed to defeat, or break, U.S. kill chains.
The Army Futures Command Concept for Maneuver in Multi-Domain Operations, 2028 describes how the Army will maneuver in large-scale combat operations on the MDO battlefield. Maneuver is the employment of forces in combination with lethal and nonlethal effects across multiple domains, the electromagnetic spectrum (EMS), and the information environment (IE) to achieve a position of relative advantage, destroy or defeat adversary forces, control land areas and resources, and protect populations.
These challenges demand a return to operational-level warfighting to ensure the Army is able to support joint force objectives in competition and, if necessary, in conflict. Against a peer adversary, the Army will require four echelons to conduct maneuver in multi-domain operations: theater army, an operational-level headquarters, corps, and division. All of these echelons will be in contact simultaneously and must synchronize their fights across echelons as they engage in a continuous cycle of penetration, disintegration, and exploitation conducted throughout the depth and breadth of the battlefield.
Each of these echelons will concentrate warfighting functions on a designated aspect of the fight, freeing the others to concentrate on their portion. The result of this concentration will be the defeat of the adversary’s layered stand-off (anti-access (A2) and area denial (AD)) methodology, allowing Army forces to maneuver from operational distances and bring the full power of the joint force to bear on the adversary.
Enabling Army forces to compete, deploy, and win as a component of major military campaigns against peer adversaries demands a future force built on multi-domain formations designed to integrate functions and effects across the depth of the area of operations. Future formations must be capable of coordinating multi-domain collection and targeting activities across echelons of command at a pace and tempo that exceeds the adversary’s capability to respond effectively.
The U.S. Army is developing key technologies designed to work together in fights across air, land, sea, space, and cyberspace. In 2020, three key technologies, when paired in novel ways, can provide a strong advantage in a possible conflict with near-peer adversaries, according to Army Futures Command Commander Gen. Mike Murray: artificial intelligence, autonomy, and robotics in the air and on the ground.
The U.S. Army carried out a series of live-fire engagements during Project Convergence 2020 in September 2020 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data, and recommend weapons responses at high speed. The Army was able to use a chain of artificial intelligence, software platforms, and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to a given threat in just seconds.
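The chain described above — ingest multi-domain sensor data, fuse it into confirmed threats, then pair each threat with a weapon — can be sketched as a simple pipeline. This is purely an illustration of the concept; every class, function, threshold, and system name below is a hypothetical assumption, not an actual Army API.

```python
# Hypothetical sketch of an automated sensor-to-shooter pipeline.
# All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str
    x_km: float
    y_km: float
    confidence: float  # classifier confidence that this is a real threat

@dataclass
class Shooter:
    name: str
    x_km: float
    y_km: float
    range_km: float

def fuse_and_identify(raw_detections, threshold=0.8):
    """Stand-in for the AI fusion/identification step: keep only
    high-confidence threats from the raw sensor feed."""
    return [d for d in raw_detections if d.confidence >= threshold]

def distance_km(shooter, threat):
    return ((shooter.x_km - threat.x_km) ** 2 + (shooter.y_km - threat.y_km) ** 2) ** 0.5

def select_shooter(threat, shooters):
    """Stand-in for the weapon-pairing step: pick the closest shooter
    whose range covers the threat, or None if nothing can reach it."""
    in_range = [s for s in shooters if distance_km(s, threat) <= s.range_km]
    return min(in_range, key=lambda s: distance_km(s, threat)) if in_range else None

# End-to-end: raw sensor data in, recommended pairings out, in one pass.
raw = [Detection("T1", 38.0, 5.0, 0.93), Detection("T2", 10.0, 2.0, 0.40)]
shooters = [Shooter("Cannon-1", 0.0, 0.0, 70.0), Shooter("Mortar-1", 9.0, 2.0, 7.0)]
pairings = {t.target_id: select_shooter(t, shooters) for t in fuse_and_identify(raw)}
```

Here the low-confidence detection T2 is filtered out at the fusion step, and T1 is paired with the only shooter whose range reaches it — the same filter-then-pair flow the article attributes to the live exercise, compressed into a few function calls.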
The service is particularly focused on three key phases of MDO at Project Convergence:
- Penetrating and neutralizing enemy long-range systems, contesting enemy maneuver forces from operational and strategic distances
- Disintegrating the enemy’s anti-access and area denial (A2/AD) systems, taking out enemy long- and short-range systems while conducting independent maneuver and deception operations
- Exploiting freedom to maneuver to defeat enemy objectives and forces.
“Convergence is one of the tenets,” Murray said. “The ability to converge effects across all five warfighting domains (air, land, sea, cyber and space) and we’re really taking that tenet and putting it together in the dirt live and bringing multiple things together… and the key thing here is being able to act faster than any opponent in the future.”
Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline — the time from when sensor data is collected to when a weapon system is ordered to engage — from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination.
“We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks.”
The first exercise illustrates how the Army stacked AI capabilities to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.
Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.
From there, the targeting data was delivered to a Tactical Assault Kit — a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed. All of that takes place in just seconds.
Once the Army has its target, it needs to determine the best response: FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM. “Simply put, it’s a computer brain that recommends the best shooter, updates the common operating picture with the current enemy and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield,” said Coffman.
The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, number of other threats and more to determine the best firing system to respond to a given threat. Operators can assess and follow through with the system’s recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.
It ingests data from sensors and other systems, uses One World Terrain to map the battlefield and recommends the best weapon system to engage specific targets, saving commanders precious time for making decisions. Prior technologies took almost 20 minutes to relay data back to warfighters. FIRESTORM takes 32 seconds.
Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren’t redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.
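The selection-plus-deconfliction behavior described above can be sketched as a greedy assignment: each threat gets the nearest in-range shooter, and a shooter already committed to one threat is never recommended for another. This is a simplifying illustration — the real system also weighs terrain, weapon availability, and threat count, and all names and the distance-only scoring below are assumptions.

```python
# Hypothetical sketch of best-shooter recommendation with deconfliction,
# in the spirit of the system described above. Distance-only scoring is a
# simplifying assumption.
import math

def recommend_fires(threats, shooters):
    """threats:  {threat_id: (x_km, y_km)}
    shooters: {shooter_id: (x_km, y_km, range_km)}
    Returns {threat_id: shooter_id}. Each shooter is committed to at most
    one threat -- the deconfliction that would otherwise require a phone
    call between operators."""
    assignments = {}
    committed = set()
    for tid, (tx, ty) in threats.items():
        candidates = []
        for sid, (sx, sy, rng) in shooters.items():
            if sid in committed:
                continue  # already firing on another threat
            d = math.hypot(sx - tx, sy - ty)
            if d <= rng:  # only shooters that can actually reach the threat
                candidates.append((d, sid))
        if candidates:
            _, best = min(candidates)  # nearest in-range shooter wins
            assignments[tid] = best
            committed.add(best)
    return assignments

# Two threats, two cannons: each threat draws a distinct shooter.
orders = recommend_fires(
    {"T1": (40.0, 0.0), "T2": (5.0, 5.0)},
    {"Cannon-1": (0.0, 0.0, 70.0), "Cannon-2": (10.0, 0.0, 70.0)},
)
```

In the example, T1 is handled first and takes the closer Cannon-2; because Cannon-2 is then committed, T2 falls to Cannon-1 even though both cannons could reach it — no two systems are sent against the same target, and no target is double-served.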
In that first engagement, FIRESTORM recommended the Extended Range Cannon Artillery. Operators approved the algorithm’s choice, and the cannon promptly fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.
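A back-of-envelope check makes that claim plausible. Assuming a representative average projectile speed of roughly 800 m/s (an illustrative figure, not a published cannon specification), the 40-kilometer flight takes about 50 seconds — longer than the 32-second sensor-to-shooter cycle reported elsewhere in the exercise.

```python
# Back-of-envelope check of the "orders beat the projectile" claim.
# The 800 m/s average speed is an illustrative assumption only.
range_m = 40_000          # target distance from the engagement above
avg_speed_m_s = 800       # assumed average projectile speed over the flight
decision_cycle_s = 32     # sensor-to-shooter time reported at Project Convergence

flight_time_s = range_m / avg_speed_m_s   # 50.0 seconds of flight
assert decision_cycle_s < flight_time_s   # the orders go out before impact
```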
Last year, FIRESTORM had only one so-called decision node. “This year (2021), we’re going to have a lot more decision-aid nodes … to validate the decision tree and whether it is making the decision based on a decision tree, which has many, many factors. You need to look at all of those factors to see if it provided the right recommendation,” says Ketula Patel, FIRESTORM program manager and Intelligence Systems branch chief with the U.S. Army Combat Capabilities Development Command Armament Center, Picatinny Arsenal.
Future enhancements likely will include even more advanced AI and automation algorithms. Some eventual enhancements also will be tailored to the needs of commanders. Feedback from commanders will help researchers refine some capabilities, such as targeting, predicting air clearance processes and deconflicting air space.
An MQ-1C Gray Eagle drone was able to identify and target a threat using its onboard Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.
According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.
The Army is also using available Air-Launched Effects (ALE) as well as the TITAN surrogate system — at Joint Base Lewis-McChord in Washington State — that will process targeting information from ground and air autonomous vehicles using artificial intelligence. TITAN is managed by the Intelligence, Information, Cyber, Electronic Warfare and Space (I2CEWS) battalion of the Army’s Multi-Domain Task Force. The system will pass targeting information to a fire control element sitting at Yuma.
The service is also experimenting with space sensing capabilities this year and bringing it all together using a new network architecture. “The network is a huge piece of this and so we are building out mesh networks, communications between the air in terms of Gray Eagle [unmanned aircraft system] and ALE and [Future Vertical Lift] surrogates to the ground.”
2021 Project Convergence
For 2021 Project Convergence, the system also will be integrated with about 20 other systems and will support joint missions. That includes the Air Battle Management System (ABMS), an Air Force solution for the Joint All-Domain Command and Control Concept. ABMS allows a joint force to use cutting-edge methods and technologies to rapidly collect, analyze and share information and make decisions in real time, according to an Air Force press release.
It also includes the Army’s Air and Missile Defense Workstation, a staff planning and battlespace situational awareness tool. It provides the user with an air defense picture and supports the Surface-Launched Advanced Medium Range Air-to-Air Missile air defense system by providing an automated defense planning capability for deployed units.
“There’s a lot of integration work that’s happening to make sure we can receive data from all kinds of systems that are fielded—or emerging technologies—and have interoperability with them,” Patel reports. “We’re probably integrating to, I’d say, about 20 technologies for Project Convergence. We’re extending our interfaces to work with a lot of those different, newer platforms.”
She notes that FIRESTORM can work for commanders at higher echelons, including joint task forces, or at the tactical level for individual tank or helicopter crews. “The platforms, such as the ground tank commander, could utilize the system all the way to the joint task force. So, we’re also integrating with the Abrams since Abrams is at the tactical edge.”
The system will benefit overwhelmed tank commanders. “It’s not going to get involved with a direct fire mission. If a commander sees a target, he’ll continue to engage as he sees best for him to minimize any kind of fratricide and also a self-defense kind of scenario,” Patel explains. “Where it helps is if that specific tank platoon is getting an overwhelming number of targets. FIRESTORM will continue to communicate with all the other inorganic assets that could support those fire missions.” For example, FIRESTORM could alert the commander to unseen dangers. “Let’s say a target that came from a higher echelon, or an intel system saw a target that was beyond line of sight of that platform, that could be alerted to that commander,” Patel adds.
FIRESTORM already has been partially integrated with the Army’s Tactical Assault Kit and Nett Warrior and also is available on Linux-based laptops. “The Android capability, I would say, is not as mature from a decision-aiding and algorithm perspective. It’s really mainly on the Linux laptop,” Patel says. “We will continue to develop other technologies, such as Microservices, to have it working with cloud and all the other newer, modernized architectures that the joint partners, as well as the Army, are developing.”
While reducing the decision-making time from 20 minutes to 32 seconds is impressive, Patel stresses optimization as another crucial benefit. “We’re optimizing the target assets that we want to utilize and not just using any available asset that could be the shooter. We’re looking for the best shooter and the best effects for any target,” she says.
That benefit cannot yet be quantified. “Right now, we don’t have quantified data. We will as we build out use cases and run out simulations in fiscal year 2022. We will scale it up where we will ingest a lot of targeting data and utilize what might be all the effects and see which ones were selected and come up with a cost metric. I don’t mean cost from a money perspective but from an optimization perspective,” Patel explains.