DARPA’s Squad X revolutionizing infantry squads by integrating unmanned air and ground vehicles with new technologies for precision engagement, electronic warfare, situational awareness and autonomy

Modern military engagements increasingly take place in complex and uncertain battlefield conditions, where attacks can come from multiple directions at once and in the electromagnetic spectrum and cyber domains as well. U.S. Army and U.S. Marine Corps dismounted infantry squads, however, have been unable to take full advantage of some highly effective multi-domain defensive and offensive capabilities that vehicle-assigned forces currently enjoy, in large part because many of the relevant technologies are too heavy and cumbersome for individual Soldiers and Marines to carry, or too difficult to use under demanding field conditions.


To help overcome these challenges and help ensure U.S. squad dominance over adversaries in the decades to come, DARPA launched the Squad X Experimentation (Squad X) program in 2016. “Through Squad X, we want to vastly improve dismounted squad effectiveness in all domains by integrating new and existing technologies into systems that squads can bring with them,” said Maj. Christopher Orlowski, DARPA program manager.


Squad X Program

Under the Squad X program, DARPA envisions future supersoldiers using advanced technologies, such as augmented reality (AR), to intuitively understand and control their complex mission environments, enabling them to combat advanced adversaries across the globe.


The technologies aim to increase squads’ situational awareness and lethality, allowing enemy engagement at greater tempo and from longer ranges. DARPA’s goal is real-time coordination of a squad with its flying and ground-based drones, with all relevant sensor data presented in a useful way. DARPA wants to use these sensors and information to help the squad cut through the fog of war so it can move quickly and effectively eliminate threats within 1,000 meters.


Squad X intends to combine off-the-shelf technologies and new capabilities under development through DARPA’s Squad X Core Technologies (SXCT) program, which was launched specifically to develop novel technologies that Squad X could integrate into user-friendly systems. SXCT shares Squad X’s overarching goal of ensuring that Soldiers and Marines maintain uncontested tactical superiority over potential adversaries by exploring capabilities in four areas: precision engagement, non-kinetic engagement, squad sensing and squad autonomy. DARPA wants to design, develop, and validate system prototypes for a combined-arms squad that can overmatch adversaries through the synchronization of fire and maneuver in the physical, electromagnetic spectrum, and cyberspace domains.


Squad X seeks to design, develop and validate system prototypes for combined-arms squads. The program intends to lay the foundation for breakthrough technologies and capabilities that would:

  • Improve shared situational understanding of the multi-domain operational environment: physical, electromagnetic and cyber
  • Increase the time and space in which squads can maneuver through optimized use of physical, cognitive and material resources
  • Shape and dominate the battlespace through synchronization of fire and maneuver in all three domains


“The squad is the formation with the greatest potential for impact and innovation, while having the lowest barrier to entry for experimentation and system development. The lessons we learn and the technology we create could not only transform dismounted squads’ capabilities, but also eventually help all warfighters more intuitively understand and control their complex mission environments.”


When first conceived in 2016, the Defense Advanced Research Projects Agency’s (DARPA’s) Squad X program was expected to explore four technology areas—precision engagement, nonkinetic engagement, squad sensing and squad autonomy—on behalf of dismounted soldiers and Marines in a squad formation. Since then, the program has evolved to focus on small units at multiple echelons, such as squads, platoons and Special Operations teams.


Squad X Core Technologies: Improving Capabilities for Dismounted Soldiers and Marines

DARPA’s Squad X seeks to enable the humans in the squad to be increasingly effective in future complex and uncertain environments. Dismounted infantry squads have so far been unable to take full advantage of some of these highly effective capabilities because many of the underlying technologies are too heavy and cumbersome for individual Soldiers and Marines to carry or too difficult to use under demanding field conditions.


To address this application lag, DARPA’s Squad X Core Technologies (SXCT) program aims to develop novel technologies that could be integrated into user-friendly systems that would extend squad awareness and engagement capabilities without imposing physical and cognitive burdens. The program, whose overarching goal is to ensure that Soldiers and Marines maintain uncontested tactical superiority over potential adversaries, recently awarded Phase 1 contracts to nine organizations.


“Our goal is to develop technologies that support a three-dimensional common operating picture leveraging input from integrated mobile sensors, as well as the ability to organically locate and identify friendly forces and threat locations in near real time,” said Maj. Christopher Orlowski, DARPA program manager.


“The Phase 1 performers for SXCT have proposed a variety of technologies that, in the future, could provide unprecedented awareness, adaptability and flexibility to dismounted Soldiers and Marines and enable squad members to more intuitively understand and control their complex mission environments.” DARPA has selected nine organizations to start developing novel technologies that could enhance squads’ ability to collaborate, understand their surroundings and act effectively.


SXCT pursued research in the following four technical areas:

  • Precision Engagement:

Precisely engage threats out to 0.6 mile (1,000 meters), while maintaining compatibility with infantry weapon systems and without imposing weight or operational burdens that would negatively affect mission effectiveness. Capabilities of interest include distributed, non-line-of-sight targeting and guided munitions.

  • Non-Kinetic Engagement:

Enable the rifle squad to disrupt enemy command and control, communications, and use of unmanned assets to ranges in excess of 300 meters at a squad-relevant operational pace (walking with occasional bursts of speed). Capabilities of interest include disaggregated electronic surveillance and coordinated effects from distributed platforms.

  • Squad Sensing:

Enable the rifle squad to detect line-of-sight and non-line-of-sight threats from 1 to 1,000 meters at a squad-relevant operational pace. Capabilities of interest include multi-source data fusion and autonomous threat detection.

  • Squad Autonomy:

Increase squad members’ real-time knowledge of their own and teammates’ locations to less than 20 feet (6 meters) in GPS-denied environments through collaboration with embedded unmanned air and ground systems. Capabilities of interest include robust collaboration between humans and unmanned systems.




DARPA awarded Lockheed Martin a $10.6 million contract to begin work on Phase II of the Squad X Core Technologies program, which is intended to deliver to smaller deployed units the kind of situational awareness available to command posts.


The program has developed two technologies: Lockheed Martin’s Augmented Spectral Situational Awareness and Unaided Localization for Transformative Squads (ASSAULTS) system and CACI’s family of radios known as the BITS Electronic Attack Module (BEAM). The former was designed to use autonomous robots with sensor systems to detect enemy locations, allowing small units to engage and target enemy forces before being detected themselves. It has since evolved into a testbed for assessing artificial intelligence (AI) technologies.


 Collaborative Navigation Techniques

Leidos is developing state-of-the-art advancements in GPS-denied collaborative navigation with its Georegistration and Ranging for Accurate Intra-Squad Localization (GRAIL). The goal of GRAIL is to demonstrate high-accuracy positioning and navigation capability in GPS-denied conditions for squads composed of dismounted warfighters, unmanned ground vehicles (UGVs), and unmanned aerial vehicles (UAVs). GRAIL incorporates advancements in individual unit positioning and inter-unit ranging to form a unified squad collaborative navigation solution.


As part of the Defense Advanced Research Projects Agency (DARPA)’s Squad X Core Technologies program, the GRAIL development goal is to achieve six-meter absolute individual and collective position accuracy in GPS-denied environments, while limiting the size, weight, and power (SWaP) burden of any additional equipment on the warfighter. In order to achieve this accuracy, dismounted warfighters collaborate with each other as well as unmanned systems moving in squad formations. Positioning accuracy in the collaborative framework is dependent on multiple factors including squad size, unit spacing, and unit formation.



Methodology and Key Innovations

Positioning in GRAIL consists of two main components – individual unit positioning using platform-dependent sensor fusion, and collaborative positioning using relative ranging measurements. All units use a UBlox GPS receiver for initialization and truth position reference, and Time Domain PulsON ultra-wide-band radios for intra-squad two-way ranging.
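The two-way ranging step can be sketched in a few lines. This is a generic illustration of the principle behind UWB intra-squad ranging, not Time Domain or Leidos code, and the delay values are invented:

```python
# Sketch of symmetric two-way ranging (TWR), the principle behind UWB
# intra-squad ranging: the initiator measures round-trip time, the responder
# reports its known turnaround delay, and the one-way time of flight gives
# distance. No clock synchronization between units is required.
C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round_s: float, t_reply_s: float) -> float:
    """Distance from round-trip time minus the responder's turnaround delay."""
    time_of_flight = (t_round_s - t_reply_s) / 2.0
    return C * time_of_flight

# Example: a 10 m separation adds ~33.4 ns each way on top of a 1 us reply delay.
t_flight = 10.0 / C
print(round(twr_distance(2.0 * t_flight + 1e-6, 1e-6), 3))  # 10.0
```

Because only time differences matter, the units need precise timers but not a shared clock, which is what makes this practical for ad hoc squad networks.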


The dismounted warfighter sensor package includes a LORD Microstrain MEMS inertial measurement unit (IMU) with a three-axis magnetometer and barometric altimeter and a PointGrey monocular camera. The UGV chosen in Phase I was a Segway RMP440, and its sensor package includes a KVH tactical grade IMU with three-axis magnetometer, PointGrey stereo and monocular cameras, a Yocto barometric altimeter, and platform-integrated wheel odometry.


The UAV chosen in Phase I was a SteadiDrone Vader quadcopter, and its sensor package includes the same IMU and monocular camera as the warfighter package, a PX4FLOW camera, and a TruSense S200 laser altimeter. All individual units perform sensor fusion using the same software package, Leidos’ Dynamically Reconfigurable Particle Filter (DRPF). The DRPF can model a subset of nonlinear states using a particle filter representation, while efficiently handling linear states using a traditional extended Kalman filter formulation. The DRPF handles sensor inputs using generic interface control documents (ICDs) developed under the All Source Position and Navigation (ASPN) effort in DARPA’s Adaptable Navigation Systems (ANS) program.
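To make the particle-filter side of this concrete, here is a minimal bootstrap particle filter on a one-dimensional toy problem. It is our own sketch of the nonlinear-state component described above, not the DRPF itself (the real DRPF also marginalizes linear states through an EKF, which this toy omits), and all numbers are illustrative:

```python
# Minimal bootstrap particle filter: predict with a motion model, weight
# particles by measurement likelihood, resample. The range measurement
# z = |x - ANCHOR| is nonlinear, which is what motivates particles over a
# plain Kalman filter.
import math, random

random.seed(0)
ANCHOR = -5.0  # known landmark position on a 1-D line

def particle_filter(z_measurements, n_particles=500):
    """Estimate a scalar position from noisy ranges z = |x - ANCHOR|."""
    particles = [random.uniform(-10.0, 10.0) for _ in range(n_particles)]
    for z in z_measurements:
        # Predict: random-walk motion model.
        particles = [p + random.gauss(0.0, 0.2) for p in particles]
        # Update: Gaussian likelihood of the nonlinear range measurement.
        weights = [math.exp(-0.5 * ((abs(p - ANCHOR) - z) / 0.5) ** 2)
                   for p in particles]
        total = sum(weights) or 1e-12
        # Resample in proportion to likelihood (multinomial resampling).
        particles = random.choices(particles,
                                   weights=[w / total for w in weights],
                                   k=n_particles)
    return sum(particles) / n_particles  # posterior-mean estimate

# True position 3.0 gives ranges near 8.0; the filter recovers it.
est = particle_filter([8.0 + random.gauss(0.0, 0.3) for _ in range(20)])
print(round(est, 1))
```

A range-only measurement like this can leave a multimodal posterior (two points at the same distance from the anchor), which is exactly the situation a particle representation handles and a single Gaussian cannot.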


The GRAIL collaborative navigation solution is computed using Leidos’ Multi-Agent Non-Gaussian Optimization (MANGO). MANGO incorporates each unit’s individual position estimate and uncertainty along with all available intra-squad relative range measurements and performs nonlinear optimization in order to generate an improved squad position estimate. Multimodal individual unit position estimates, which can form as a result of non-Gaussian sensor measurements such as ambiguous georegistration matches, can be handled through parallel iterations of the optimizer for each high-likelihood modal unit position.
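The collaborative step can be sketched as a small optimization problem. The code below is our own toy in the spirit of MANGO, not Leidos code: each unit carries an individual 2-D estimate with a confidence weight, plus inter-unit range measurements, and gradient descent on the combined squared-error cost pulls the squad into a mutually consistent geometry. All positions, weights, and unit names are invented:

```python
# Toy collaborative-positioning solve: minimize weighted squared error to
# each unit's individual estimate plus squared error to the measured
# inter-unit ranges, by plain gradient descent.
import math

def collaborative_solve(priors, prior_w, ranges, range_w=4.0,
                        step=0.01, iters=5000):
    """priors: {unit: (x, y)}; ranges: {(a, b): measured distance in m}."""
    pos = {u: list(p) for u, p in priors.items()}
    for _ in range(iters):
        grad = {u: [0.0, 0.0] for u in pos}
        # Pull each unit toward its own (possibly drifted) estimate.
        for u, (px, py) in priors.items():
            grad[u][0] += 2.0 * prior_w[u] * (pos[u][0] - px)
            grad[u][1] += 2.0 * prior_w[u] * (pos[u][1] - py)
        # Pull unit pairs toward the measured inter-unit distances.
        for (a, b), r in ranges.items():
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            c = 2.0 * range_w * (d - r) / d
            grad[a][0] += c * dx
            grad[a][1] += c * dy
            grad[b][0] -= c * dx
            grad[b][1] -= c * dy
        for u in pos:
            pos[u][0] -= step * grad[u][0]
            pos[u][1] -= step * grad[u][1]
    return pos

# A UAV with a good georegistration fix anchors two drifted warfighters
# whose true positions are (10, 0) and (0, 10).
priors = {"uav": (0.0, 0.0), "wf1": (11.5, 1.0), "wf2": (1.0, 9.0)}
weights = {"uav": 10.0, "wf1": 0.5, "wf2": 0.5}   # UAV trusted most
ranges = {("uav", "wf1"): 10.0, ("uav", "wf2"): 10.0,
          ("wf1", "wf2"): math.hypot(10.0, 10.0)}
sol = collaborative_solve(priors, weights, ranges)
err = math.hypot(sol["wf1"][0] - 10.0, sol["wf1"][1])
print(round(err, 2))  # well below the ~1.8 m standalone error
```

The weights here stand in for the per-unit uncertainties MANGO ingests; the real optimizer additionally handles multimodal, non-Gaussian estimates, which this sketch does not.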


The unmanned systems in the squad provide key advantages that improve their individual unit position accuracy, which in turn greatly reduces warfighter position error when the collaborative solution is computed. In particular, the UGV’s higher-quality IMU drifts less than the other squad units’ sensors, while the UAV can provide absolute positioning updates through aerial imagery georegistration. When allowable based on squad tactics, the unmanned assets can be placed in optimal locations where the resultant squad geometry – coupled with the lower individual unmanned system position error – maximizes the performance gains from collaborative navigation.



Leidos conducted both simulation and field tests in Phase I of the program. Simulations were used to analyze system collaborative navigation performance under a variety of conditions including varying squad size, composition, spacing, and formation. Field testing was conducted using a limited squad consisting of five warfighters, one UGV, one UAV, and optionally a static deployable ranging node.


Field tests were conducted in a variety of environments including a suburban office park, an outdoor facility with several warehouse buildings, and a rural farm. Multiple squad formations were tested, based on the fire team and squad formations provided in Field Manual 3-21.8, including the wedge, column, line, and box. Field tests with the limited squad size showed that the collaborative solution typically reduced warfighter position error by 60 percent or more relative to the warfighter standalone solutions. Absolute position errors varied based on formation, but collaborative warfighter accuracy in a variety of tests was on the order of 10 m or better. This accuracy matched expected values based on simulations mimicking the typical spacing and formations with the reduced squad size. Simulation of a full nine- or thirteen-warfighter squad showed that 6 m accuracy should be achievable for a wide variety of squad conditions, with the column formation yielding the most limited estimated spacing envelope allowable to maintain 6 m accuracy.


Squad X Experiments

The first test of DARPA’s Squad X Experimentation program successfully demonstrated the ability to extend and enhance the situational awareness of small, dismounted units. In a recent field test, the program worked with U.S. Marines at the Air Ground Combat Center in Twentynine Palms, California, to track progress on two complementary systems that allow infantry squads to collaborate with AI and autonomous systems to make better decisions in complex, time-critical combat situations. In a weeklong test series, U.S. Marine squads improved their ability to synchronize maneuvers, employing autonomous air and ground vehicles to detect threats from multiple domains – physical, electromagnetic, and cyber – providing critical intelligence as the squad moved through scenarios.


The Squad X program manager in DARPA’s Tactical Technology Office, Lt. Col. Phil Root (U.S. Army), said Experiment 1 demonstrated the ability for the squad to communicate and collaborate, even while “dancing on the edge of connectivity.” The squad members involved in the test runs praised the streamlined tools, which allowed them to take advantage of capabilities that previously had been too heavy or cumbersome for individual Soldiers and Marines to use in demanding field conditions.


“Each run, they learned a bit more on the systems and how they could support the operation,” said Root, who is also program manager for Squad X Core Technologies. “By the end, they were using the unmanned ground and aerial systems to maximize the squad’s combat power and allow a squad to complete a mission that normally would take a platoon to execute.”


Both Lockheed Martin Missiles and Fire Control and CACI’s BIT Systems are working on ways to enhance infantry capabilities using manned-unmanned teaming, according to the release. The two performers are taking different approaches to providing unique capabilities for ground infantry, but manned-unmanned teaming is critical to both companies’ solutions.


The exercises in early 2019 in Twentynine Palms followed experiments in 2018 with CACI’s BITS Electronic Attack Module (BEAM) Squad System (BSS) and Lockheed Martin’s Augmented Spectral Situational Awareness and Unaided Localization for Transformative Squads (ASSAULTS) system. The two systems, though discrete, focus on manned-unmanned teaming to enhance capabilities for ground units, giving small squads battalion-level insights and intelligence.


Marines testing Lockheed Martin’s Augmented Spectral Situational Awareness and Unaided Localization for Transformative Squads (ASSAULTS) system used autonomous robots with sensor systems to detect enemy locations, allowing the Marines to engage and target the enemy with a precision 40mm grenade before the enemy could detect their movement. Small units using CACI’s BITS Electronic Attack Module (BEAM) were able to detect, locate, and attack specific threats in the radio frequency and cyber domains.


Between Lockheed Martin’s two experiments to date, Root says the program-performer team identified a “steady evolution of tactics” made possible with the addition of an autonomous squad member. They also are focused on ensuring the ground, air, and cyber assets are always exploring and making the most of the current situation, exhibiting the same bias toward action required of the people they are supporting in the field.


In the most recent experiment, squads testing the Lockheed Martin system wore vests fitted with sensors and a distributed common world model, and moved through scenarios transiting between natural desert and mock city blocks. Autonomous ground and aerial systems equipped with combinations of live and simulated electronic surveillance tools, ground radar, and camera-based sensing provided reconnaissance of areas ahead of the unit as well as flank security, surveying the perimeter and reporting to squad members’ handheld Android Tactical Assault Kits (ATAKs). Within a few screen taps, squad members accessed options to act on the systems’ findings or adjust the search areas.


CACI’s BEAM-based BSS comprises a network of warfighter and unmanned nodes. In the team’s third experiment, the Super Node – a sensor-laden, optionally manned, lightweight tactical all-terrain vehicle known as the powerhouse of the BEAM system – communicated with backpack nodes distributed around the experiment battlespace, mimicking the placement of dismounted squad members, along with an airborne BEAM on a Puma unmanned aerial system (UAS). The BSS provides situational awareness, detects electronic emissions, and collaborates to geolocate signals of interest. AI synthesizes the information, eliminating the noise before providing the optimized information to squad members via the handheld ATAK.


With the conclusion of the third experiment, the CACI system is moving into Phase 2, which includes an updated system that can remain continuously operational for five or more hours. CACI’s BEAM system is already operational, and the Army has committed to continue its development at the completion of Squad X Phase 2. The Army is set to begin concurrent development of the Lockheed Martin ASSAULTS system in fiscal years 2019 and 2020, and then, independent of DARPA, in fiscal year 2021.



Researchers have learned some surprising lessons from the technologies developed under the Defense Department’s Squad X program, which will end this year. For example, artificial intelligence may not help warfighters make faster decisions, but it does provide a planning advantage over adversaries. Furthermore, when it comes to detecting and electronically attacking enemy signals, systems can make smart decisions without artificial intelligence, writes George I. Seffers, Director, Content Development and Executive Editor.


“What Lockheed has done is created an operating system that allows you to experiment with and plug and play with different components. I can’t tell you how hard that is,” says Philip Root, DARPA’s Squad X program manager. “Adding and subtracting AI components is even more difficult because they could contradict each other. They could fight amongst each other, these AI behaviors.”


The BEAM technology detects, locates and attacks specific threats in the radio frequency and cyber domains, including adversarial small unmanned air systems. Although the system is not enabled by AI, Root indicates the computer processing capabilities make it pretty smart. The radios communicate with one another to “find the best formation to grab the most information about the enemy,” he says.


Lockheed’s ASSAULTS system taught researchers some valuable lessons. One of the first lessons is that AI can learn a lot from warfighters. For instance, researchers could use the wisdom and experience of commanders to teach and train AI and robotic systems. “I didn’t see that coming, the thought that we could learn from squad leaders, company commanders, battalion commanders, regarding tactics and then use that to inform unmanned air systems and unmanned ground systems would be a completely different technical direction and one that we’re now beginning to explore,” Root says.


He adds that the experience with Lockheed Martin has taught him that the military may not want to attempt building AI systems that are better than humans. “That doesn’t mean we shouldn’t try to develop good AI. It just means that instead of trying to replace the wisdom and experience of the small unit commander, we should try to create AI that helps support the wisdom and experience of the most junior Marine on the team, and sometimes that junior Marine is a robot.”


Both Marines and Special Forces units seem to see the system as a team member rather than a tool, he asserts. “They would give the BEAM system the mission, and it would modify its behavior depending on the threat it saw and where they were in the mission, where it saw high-value targets,” he elaborates. “So, technically, it was not AI. It didn’t use the machine learning and deep learning necessary to have that technical term, but there are many aspects that reflected a form of intelligence.”


Additionally, researchers learned that AI systems do not necessarily allow warfighters to make decisions faster but may help them to plan more effectively. Root says his team collected data to test the hypothesis that AI-equipped friendly forces, known as blue forces, would make decisions more rapidly than the enemy, or red forces. “What we found is the opposite. What we found is that blue was able to plan in great depth with several decision points and courses of action. And red was not,” he explains. “Red was deciding really quickly but out of necessity. They were reacting. Blue was able to have superior situational awareness and then act with precision and real initiative to change the environment completely and dominate their local battlespace.”


Another unexpected lesson involves the process for collecting data to train AI systems. Normally, companies train systems using their own data or publicly available information. Root has concluded that AI systems designed for military use should be trained instead with military data. “Lockheed Martin helped me see that we need to consider different approaches for data curation, data stewardship and AI certification. When we collect this data, it likely should be owned by the department and provided to industry to train their systems,” he suggests.


The Defense Department, he points out, collects vast amounts of information in experimentation, training and operational environments. That information is more relevant for military AI or robotic systems than the data industry can easily access. Root says his team collected nearly a terabyte of data in its final experiment alone. “If we own the data, then every time we do an experiment or a training exercise, we will collect more data, steward it, and consider using that for additional training data. What that leads you to then is a place where the AI that a unit uses—whether that’s a squad, platoon, company or battalion … will learn and mature as the unit continues to train with it.”

Therefore, the department could rethink the process for testing and evaluating AI technologies. “We might need a data certification and a unit certification where the unit and the AI are certified at the same time, meaning that the unit is capable and effective using the AI, and the AI is providing valid feedback,” Root offers.


Former Secretary of Defense James Mattis, Root recalls, was keen on supporting close combat units and instituted the Close Combat Lethality Task Force. Mattis’ mantra was that a soldier or Marine needed to experience 20 gunfights through realistic training before engaging in combat.


The Squad X team experimented with battalions planning and executing their missions in simulation before passing mission orders to squads. Those squads would then plan and execute their missions in simulation before live training. “If I have battalions, companies, platoons and squads that could all train with the same mission-type orders and then implement those missions in simulation, that provides some value certainly. But then with the opportunity to jump into the training range and execute that same mission and do it live, we start seeing the ability to train much more quickly and comprehensively and have AIs that are multi-echelon,” Root says.


He cites one Squad X experiment with help from the department’s Test Resource Management Center that involved having AI mounted on drones identify Marines in different environments and wearing different types of camouflage uniforms. In some cases, the uniforms worked well and the Marines blended into the background. But Root’s point is that the service collected reams of relevant data in the process and could put that data to use for AI training. “We collected gigabytes of data. It would be really expensive for industry to recreate that each time. You would need access to Marine uniforms. You would need access to all the same environments. And there would be a lot of duplication if every vendor tried to do the same,” he explains. “We have the most training-relevant data in the government.”


The family of BEAM radios also demonstrated innovation. CACI documentation says the BEAM system surveys the environment to enable deployed units to counter small drones; cellular, digital, or analog push-to-talk radios; data links; wireless fidelity signals; and digital or analog video signals. BEAM can scale by operating in a cluster and can also operate autonomously to deliver distributed attacks and provide rapid, responsive force protection capability in hostile environments.


“There are some signals like threat unmanned aerial systems that have too wide a signal in terms of bandwidth for one node to be able to pick up and monitor,” Root explains. “So, CACI developed an ability—this is part of their unique capability—to link these separate software-defined radios together to monitor these wide signals and then perform a geolocation calculation to be able to triangulate into these signals.”
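In principle, that kind of distributed geolocation reduces to time-difference-of-arrival (TDOA) multilateration: each radio timestamps the same emission, and the arrival-time differences constrain the emitter to hyperbolas. The brute-force sketch below is our own illustration of the idea, not CACI’s algorithm; the node layout and emitter position are invented:

```python
# Illustrative TDOA geolocation: several distributed receivers timestamp the
# same emission, and the intersection of the resulting hyperbolic constraints
# is found here by brute-force grid search over candidate emitter positions.
import math

C = 299_792_458.0  # propagation speed, m/s

def tdoa_locate(nodes, tdoas, span=1000.0, step=10.0):
    """nodes: [(x, y), ...]; tdoas[i]: arrival-time difference between
    node i+1 and the reference node 0. Returns the best-matching grid point."""
    best, best_cost = None, float("inf")
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            d0 = math.hypot(x - nodes[0][0], y - nodes[0][1])
            cost = 0.0
            for i, dt in enumerate(tdoas):
                di = math.hypot(x - nodes[i + 1][0], y - nodes[i + 1][1])
                cost += ((di - d0) / C - dt) ** 2
            if cost < best_cost:
                best, best_cost = (x, y), cost
            y += step
        x += step
    return best

# Four radios in a rough square and an emitter at (500, -300); simulate
# ideal (noise-free) arrival-time differences.
nodes = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
emitter = (500.0, -300.0)
dists = [math.hypot(emitter[0] - nx, emitter[1] - ny) for nx, ny in nodes]
tdoas = [(dists[i] - dists[0]) / C for i in (1, 2, 3)]
print(tdoa_locate(nodes, tdoas))  # (500.0, -300.0)
```

Real systems refine this with least-squares solvers and must contend with clock error and multipath; linking the software-defined radios together, as Root describes, is what makes the shared timing reference possible in the first place.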


The system initially was designed to be small and lightweight so that it could be carried in a backpack, but CACI then developed a larger version for ground vehicles and another version for Aerovironment’s hand-launched drone known as Puma. The flexibility of the radios could protect ships, smaller boats that carry Marines from ship to shore, landing forces and fixed locations. The 31st Marine Expeditionary Unit in Okinawa, Japan, has been using the system. Special forces detachments also have experimented with the technology.


Furthermore, the system has been deployed to combat zones. “One of the reasons we know this works is that we sent this downrange to Afghanistan and Iraq and had great effects. Obviously, I can’t say much about it, but two thumbs up from the customers with whom we collaborated and supported,” Root reports.


Root describes the BEAM technology as very mature and says it could be adopted by the military services or other departments or agencies, such as border patrol units. While there is currently no planned transition path for the Lockheed Martin or CACI technologies, he is discussing both with multiple parties. The Lockheed Martin system will not be used in combat any time soon, Root indicates. Instead, the system will be used to experiment with AI. “It allows us to do some incredible experiments, and it really helps us understand what we need. It lets us collect a lot of data, and in the world of AI, data is king,” Root says. “The nation that collects the most tactical data has the greatest advantage, and the Lockheed solution undoubtedly lets us collect more data than any other experimental system that I’ve seen.” Root describes the pending end of the program as bittersweet and notes that others will judge DARPA’s work. “I don’t get to decide if we did enough. Some future Marine in harm’s way will decide if we did enough.”



About Rajesh Uppal
