Today’s military forces face evolving national threats, economic constraints, and a changing operational environment: a complex, multifaceted, and uncertain security landscape spanning a range of political, military, state, and non-state actors. Advancing and sustaining the U.S. Department of Defense’s (DoD) operational readiness against these threats requires a portfolio of training capabilities that supports a learning continuum from individual and staff training to collective training. This portfolio must create a training environment that prepares the total force to accomplish a diverse and complex set of missions demanding an ever-changing combination of military engagement, security cooperation, and deterrence competencies.
The Synthetic Training Environment will resolve shortcomings of the Multiple Integrated Laser Engagement System (MILES) that has been used to support direct-fire force-on-force training since 1980. “MILES was a device used for training outside, but if I hide behind a bush, you can’t shoot through the bush, which is not realistic,” says Meggitt’s Shavers. “Or you had to use blanks or simulation, but now they want to know how to have more weapons and hiding behind something isn’t a block.”
The Army today cannot simulate realistic multi-domain operations training from soldier through brigade combat team in live training environments at home station, Maneuver Combat Training Centers (MCTCs), or deployed locations; those live environments will be integrated into the Synthetic Training Environment. MILES cannot replicate the ballistic trajectory of munitions, simulate a munition’s effect on impact, or engage targets using indirect fire, such as artillery or mortars. As a result, only half of the small arms and munitions assigned to a light infantry platoon can be represented accurately in live force-on-force training. The same is true for 40 percent of brigade combat team weapons effects. “We’ve been dealing with deployable systems for some time, but with improved computational power and new COTS technologies, we can provide high fidelity software in a much smaller footprint and reduced cost,” says Lenny Genna, president of the military training sector at L-3 Harris Technologies in Arlington, Texas. “The technology provides the capability to do a lot, but in some cases, you want tactile feel that isn’t fully there yet.”
The last decade has brought about significant changes in the way personnel are trained, thanks to the development of simulated or synthetic training capabilities. Virtual reality (VR) and augmented reality (AR) – layering visuals over a real environment view – are exciting advances shaping the way military service men and women, from all of the armed forces, are trained. “Training using mobile apps, VR, and AR technology will eventually replace regular training techniques,” says Reddick, reflecting on how this type of training is changing the way we learn. “In VR and AR, you can accomplish any task with no risk to your body. We’re able to recreate almost any task or mission that is asked of us in the virtual world and provide multiple outcomes, feedback, and analytics for each and every motion the user does.”
The Army’s future training capability is the Synthetic Training Environment (STE). The STE will be a single, interconnected training system that provides a Common Synthetic Environment in which units from squad through Army Service Component Command (ASCC) train in the most appropriate domain (live, virtual, constructive, or gaming) or in all four simultaneously. This training capability will enable Army units and leaders to conduct realistic multi-echelon, multi-domain combined arms maneuver and mission command training, increasing proficiency through repetition. Units can then master collective training tasks in the live environment. According to the US Army, the STE is designed to provide a cognitive, collective, multi-echelon training and mission rehearsal capability for the operational, institutional, and self-development training domains. The environment will “keep pace with and adapt to the rapid development of technologies” as part of the Army’s ‘Big Six’ modernisation priorities and builds on the US Department of Defense’s annual spend of $14bn in this field.
“As technology progresses, we are noticing an increase in realism and interaction formats that allow users to experience training almost identically to how they would experience it in real life, with the advantage of being able to stop, pause and reset the training experience,” Reddick says, adding that the possibility to simulate and reset on a constant basis, for a nearly unlimited user base and at low cost, is where much of the appetite for VR and AR comes from across both the private and public sectors. “The ability to interact with a training workflow, from start to finish, with the ability to reset that workflow instantly for the next user, or have a group of people play out the experience at the same time and track all their results, is where the value of computer training comes in,” he adds. “We can simulate any military experience and, with our analytics and unique eye tracking technology, even determine how it is affecting the users mentally and physically.”
The Common Synthetic Environment, targeted for initial operational capability by September 2021 and full operational capability by September 2023, will provide the software, applications, and services necessary to enable and support next generation systems, including the Reconfigurable Virtual Collective Trainer, Soldier/Squad Virtual Trainer, and Live Training Environment.
The Synthetic Training Environment is being designed to simulate not only weapons effects at all ranges, but also the feel of each weapon’s discharge, enabling warfighters to have confidence in their training and mission rehearsals on deployment, before entering combat. Combining live environment training with the Synthetic Training Environment ecosystem enables users to measure training goals against actual performance. “That’s a huge part of being able to collect that information and provide that information back to the soldier, not only objectively but also with their trainers so they have the objective and the subjective information together,” says Kevin Hellman, capabilities developer for the Synthetic Training Environment at the Army Combined Arms Center – Training (CAC-T) at Fort Leavenworth, Kan.
Common Synthetic Environment (CSE)
In March 2019, the U.S. Army released the service’s Common Synthetic Environment (CSE) statement of need, which outlined the Synthetic Training Environment (STE) the Army sees as its future training capability. The STE enables tough, iterative, dynamic, and realistic multi-echelon/combined arms maneuver, mission rehearsal, and mission command collective training in support of multi-domain operations, the statement reads. The training environment will provide units the repetitions necessary to accelerate individual through unit skill and collective task proficiency, achieving and sustaining training readiness. It provides complex operational environment representations anytime and anywhere in the world. The STE will deliver collective training, accessible at the Point-of-Need (PoN), in the operational, self-development, and institutional training domains.
The focus is on one interconnected training capability that provides a Common Synthetic Environment delivering a comprehensive, collective training and mission rehearsal capability, the statement continues. The Common Synthetic Environment is composed of three foundational capabilities: One World Terrain (OWT), the Training Management Tool (TMT), and Training Simulation Software (TSS). The Common Synthetic Environment enables the convergence of the live, virtual, constructive, and gaming environments into the Synthetic Training Environment.
The Common Synthetic Environment (CSE) is the unified simulation environment Units and Soldiers use for training. The CSE provides Soldiers and Units a realistic (e.g., physics-based effects) digital representation of the dynamic Operational Environment (OE) and the military capabilities in the scenario, to support collective training from Squad through ASCC. Within the CSE, there are two conceptually different ways in which units in a virtual environment will need to interact with the STE:
Virtual Semi-Immersive User Interface and Hardware
Virtual Semi-Immersive interfaces are common ‘keyboard and mouse’ interfaces into a virtual three-dimensional (3D) representation of a training environment. While commonly referred to as ‘keyboard and mouse’, this category may include additional peripherals, such as controllers and joysticks, to enhance training, but it is typically not intended to provide a full ‘form, fit and function’ representation of training conditions. This form of low-overhead reconfigurable training enables the crew/team through Brigade Combat Team to interact with the Common Synthetic Environment (CSE) and a digital representation of the Mission Command Information System (MCIS) interfaces and platforms for all Warfighting Functions (WfF), along with a dismounted Soldier capability.
The CSE is the unified simulation environment in which the training takes place. The interface will stimulate the sight, sound, and touch modalities. Sight allows Soldiers to see the CSE (both two-dimensional [2D] overhead and 3D first-/third-person views), sound allows the Soldier to hear and provide voice input into the CSE, and touch allows the Soldier to interact with the CSE. The quality of stimulation is a low-fidelity approximation of what the Soldier experiences in the live environment.
Virtual Immersive User Interface and Hardware
Virtual Immersive trainers will seek a higher level of ‘form, fit, and function’ for the training audience than the semi-immersive systems. These interfaces into the CSE replace the immersive Combined Arms Tactical Trainers (CATT) found in the Army inventory. However, unlike the large overhead of current CATT trainers, the STE will need low overhead, reconfigurable, and transportable trainers to facilitate training anytime, anywhere. To accomplish this, the STE will require the use of innovative Mixed Reality and Natural User Interface technologies to deliver the following capabilities:
Capitalization on rapid advancements in commercial mixed-reality technologies
Low sustainment and concurrency costs
Scalable interfaces to support training, without disruption, at the PoN
Rapid concurrency updates driven through software rather than hardware changes
Immersive collective training experiences that support suspension of trainee disbelief
Accurate visual and haptic system representation (e.g., sensors, weapons, survivability capabilities, communications) to prevent negative training transfer or habit formation
Natural fields of view
The breadth of tactical trainers supporting Ground and Air Simulation
Ground: This reconfigurable and transportable trainer enables ground platform crew/team through Battalion Task Force to interact with the CSE and a digital representation of the MCIS interfaces and platforms for all WfFs. The immersive trainer provides a motion tracking capability and select high-fidelity physical platform controls for crew members. The interface will stimulate sight, sound and touch modalities. Sight provides the Soldiers a natural field-of-view and allows the Soldiers to see the CSE from first person perspectives, sound allows the Soldier to hear and provide voice input into the CSE, and touch allows the Soldier to use physical and tactile controls of systems, subsystems, components, and mission command information system interfaces to interact with the CSE.
Key considerations for ground immersive training include:
Vehicle Commander: Weapon system control and sensor controls.
Driver capabilities: Steer vehicle, change gear (e.g., forward, reverse), accelerate vehicle, brake vehicle, and control/view dashboard.
Gunner (combat vehicle): Weapon system control and sensor controls.
Loader: Loader’s periscope, loading main weapons systems, loader’s weapons systems, radios.
Gunner/Air Guard (wheeled vehicle): Grip, aim, fire, and reload weapon.
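The crew-station requirements above can be pictured as a simple configuration check: each station maps to the set of controls an immersive trainer must provide. The sketch below is illustrative only; the station and control names are paraphrased from the list above, and `missing_controls` is a hypothetical helper, not part of any STE specification.

```python
# Illustrative mapping of ground-trainer crew stations to the control
# capabilities listed above. Names are paraphrased; this is a sketch,
# not an STE specification.
GROUND_CREW_CONTROLS = {
    "vehicle_commander": {"weapon_system_control", "sensor_controls"},
    "driver": {"steer", "change_gear", "accelerate", "brake", "dashboard"},
    "gunner_combat_vehicle": {"weapon_system_control", "sensor_controls"},
    "loader": {"periscope", "load_main_weapon", "loader_weapons", "radios"},
    "gunner_air_guard_wheeled": {"grip", "aim", "fire", "reload"},
}

def missing_controls(station: str, provided: set) -> set:
    """Return the required controls a trainer configuration does not provide."""
    return GROUND_CREW_CONTROLS[station] - provided

# Example: a trainer mock-up that implements only steering and braking
# still lacks three of the driver capabilities.
gaps = missing_controls("driver", {"steer", "brake"})
```

A check like this would let a unit verify, before an exercise, that a reconfigurable trainer actually covers every station it claims to support.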
Air: This reconfigurable and transportable trainer enables aviation crew/team through Battalion Task Force to interact with a CSE, and a digital representation of the MCIS interfaces and platforms for all WfF. The immersive trainer provides a motion tracking capability and select high-fidelity physical platform controls for pilot, co-pilot, and non-rated crew members. The interface will stimulate sight, sound and touch modalities. An accurate representation of crew sensory inputs and feedback are critical. The relatively increased danger from crew error in aviation platforms necessitates an expectation of higher fidelity in Air immersive trainers.
Flight, weapon controls, and non-crewmember controls must provide highly accurate tactile control and switch options relative to the aircraft’s digital operational flight program (OFP) capabilities and be in the correct location relative to where the crew member is standing or sitting (e.g., collective is always on the left side, cyclic between the legs), to prevent negative training and habit transfer.
Pilot/Co-Pilot capabilities include dual flight controls to allow the pilot or co-pilot/gunner to fly the aircraft safely (cyclic, collective, pedals). They also include unique weapon systems interfaces (e.g., the Target Acquisition and Designation Sight (TADS) Electronic Display and Control (TEDAC) for the Attack Helicopter [AH]); the TEDAC for the AH-64 is only for the co-pilot/gunner position. Non-Rated Crewmember capabilities include unique weapon interfaces (e.g., door gun) for the Utility Helicopter (UH) and Cargo Helicopter (CH); unique Intercommunications System (ICS) switch and handheld push-to-talk capability (UH, CH); and unique hoist controls (UH, CH). Unique cargo hook view space (CH) and hoist operations must provide a minimum level of tactile and visual feedback to ensure awareness of proper operations.
Unmanned Aircraft System (UAS) capabilities will include the realistic representation of unmanned systems, to include all kinetic and non-kinetic battlefield effects, as well as the appropriate affordances for user/operator interactions, in order to facilitate collective training.
Global Terrain/One World Terrain (OWT) Capability
The Global Terrain research effort is a demonstration of the global terrain capabilities needed to achieve the STE vision. This concept would ultimately include a cloud-based service that delivers a common synthetic representation of the whole Earth to include the air, land (includes subterranean), sea (includes undersea), space, and cyber domains that units will use for collective training. The STE’s Global Terrain will be delivered over the network to training audiences at home station, while deployed, and at the institution.
Global Terrain Capabilities include:
A digital globe with all terrain available, to include full 2D, 3D, and parametric information on all buildings/structures on the planet, including interiors and subterranean features.
Soldier-level fidelity of terrain available on a global scale.
Training without boundaries that allows seamless integration of physical training areas into global scale wrap around exercises in the virtual and constructive training domains.
Reuse and integration of a variety of data sources: from the reuse of existing training simulation terrain, such as Synthetic Environment – Core (SE-CORE) home station databases; to the importation of the Army’s Standard Shareable Geospatial Foundation (SSGF) and the use of open-source data; to the collection and processing of organic terrain collection data, such as drone-captured photogrammetry.
The ability to export 3D mesh-based terrain to 2D vector- and raster-based terrain systems.
The Global Terrain Capability concept delivers a geographical representation of the entire 3D world in a geo-referenced ellipsoid representation of the Earth. The goal for data fidelity is to provide sub-centimeter resolution and accuracy in terrain, to support full live-synthetic entity interaction in a ‘fair-fight’ environment. OWT will need to provide the best available terrain representation, from geo-typical to geo-specific, based on authoritative data, while making use of innovative approaches in procedural terrain generation and sensor fusion to constantly improve the quality of the available global terrain.
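The mesh-to-raster export mentioned above can be illustrated with a minimal sketch: sample the heights of a 3D terrain mesh’s vertices onto a regular 2D elevation grid. This is a toy nearest-vertex approach for illustration only; a production exporter would interpolate across triangle faces and handle the geodetic projections implied by a geo-referenced ellipsoid.

```python
# Toy sketch: export 3D mesh vertices (x, y, z) to a 2D raster elevation
# grid by per-cell sampling. Illustrative only; real terrain exporters
# interpolate across triangle faces and apply map projections.
def mesh_to_raster(vertices, width, height, cell_size):
    grid = [[None] * width for _ in range(height)]
    for x, y, z in vertices:
        col = int(x // cell_size)
        row = int(y // cell_size)
        if 0 <= row < height and 0 <= col < width:
            # Keep the highest sample per cell (the visible surface).
            if grid[row][col] is None or z > grid[row][col]:
                grid[row][col] = z
    return grid

# Three mesh vertices rasterized onto a 1 x 2 grid of 1-meter cells.
verts = [(0.5, 0.5, 10.0), (1.5, 0.5, 12.0), (0.2, 0.8, 9.0)]
raster = mesh_to_raster(verts, width=2, height=1, cell_size=1.0)
```

The key design point is that the 3D mesh is the authoritative source and the 2D raster is a derived product, so legacy vector- and raster-based systems can consume the same terrain without maintaining a separate database.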
Additionally, training units will need a capability that allows runtime editing of exercise-specific environments to set the conditions needed to meet training objectives. Configuring the operational variables (Political, Military, Economic, Social, Information, Infrastructure, Physical Environment, and Time [PMESII-PT]) that represent the Operational Environment (OE) enables the CSE to represent unique OE complexities, providing enhanced realism without artificial limitations.
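One way to picture these runtime-editable operational variables is as a keyed configuration that an exercise designer adjusts before or during a scenario. The variable names below follow the PMESII-PT framework cited in the text; the `ExerciseEnvironment` class and its method are hypothetical illustrations, not an STE interface.

```python
# Hypothetical sketch of PMESII-PT operational variables as a
# runtime-editable exercise configuration. Class and method names
# are illustrative, not part of any STE specification.
PMESII_PT = (
    "political", "military", "economic", "social",
    "information", "infrastructure", "physical_environment", "time",
)

class ExerciseEnvironment:
    def __init__(self):
        # Every operational variable starts at a neutral baseline.
        self.variables = {name: "baseline" for name in PMESII_PT}

    def set_condition(self, variable, condition):
        """Edit one operational variable at runtime to meet a training objective."""
        if variable not in self.variables:
            raise KeyError(f"{variable!r} is not a PMESII-PT variable")
        self.variables[variable] = condition

# Example: a designer degrades infrastructure and contests the
# information environment for a particular exercise.
env = ExerciseEnvironment()
env.set_condition("infrastructure", "degraded_power_grid")
env.set_condition("information", "contested_spectrum")
```

Treating the OE as editable state, rather than baking it into the terrain or scenario files, is what would let the same exercise be reset and rerun under different conditions at the point of need.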