Recent technological advancements have enabled the realization of swarm systems that can include large numbers of robots. Using local communication and distributed coordination, these robots can achieve complex global behaviors that can be utilized in a wide range of applications.
However, fully autonomous swarms that operate free of human supervision remain hard to realize due to technological impediments. Although artificial intelligence already surpasses human performance in a number of narrow applications, it is not expected to match general human intelligence in the near future; the success of fully autonomous swarm operations in dynamic and complex environments, in the absence of human oversight, remains a vision for a reasonably distant future.
Therefore, the use of a human-in-the-loop model is still an important bridge to ensure the safety of operations, especially for critical and sensitive applications in medicine and the military. Human-swarm systems will remain the most feasible path, at least for the foreseeable future, for adopting swarm systems in real environments.
In human-swarm interaction (HSI), humans and swarms need to act as a team to optimize common mission objectives. The human and the swarm are assigned complementary roles with the aim of combining their skills efficiently and in a manner that achieves mission goals.
To realize effective interaction, humans and the swarm need to coordinate their actions throughout the mission to maintain acceptable levels of workload while ensuring that the tasks are performed effectively. This coordination can be assigned to the human or to a coordinating agent.
Adaptive autonomy has been attracting increasing interest in the literature of human-automation interaction (HAI) and human-robot interaction (HRI) as a flexible autonomy scheme that acknowledges the dynamic and uncertain nature of the interaction.
Adaptive Autonomy
A framework for adaptive autonomy in HSI brings together human and swarm agents to optimize the performance of the overall system. The framework aims to achieve seamless, adaptive interaction between the human and the swarm in pursuit of mission objectives. Its strength lies in its ability to reconcile conflicting requirements within the interaction (e.g., making the best use of the automation while ensuring that the human does not lose situational awareness or disengage).
In adaptive autonomy, the functions required to achieve a mission are identified in advance. For example, if the mission is to drive a vehicle from its current location to a goal, the functions to achieve this mission could include the following: 1) an environment monitoring function, 2) a current car-state estimation function, 3) a hazard detection function, 4) a route planning function, 5) a vehicle dynamics function, and 6) a vehicle steering function.
An artificial intelligence (AI) agent is responsible for the adaptive control that dynamically allocates these functions to the human and the autonomous vehicle(s) based on the current requirements of the task and the states and capabilities of its potential performers (humans and machines). Adaptive autonomy has demonstrated its ability to enhance the performance of both the human-machine interaction and the overall mission.
The function of adaptive autonomy can be described by two questions: when and how. The when question is concerned with evaluating the current state of the overall system-of-systems to determine whether an adaptation is needed. The how question is concerned with generating new task assignments and corresponding user interface changes. This requirement raises several challenges: how to dynamically adjust the level of autonomy of the different players, how to strengthen mutual trust, and which mechanisms are needed to facilitate the players' situational awareness (SA). The adaptive AI agent needs to form its own contextual awareness in order to decide when adaptation is needed. Such contextual awareness requires continuous assessment of the states of the different components in the overall system.
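The when/how loop described above can be sketched in a few lines of code. This is an illustrative toy for the driving example only: the function names, the workload threshold, and the greedy scoring rule are all assumptions, not any published allocation algorithm.

```python
# Toy sketch of an adaptive-autonomy allocation loop for the driving
# example above. All names and thresholds are hypothetical.
from dataclasses import dataclass, field

# The six mission functions identified in advance.
FUNCTIONS = [
    "environment_monitoring",
    "car_state_estimation",
    "hazard_detection",
    "route_planning",
    "vehicle_dynamics",
    "vehicle_steering",
]

@dataclass
class AgentState:
    workload: float                      # 0.0 (idle) .. 1.0 (saturated)
    capability: dict = field(default_factory=dict)  # function -> score in [0, 1]

def adaptation_needed(human: AgentState, machine: AgentState,
                      workload_limit: float = 0.8) -> bool:
    """The 'when' question: trigger adaptation if either performer is overloaded."""
    return human.workload > workload_limit or machine.workload > workload_limit

def allocate(human: AgentState, machine: AgentState) -> dict:
    """The 'how' question: greedily assign each function to the more capable
    performer, discounting whoever is already heavily loaded."""
    assignment = {}
    for f in FUNCTIONS:
        human_score = human.capability.get(f, 0.0) * (1 - human.workload)
        machine_score = machine.capability.get(f, 0.0) * (1 - machine.workload)
        assignment[f] = "human" if human_score > machine_score else "machine"
    return assignment

# An overloaded human and a lightly loaded vehicle that is good at
# steering and route planning.
human = AgentState(workload=0.9, capability={f: 0.7 for f in FUNCTIONS})
machine = AgentState(workload=0.3,
                     capability={"vehicle_steering": 0.9, "route_planning": 0.8})

if adaptation_needed(human, machine):
    print(allocate(human, machine))
```

A real agent would run this check continuously, folding in the contextual awareness discussed above (sensor health, operator fatigue, task urgency) rather than two scalar workload numbers.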
One person controls a swarm of 130 robots
In total, the swarm operator directed 100 vehicles in the physical world, as well as 30 simulated drones operating in a virtual environment. These 30 virtual drones were integrated into the swarm's planning and appeared indistinguishable from the physical drones, both to the human operator and to the rest of the swarm. As apparitions of pure code, tracked by the swarm AI, the virtual drones flew in formation with the physical drones and maneuvered as though they really existed in physical space.
The swarm, including uncrewed planes, quadcopters, and ground vehicles, scouted the mock buildings of the Cassidy Range Complex, creating and sharing information visible not just to the human operator but to other people on the same network. The exercise was part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program.
For the person directing the swarm, the entire array of robots appeared as a virtual reality strategy game, mapped onto the real world. With the headset on, the operator could assign the swarm missions and see what the swarm had already scouted, but they were not giving direct orders to individual drones. The swarm AI, receiving orders and processing sensor information, was the intermediary between human control and the movement of robots in physical space.
“The operator of our swarm really was interacting with things as a collective, not as individuals,” says Shane Clark, of Raytheon BBN, who was the company’s main lead for OFFSET. “We had done the work to establish the sort of baseline levels of autonomy to really support those many-to-one interactions in a natural way.”
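The many-to-one interaction pattern described above can be illustrated with a short sketch: the operator issues one collective mission, and an intermediary planner expands it into per-robot taskings. This is a hypothetical illustration only; the round-robin planner, and all building and robot names, are invented and bear no relation to the actual OFFSET software.

```python
# Hypothetical many-to-one tasking: the operator commands the collective;
# the intermediary planner decides which robot scouts which building.
from itertools import cycle

def plan_scout_mission(buildings, robots):
    """Round-robin assignment of scout targets to robots. The operator
    never addresses an individual robot."""
    tasking = {r: [] for r in robots}
    robot_cycle = cycle(robots)
    for b in buildings:
        tasking[next(robot_cycle)].append(b)
    return tasking

# Operator-level command: "scout these five buildings."
tasking = plan_scout_mission(
    buildings=["bldg_A", "bldg_B", "bldg_C", "bldg_D", "bldg_E"],
    robots=["uav_1", "uav_2", "ugv_1"],
)
print(tasking)
```

The point of the abstraction is that the operator's interface scales with the number of missions, not the number of robots, which is what makes one-to-130 control tractable.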
British Army tests MESH breakthrough technology to control drones
Latvian unmanned aircraft system designer and manufacturer Atlas Aerospace announced in September 2022 that the British Army had tested working with drones in its MESH system for the first time, calling MESH a breakthrough in technology and flying standards. MESH allows one operator to control several drones at once from a single remote control. In the tested package, one operator controlled four drones from a tablet, issuing individual manual taskings to each.
LCol Kai Webb (Rifles) from ITDU, who operated the swarm, said: “This type of technology will be a massive help when rolled out to units.” Two scenarios were tested: providing 24-hour surveillance around a specific location, and using artificial intelligence to communicate with the systems to plan overwatch.
Dominic Ferrett, a lead UAS (Unmanned Aerial Systems) engineer with the UK’s Defence Equipment and Support’s Future Capability Group, said swarms would mean reduced operator burden with ground and air elements also set to be incorporated.
Arthur Dawe (SG), commanding officer of the Infantry Trials and Development Unit (ITDU), said: “This added scale and complexity, with each drone able to carry out a separate task. This is a real amplifier, adding capacity, force protection, intelligence, surveillance and reconnaissance capabilities. The intent going forward is to add a precision strike capability. This will not only assist in our targeting but in our strike capability, making us more lethal at range which will protect our very valuable forces and people”.
ATLAS is currently testing the use of 50 drones in MESH. Ivan Tolchinsky, CEO of ATLAS, says: “The next step is creating fully autonomous artificial intelligence systems. You won’t need to manage the system, but rather just create the mission for MESH.”
References and Resources also include:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8891141/
https://defence-blog.com/british-army-tests-mesh-breakthrough-technology-to-control-drones/
https://www.popsci.com/technology/drone-swarm-control-virtual-reality/