A robot swarm typically consists of small, simple, largely indistinguishable robots, each equipped with sensors such as cameras, radar, or sonar so it can gather information about its surrounding environment. When each robot collects data and shares it with the others in the group, the individual machines can function as a single cohesive group. A robot swarm can combine the knowledge and insights of millions of independent, self-sustaining agents (sometimes called "boids") and converge on a single, unified decision.
In swarm robotics, multiple robots collectively solve problems by forming advantageous structures and behaviors similar to those observed in natural systems such as swarms of bees, flocks of birds, or schools of fish. Micro-drones, for example, have already demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing.
Ants, for example, can only perform a limited range of functions, but an ant colony can build bridges, create superhighways of food and information, wage war, and enslave other ant species, all of which are beyond the comprehension of any single ant. Likewise, schools of fish, flocks of birds, beehives, and other animal groups exhibit behavior that suggests planning by a higher intelligence that doesn't actually exist. This happens through a process called stigmergy. Simply put, a small change by a group member causes other members to behave differently, leading to a new pattern of behavior.
When an ant finds a food source, it marks the path with pheromones. This attracts other ants to that path, leads them to the food source, and prompts them to mark the same path with more pheromones. Over time the most efficient route becomes a superhighway: the faster and easier a path is, the more ants complete the trip and the more pheromone accumulates along it. It thus looks as if a more intelligent being chose the best path, when in fact the route emerged from tiny, simple changes made by individuals. A swarm of robots works on the same principles as an ant colony: each member follows a simple set of rules, leading to self-organization and self-sufficiency.
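The emergence of such a "superhighway" can be shown with a toy simulation. The Python sketch below is illustrative only; the path lengths, evaporation rate, and deposit amount are assumed values chosen for the example. Simulated ants pick between a short and a long path with probability proportional to pheromone level, and because the short path supports more round trips per unit time, it accumulates pheromone faster and ends up carrying most of the traffic.

```python
import random

# Minimal stigmergy sketch (hypothetical parameters): two candidate paths to a
# food source; the shorter one is traversed faster, so it accumulates pheromone
# more quickly and ends up attracting most ants.
PATHS = {"short": 1.0, "long": 2.0}    # relative traversal times
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.02                      # fraction of pheromone lost per step
DEPOSIT = 1.0                           # pheromone laid per completed trip

def choose_path():
    """Each ant picks a path with probability proportional to its pheromone."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    return "short" if r < pheromone["short"] else "long"

for step in range(1000):
    # Evaporation keeps old, unused trails from dominating forever.
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)
    # One ant departs per step; shorter paths yield more trips per unit time.
    path = choose_path()
    pheromone[path] += DEPOSIT / PATHS[path]

print(pheromone)  # the short path ends up holding most of the pheromone
```

Running the sketch shows the positive-feedback loop described above: a small initial advantage for the shorter path compounds until nearly all ants follow it.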
Military Swarms
Current drones like the MQ-9 Reaper are controlled remotely, with a pilot flying the aircraft and a payload operator aiming and launching missiles. A battery of other personnel, including military lawyers and image analysts, looks over their shoulders and debates what is or is not a valid target. Future drones may have more autonomy, flying and fighting with much less human supervision, particularly when many of them operate together as a swarm.
Drone swarms are multiple unmanned platforms and/or weapons deployed to accomplish a shared objective, with the platforms and/or weapons autonomously altering their behavior based on communication with one another. Drone swarms offer significant improvements to both nuclear offense (the ability to successfully deliver a warhead to a target) and defense (the ability to prevent successful delivery and mitigate consequences). They are being employed for tasks such as surveillance of enemy territory, search-and-rescue missions, and attacks on hostile targets.
Because the components of a swarm can communicate with one another, they can collaborate and achieve a greater objective than the same drones could individually. Communication allows the swarm to adjust behavior in response to real-time information. Drones equipped with cameras and other environmental sensors ("sensor drones") can identify potential targets, environmental hazards, or defenses and relay that information to the rest of the swarm. The swarm may then maneuver to avoid a hazard or defense, or allow the weapon-equipped drone (an "attack drone") to strike the most vulnerable target or defense. The rollout of private 5G networks is also helping robots collaborate, adding to the rapid growth of robot swarms.
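As a rough illustration of this relay pattern, the Python sketch below shows sensor members broadcasting observations that the rest of the swarm then uses to pick the most vulnerable target. All class names, roles, and data structures here are hypothetical, invented purely for the example.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the Observation/Drone classes and the broadcast
# scheme are invented here to show how shared observations can change the
# behavior of every member of a swarm.

@dataclass
class Observation:
    kind: str            # e.g. "hazard" or "target"
    position: tuple      # (x, y)
    vulnerability: float = 0.0

@dataclass
class Drone:
    name: str
    role: str                              # "sensor" or "attack"
    known: list = field(default_factory=list)

    def receive(self, obs: Observation):
        self.known.append(obs)

    def pick_target(self):
        targets = [o for o in self.known if o.kind == "target"]
        return max(targets, key=lambda o: o.vulnerability, default=None)

def broadcast(swarm, obs):
    """A sensor drone relays an observation to every other member."""
    for d in swarm:
        d.receive(obs)

swarm = [Drone("s1", "sensor"), Drone("a1", "attack"), Drone("a2", "attack")]
broadcast(swarm, Observation("hazard", (3, 4)))
broadcast(swarm, Observation("target", (7, 2), vulnerability=0.9))
broadcast(swarm, Observation("target", (9, 5), vulnerability=0.4))

for d in swarm:
    if d.role == "attack":
        print(d.name, "->", d.pick_target())
```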
Militaries are developing swarms of robots and drones that could carry out complex missions more cheaply and efficiently. Some experts estimate that at least 30 nations are actively developing drone swarms, including submersible drones, for military missions such as intelligence gathering, missile defense, precision missile strikes, and enhanced communication. A swarm robotics system consists of autonomous robots with local sensing and communication capabilities, lacking centralized control or access to global information, situated in a possibly unknown environment and performing a collective action. These characteristics lead to the main advantages of swarms: adaptability, robustness, and scalability.
One of the biggest advantages a swarm of drones has in military operations is its resiliency. If a swarm enters combat and several individual drones are shot down or otherwise incapacitated, the combat effectiveness of the swarm, and the tactics it can use, are barely reduced. A swarm of 550 drones is almost as powerful and flexible as a swarm of 600, even though the former has "lost" nearly 10% of its initial strength.
The U.S. military is testing out swarm operations in simulations, while the British Army is using live drones operating in swarms during actual training operations. Other militaries are also interested in deploying swarms.
Swarm Intelligence
Swarm intelligence is a branch of artificial intelligence that attempts to get computers and robots to mimic the highly efficient behavior of colony insects such as ants and bees. As a group, simple creatures following simple rules can display a surprising amount of complexity, efficiency, and even creativity. Across countless species, nature shows us that social creatures, when working together as unified systems, can outperform the vast majority of individual members when solving problems and making decisions.
Swarm intelligence in the robotics domain has wide-ranging applications and benefits. The primary benefits of swarm intelligence include:
• Flexibility: The swarm system responds to internal disruptions and external challenges.
• Robustness: Tasks are completed even if some of the agents fail.
• Self-organizing: Roles are not predefined — they emerge.
• Adaptation: The swarm can adapt to predetermined and new stimuli.
• Decentralized: There is no central control, allowing for rapid, local collaboration.
One main challenge in artificial swarming is the design of systems that, while maintaining decentralized control, have agents capable of (i) acquiring local information through sensing, (ii) communicating with at least some subset of agents, and (iii) making decisions based on the dynamically gathered sensor data.
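To make this concrete, the sketch below shows one common way such decentralized agents can be structured, using boids-style flocking rules. The sensing radii and gains are assumed values chosen only for illustration. Each agent senses only nearby neighbors, uses their positions and velocities as its "communicated" information, and decides its own velocity update, yet the group as a whole stays cohesive and aligned.

```python
import math, random

# Minimal decentralized-control sketch (boids-style rules; radii and gains are
# assumed). Each agent only (i) senses neighbors within a local radius,
# (ii) observes their positions/velocities, and (iii) decides its own update.
NEIGHBOR_RADIUS = 5.0
SEPARATION_RADIUS = 1.0
COHESION_GAIN, ALIGN_GAIN, SEPARATE_GAIN = 0.01, 0.05, 0.1

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def neighbors(self, swarm):
        return [a for a in swarm if a is not self
                and math.hypot(a.x - self.x, a.y - self.y) < NEIGHBOR_RADIUS]

    def decide(self, swarm):
        near = self.neighbors(swarm)
        if not near:
            return self.vx, self.vy
        # Cohesion: steer toward the local center of mass.
        cx = sum(a.x for a in near) / len(near) - self.x
        cy = sum(a.y for a in near) / len(near) - self.y
        # Alignment: match the average heading of neighbors.
        avx = sum(a.vx for a in near) / len(near) - self.vx
        avy = sum(a.vy for a in near) / len(near) - self.vy
        # Separation: move away from neighbors that are too close.
        sx = sum(self.x - a.x for a in near
                 if math.hypot(a.x - self.x, a.y - self.y) < SEPARATION_RADIUS)
        sy = sum(self.y - a.y for a in near
                 if math.hypot(a.x - self.x, a.y - self.y) < SEPARATION_RADIUS)
        return (self.vx + COHESION_GAIN * cx + ALIGN_GAIN * avx + SEPARATE_GAIN * sx,
                self.vy + COHESION_GAIN * cy + ALIGN_GAIN * avy + SEPARATE_GAIN * sy)

swarm = [Agent() for _ in range(30)]
for _ in range(100):
    decisions = [a.decide(swarm) for a in swarm]   # decisions use local info only
    for a, (vx, vy) in zip(swarm, decisions):
        a.vx, a.vy = vx, vy
        a.x += a.vx
        a.y += a.vy
```

No agent in this sketch knows the global state; the flock-like behavior is an emergent result of the three local rules.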
Insect Behavior, Miniature Blimps May Unlock the Key to Military Swarming Technology
Researchers at the U.S. Naval Research Laboratory flew a fleet of 30 miniature autonomous blimps in unison to test the swarming behavior of autonomous systems. Don Sofge, lead for the distributed autonomous systems group at NRL, and his team are working to advance research on autonomous super swarms. Their goal is to fly more than 100 controlled miniature blimps this year (2019).
He likens individual autonomous agents to ants in a colony. Ants perform actions often equated with the functions of a society, but they do not have a central control. The possibility of replicating individual behaviors in autonomous systems is of great interest to researchers.
“We are using these as platforms to demonstrate swarm behaviors,” Sofge said. “Behaviors are programmed into each agent individually. The idea is that each agent is making its own decisions, sensing the world around it so that the action of the group results in some desirable emergent behavior.”
“In order to get the swarm to do something useful, you have to think about how to program the individual,” he said. “What behaviors or algorithms are running on the individual agent? In nature, most colony or swarm systems have no centralized control. Each individual is basically interacting with its environment, but collectively they are able to do very interesting and useful things.”
“If you are working with a traditional centralized control architecture you have to deal with the challenges of communicating with 10,000 agents individually,” Sofge said. “You can’t assume everyone knows where everyone else is because they are only interacting locally based on what they sense and the decisions they are making and the actions they are taking locally.”
The NRL research team is also working to establish a seamless networking architecture. They are leveraging existing network architectures and protocols for large numbers of objects working together. Each object in a swarm is dynamic and its location is never fixed. The object may move in and out of the network, which makes overlaying a network architecture extremely difficult.
Autonomous objects in a swarm must deal with a challenge common in military environments: communication. The U.S. Department of Defense operates all over the world, from the frigid Arctic to hot tropical forests. Staying in communication with an agent despite inhospitable environments and potential enemy jamming is something Sofge and his team must keep in mind as they develop swarming technology.
The study of swarming behavior at NRL began in the 1990s and was founded on the concept of physicomimetics, a physics-based approach that models the behavior of charged particles interacting with one another. Later swarming approaches developed at NRL drew inspiration from biological swarms such as bees, ants, and birds.
“In physicomimetics, you define objects as being particle types and create force laws to describe action between those particle types,” Sofge said. “By choosing your particle type and force laws appropriately you could get swarms of agents to do interesting things, like move in formation and flow around objects.”
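The idea in the quote above can be illustrated with a minimal artificial-physics sketch, assuming a simple spring-like force law; the spacing, gain, and damping constants are invented for this example and are not NRL's parameters. Agents treated as particles under such a law repel when too close, attract when too far, and settle into an evenly spaced formation.

```python
import math, random

# Physicomimetics-style sketch: every agent is a particle, and a simple force
# law (attractive beyond a desired spacing, repulsive inside it) governs
# pairwise interactions. All constants are assumed for illustration.
DESIRED_SPACING = 2.0
GAIN = 0.05
DAMPING = 0.8

def pairwise_force(dx, dy):
    """Spring-like force law: push apart when too close, pull together when far."""
    dist = math.hypot(dx, dy) or 1e-9
    magnitude = GAIN * (dist - DESIRED_SPACING)   # sign selects attract vs. repel
    return magnitude * dx / dist, magnitude * dy / dist

agents = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10),
           "vx": 0.0, "vy": 0.0} for _ in range(12)]

for _ in range(200):
    for a in agents:
        fx = fy = 0.0
        for b in agents:
            if a is b:
                continue
            dfx, dfy = pairwise_force(b["x"] - a["x"], b["y"] - a["y"])
            fx += dfx
            fy += dfy
        a["vx"] = DAMPING * (a["vx"] + fx)
        a["vy"] = DAMPING * (a["vy"] + fy)
    for a in agents:
        a["x"] += a["vx"]
        a["y"] += a["vy"]
```

Changing the force law (for example, making it purely repulsive near obstacles) is what lets such swarms "move in formation and flow around objects," as described above.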
Using bio-inspired concepts such as quorum sensing, an ability that bacteria use to communicate and coordinate via signaling molecules, Sofge’s team demonstrated complex group decision-making using simple agent-based behaviors. NRL’s researchers have advanced physicomimetics and nature-inspired techniques for teams of autonomous systems and plan to continue to develop new algorithms for swarming behaviors. The most recent findings have advanced swarm technology and show potential for making advances in human-machine interfaces.
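As a rough illustration of quorum-style group decision-making (not NRL's actual algorithm), the sketch below has each agent sample a small set of peers and commit to a candidate option only when the locally observed fraction of "signaling" peers crosses a threshold; the threshold, signal probability, and sample size are assumed values.

```python
import random

# Hedged quorum-sensing illustration: agents signal with some probability after
# inspecting a candidate site and commit only once enough sampled peers signal.
QUORUM_THRESHOLD = 0.6
SIGNAL_PROB = 0.7          # chance an agent signals after a "good" inspection
N_AGENTS = 50

signals = [random.random() < SIGNAL_PROB for _ in range(N_AGENTS)]
committed = [False] * N_AGENTS

for i in range(N_AGENTS):
    # Each agent samples only a local subset of peers, not the whole group.
    sample = random.sample(range(N_AGENTS), 10)
    local_fraction = sum(signals[j] for j in sample) / len(sample)
    committed[i] = local_fraction >= QUORUM_THRESHOLD

print(f"{sum(committed)}/{N_AGENTS} agents committed to the candidate site")
```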
Machine Learning Helps Robot Swarms Coordinate
Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces so they do not run into one another. Multi-robot motion coordination is a fundamental robotics problem with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, the presence of larger numbers of robots in an environment makes their interactions increasingly complex (and more prone to collisions).
To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called “Global-to-Local Safe Autonomy Synthesis,” or GLAS, which imitates a complete-information planner with only local information, and “Neural-Swarm,” a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.
“Our work shows some promising results to overcome the safety, robustness, and scalability issues of conventional black-box artificial intelligence (AI) approaches for swarm motion planning with GLAS and close-proximity control for multiple drones using Neural-Swarm,” says Chung. When GLAS and Neural-Swarm are used, a robot does not require a complete and comprehensive picture of the environment that it is moving through, or of the path its fellow robots intend to take. Instead, robots learn how to navigate through a space on the fly, and incorporate new information as they go into a “learned model” for movement. Since each robot in a swarm only requires information about its local surroundings, decentralized computation can be done; in essence, each robot “thinks” for itself, which makes it easier to scale up the size of the swarm.
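The sketch below is a conceptual illustration of this "local information only" idea, not the published GLAS or Neural-Swarm code: each robot builds an observation from its own goal direction and the relative positions of nearby robots, then maps that observation to a velocity command. A hand-written goal-plus-repulsion rule stands in here for the learned model described above.

```python
import math

# Conceptual sketch of decentralized inference (NOT the published GLAS code).
# Each robot maps a purely local observation to its own velocity command.
SENSING_RADIUS = 3.0

def local_observation(me, goal, others):
    rel_goal = (goal[0] - me[0], goal[1] - me[1])
    rel_neighbors = [(o[0] - me[0], o[1] - me[1]) for o in others
                     if 0 < math.hypot(o[0] - me[0], o[1] - me[1]) < SENSING_RADIUS]
    return rel_goal, rel_neighbors

def policy(observation):
    """Placeholder for a learned model: head to the goal, repelled by neighbors."""
    (gx, gy), neighbors = observation
    vx, vy = gx, gy
    for nx, ny in neighbors:
        d2 = nx * nx + ny * ny
        vx -= nx / d2
        vy -= ny / d2
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm            # unit-speed velocity command

positions = [(0.0, 0.0), (1.0, 0.5), (4.0, 4.0)]
goals = [(5.0, 5.0), (0.0, 5.0), (0.0, 0.0)]
commands = [policy(local_observation(p, g, positions))
            for p, g in zip(positions, goals)]
print(commands)
```

Because each command depends only on what that robot can sense locally, the same computation can run independently on every robot, which is what makes the approach scale to larger swarms.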
“These projects demonstrate the potential of integrating modern machine-learning methods into multi-agent planning and control, and also reveal exciting new directions for machine-learning research,” says Yue. To test their new systems, Chung’s and Yue’s teams implemented GLAS and Neural-Swarm on quadcopter swarms of up to 16 drones and flew them in the open-air drone arena at Caltech’s Center for Autonomous Systems and Technologies (CAST). The teams found that GLAS could outperform the current state-of-the-art multi-robot motion-planning algorithm by 20 percent in a wide range of cases. Meanwhile, Neural-Swarm significantly outperformed a commercial controller that cannot consider aerodynamic interactions; tracking errors, a key metric in how the drones orient themselves and track desired positions in three-dimensional space, were up to four times smaller when the new controller was used.
Swarm Intelligence Market
References and resources also include:
https://phys.org/news/2019-05-insect-behavior-miniature-blimps-key.html
https://www.eurekalert.org/pub_releases/2019-10/njio-nbe103119.php
https://singularityhub.com/2018/02/08/how-swarm-intelligence-is-making-simple-tech-much-smarter/