
Multi-Robot Collaborative Localization

Humanitarian crises occur frequently and typically require immediate rescue intervention. In many cases, scene conditions prevent human rescuers from providing instant aid, since they must act within hazardous settings such as collapsed buildings, toxic atmospheres, or radioactive environments. Many organizations and research teams are therefore developing rescue robots to assist human Search and Rescue (SAR) teams. These mobile robots can be equipped with a variety of sensors, actuators, and embedded processing units, depending on the scenario in which they operate. Because of their special-purpose construction, they can achieve sufficient maneuverability in the terrain they were designed for, and can perform mapping, searching, and reconnaissance by processing the data captured by their sensors. These robots can be either autonomous or remotely operated. In settings such as collapsed buildings or densely obstructed areas, remote operation may be limited or impossible, so the robot must be able to determine its own location and find its way through the unknown environment.


Micro Aerial Vehicles (MAVs) have numerous applications, including Search and Rescue (SAR), aerial inspection, exploration, and conservation activities such as wildlife or crop monitoring. These applications all share one requirement: a robust and reliable navigation system. The standard navigation solution for MAVs uses the Global Positioning System (GPS) as the main localisation source. However, GPS has several limitations in terms of both accuracy and coverage, and there are many applications where it cannot be used at all, for example indoor SAR.


The autonomous navigation problem is typically divided into three main challenges:
• Localisation: Where are the MAVs relative to their environment and each other?
• Mapping: What does the environment look like?
• Navigation: What path must the MAV follow in order to reach a target location? And, given a path, what control commands are required to follow it?


The localisation problem entails determining the pose (position and orientation) of a robot with respect to its environment, based purely on the processing of sensor data. A reliable means of achieving this is to give the robot a model of the environment in the same, or a similar, format as its sensor data. For example, if a robot equipped with a camera is given a model of its environment consisting of visual features, it can solve the localisation problem by comparing the features it sees against those in the model.
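
To make this concrete, the following minimal Python sketch matches camera features against a prebuilt map of 3D feature points and recovers the camera pose with Perspective-n-Point (PnP) using OpenCV. It is a sketch only: the map arrays, camera intrinsics matrix K, and the match threshold are hypothetical stand-ins, not anything specified in this article.

# Minimal sketch of model-based visual localisation: match image features
# against a prebuilt 3D feature map, then recover the camera pose with PnP.
# Hypothetical inputs: map_points (Nx3 float32), map_descriptors (NxD uint8),
# a camera intrinsics matrix K, and a query image; none come from the article.
import cv2
import numpy as np

def localise(image, map_points, map_descriptors, K):
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None  # nothing seen; localisation fails

    # Compare the features the robot sees with those stored in the model.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:
        return None  # too few correspondences for a reliable pose

    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points[m.trainIdx] for m in matches])

    # PnP with RANSAC rejects bad matches and yields the camera pose
    # (rotation and translation) relative to the map frame.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    return (rvec, tvec) if ok else None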


Constructing such a model by hand is not always easy: how does one hand-build a model of the visual features in a room? One way around this is to use an existing means of localisation, so that a model of the environment can be constructed by incrementally fusing observations together. This is referred to as mapping the environment. It requires some existing, usually external, localisation system and a separate mapping phase before the system can be deployed to do anything useful, which is far from ideal for many applications.
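
A minimal sketch of this incremental fusion, assuming an external localisation source supplies the robot pose for each observation: ranged "hits" are fused into a log-odds occupancy grid. The grid size, resolution, and update weights below are illustrative choices, not values from the article.

# Sketch of mapping with known poses: an external localisation system
# supplies the robot pose, and each ranged "hit" is fused into a log-odds
# occupancy grid.
import numpy as np

GRID, RES = 200, 0.05          # 200 x 200 cells, 5 cm per cell
log_odds = np.zeros((GRID, GRID))
L_OCC, L_FREE = 0.85, -0.4     # evidence weights for hit / pass-through cells

def to_cell(x, y):
    return int(x / RES), int(y / RES)

def fuse_observation(robot_xy, hit_xy):
    """Mark the endpoint occupied and the ray leading to it free."""
    rx, ry = to_cell(*robot_xy)
    hx, hy = to_cell(*hit_xy)
    n = max(abs(hx - rx), abs(hy - ry), 1)
    for i in range(n):                      # walk the ray through free space
        cx = rx + (hx - rx) * i // n
        cy = ry + (hy - ry) * i // n
        log_odds[cy, cx] += L_FREE
    log_odds[hy, hx] += L_OCC               # endpoint is obstacle evidence

# Incremental fusion: the pose comes from the external localisation system.
fuse_observation(robot_xy=(1.0, 1.0), hit_xy=(2.5, 1.8))
occupancy = 1.0 - 1.0 / (1.0 + np.exp(log_odds))   # back to probabilities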


Ideally, robots would be able to localise in novel environments, which means simultaneously localising within, and constructing a map of, a previously unknown environment. This is commonly referred to as the Simultaneous Localisation and Mapping (SLAM) problem.
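
The essence of SLAM can be illustrated with a toy 1-D pose graph: odometry constrains consecutive poses, a loop closure constrains distant ones, and least squares finds the set of poses most consistent with both. The numbers below are invented purely for illustration.

# Toy 1-D pose graph: odometry links consecutive poses, one loop closure
# links the ends, and linear least squares reconciles them.
import numpy as np

odometry = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9)]  # (i, j, measured x_j - x_i)
loop     = [(0, 3, 2.8)]       # revisiting the start says the total is ~2.8 m

n = 4
A, b = [[1.0, 0.0, 0.0, 0.0]], [0.0]     # anchor the first pose at the origin
for i, j, d in odometry + loop:
    row = [0.0] * n
    row[i], row[j] = -1.0, 1.0
    A.append(row)
    b.append(d)

poses, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(poses)   # odometry drift is redistributed to satisfy the loop closure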


The navigation problem is likewise divided into two parts: path planning and trajectory execution. Path planning is the problem of determining the route from a robot’s current location to its goal location, typically in the shortest time possible while avoiding all obstacles. The output of the path planner is a trajectory to be executed by the robot’s control system; depending on the robot platform, trajectory execution can pose its own challenges and constraints. For example, on fixed-wing MAVs trajectory execution is complicated by the fact that the craft must keep moving in order to stay in the air.
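
As an illustration of the path-planning half, the sketch below runs A* on an occupancy grid to find the shortest obstacle-free route. The grid and unit step costs are illustrative stand-ins for whatever environment model the mapper provides.

# Hedged sketch of grid-based path planning with A*: shortest obstacle-free
# route from start to goal on a 4-connected grid.
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 means obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                     # reconstruct the route
            path = []
            while cur:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
               and grid[nxt[0]][nxt[1]] == 0 and g + 1 < cost.get(nxt, 1e9):
                cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # goal unreachable

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the wall of obstacles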


The path planning and trajectory execution problems can also be tightly coupled with the SLAM problem, since whatever environment model is used for mapping and localisation is typically used for path planning as well. The reliability of the localisation method also has a large impact on trajectory execution. For example, a visual localisation approach may be affected by motion blur; during trajectory execution, the MAV should therefore avoid rapid accelerations, as these may cause localisation to fail.
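
One simple way to respect such a constraint is to rate-limit velocity commands so the commanded acceleration never exceeds a bound the visual front end can tolerate. The sketch below does exactly that; the bound A_MAX and the control period DT are hypothetical tuning values, not figures from the article.

# Sketch of perception-aware trajectory execution: clamp commanded
# acceleration so the camera-based localiser is not defeated by motion blur.
A_MAX = 1.5   # m/s^2, assumed safe bound for the visual front end
DT = 0.05     # control period in seconds

def limit_velocity_command(v_current, v_desired):
    """Rate-limit velocity changes so |acceleration| never exceeds A_MAX."""
    dv = v_desired - v_current
    max_step = A_MAX * DT
    dv = max(-max_step, min(max_step, dv))
    return v_current + dv

v = 0.0
for _ in range(10):                      # approach 2 m/s gently
    v = limit_velocity_command(v, 2.0)
print(round(v, 3))                       # 0.75 m/s after 0.5 s of ramping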


The increasing number of autonomous systems, such as self-driving cars and UAVs, opens avenues for cooperation among robots through collaborative sensing and communication. Researchers are leveraging the abundant sensing information exchanged among cooperating vehicles to improve localization robustness and safety. For example, a UAV flying above tall buildings has an open sky view and ideal GPS signal reception; such a UAV can serve as a pseudo GPS satellite, relaying healthy GPS signals to vehicles near the ground that suffer poor reception and erroneous measurements.
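
A minimal sketch of the fusion step in such a scheme, assuming each position estimate arrives with a reported accuracy: the ground vehicle combines its own degraded fix with the UAV-relayed one by inverse-variance weighting. All message fields and numbers below are hypothetical.

# Illustrative sketch of the cooperative idea: a ground vehicle fuses its own
# degraded GPS fix with a position estimate relayed by a UAV that has a clear
# sky view, weighting each source by its reported accuracy.
def fuse_fixes(fixes):
    """fixes: list of (x, y, sigma) position estimates in a shared frame."""
    wx = wy = wsum = 0.0
    for x, y, sigma in fixes:
        w = 1.0 / (sigma * sigma)      # trust accurate sources more
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum

own_fix   = (105.2, 48.9, 15.0)   # urban-canyon GPS: large uncertainty
uav_relay = (101.0, 50.1, 2.0)    # relayed from the UAV with open sky view
print(fuse_fixes([own_fix, uav_relay]))   # pulled strongly toward the relay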


DARPA Award Aims For Autonomous Teams Of Robots

Giuseppe Loianno, professor of electrical and computer engineering at the NYU Tandon School of Engineering, is investigating novel ways of making robots work as teams to achieve goals, without the need for a remote AI or human “overseer.”


He has received a highly selective and prestigious three-year, $1 million grant from the Defense Advanced Research Projects Agency (DARPA) to support the work. The DARPA Young Faculty Award will enable him to pursue “Integrated Visual Perception, Learning, and Control for Super Autonomous Robots,” a project that addresses the design of visual perception and action models and algorithms to create USARC (Unmanned, Small, Agile, Resilient, and Collaborative) robots capable of executing maneuvers with superior performance compared to human-controlled or current autonomous ground and aerial robots.


The project will focus on onboard computer vision and other onboard sensors, along with machine learning and control, to design novel models and algorithms that jointly solve the perception-action problem for collaborative autonomous navigation of multiple robots. Loianno’s Agile Robotics and Perception Lab (ARPL) will develop frameworks that correlate actions, localization history, and future motion prediction in a way that scales to multi-robot settings, so that drones or ground-based robots can coordinate with minimal or no communication to reach a common goal.


“This will boost robot decision-making time, accuracy, resilience, robustness, and collaboration among robots in multiple tasks, and will guarantee scalability to multiple agents and environments,” said Loianno.


“Because we are using multiple agents, we are able to increase the robustness and resilience to failure of vehicles or sensors, as well as scalability and adaptability to different environments, while reducing the overall task completion time,” he explained. “Autonomous robots working in this way will be able to work efficiently, especially in time-sensitive and dangerous tasks that, almost by definition, involve complex, cluttered, and dynamic environments: search and rescue, security, exploration, and surveillance missions, for example. With better agility and collaboration small robots will also accomplish more tasks in a limited amount of time.”


“There will also be benefits in areas like agriculture, where collaborative robots could coordinate to optimize movements to most efficiently cover a field or identify which plants need water or which ones are diseased, for example,” he continued.


References and Resources also include:

https://scienceblog.com/534015/darpa-award-aims-for-autonomous-teams-of-robots/

