
Researchers developing efficient Vision-Aided Navigation for UAVs in GPS-Denied environments such as buildings, underground, or under the forest canopy

UAV navigation can be seen as the process by which a robot plans how to reach a target location safely and quickly, relying mostly on its current environment and location. To successfully complete a scheduled mission, a UAV must be fully aware of its state, including its location, speed and heading, as well as its starting point and target location.

 

To remain airborne and fly autonomously between waypoints, unmanned air vehicles must monitor their pose (position and orientation) and velocity (rate of change in pose), collectively called a navigation solution. Typically, this monitoring is accomplished by a navigation system reading from a suite of sensors, such as GPS and/or an altimeter sensing position, a compass sensing yaw, gyros sensing angular velocity, and accelerometers sensing specific force. Sensor readings can be used directly, or integrated or differentiated to infer pose and velocity. Among this suite of complementary and redundant sensors, GPS is typically the only one that can measure a vehicle’s absolute 3D position.

 

As UAVs become smaller, their flight space will expand to include flying within urban canyons and eventually inside buildings. When a firefighter, first responder or soldier operates a small, lightweight flight vehicle inside a building, in urban canyons, underground or under the forest canopy, the GNSS-denied environment presents unique navigation challenges: GPS is largely unavailable, or degraded by dropouts and multipath effects.

 

Traditional navigation approaches based on filtering GPS and low-cost inertial sensors produce unacceptably high drift during extended GPS outages. Accelerometer and gyro readings can be integrated to estimate changes in velocity and position, but this estimate drifts rapidly, on the order of 90 m in 1 min for a standard microelectromechanical system (MEMS) inertial measurement unit (IMU), or 10 m over 2 min for a tactical-grade MEMS.
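To see why purely inertial dead reckoning drifts so quickly, consider a minimal sketch that double-integrates a noisy accelerometer signal over a one-minute GPS outage. The bias and noise values below are illustrative assumptions, not specifications of any particular IMU.

```python
import numpy as np

# Minimal dead-reckoning sketch: double-integrate a noisy accelerometer
# signal to show how position error grows during a GPS outage.
# The bias and noise values below are illustrative assumptions only.
rng = np.random.default_rng(0)
dt = 0.01                      # 100 Hz IMU
t = np.arange(0.0, 60.0, dt)   # one minute of flight

true_accel = np.zeros_like(t)            # hovering vehicle, zero true acceleration
bias = 0.02                              # assumed constant accelerometer bias [m/s^2]
noise = rng.normal(0.0, 0.05, t.size)    # assumed white measurement noise [m/s^2]
measured = true_accel + bias + noise

velocity = np.cumsum(measured) * dt      # first integration -> velocity error
position = np.cumsum(velocity) * dt      # second integration -> position drift

print(f"Position drift after 60 s: {position[-1]:.1f} m")
```

Even a small constant bias, integrated twice, grows quadratically with time, which is why an absolute position reference is needed to bound the error.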

 

Attempts have been made to close this information gap and give UAVs alternative ways to navigate their environments without GNSS. But many of these attempts have resulted in further information gaps, especially on UAVs whose speeds can outpace the capabilities of their onboard technologies. For instance, scanning lidar routinely fails to match locations accurately when the UAV is flying through environments that lack buildings, trees and other orienting structures.

 

In many cases, loss of GNSS signals can cause these vehicles to become inoperable and, in the worst case, unstable, potentially putting operators, bystanders and property in danger. As vehicles are increasingly called on to navigate in urban canyons, buildings and similar surroundings, they will require an absolute position sensor to replace denied or degraded GPS.

 

A promising remedy is to augment navigation with an onboard vision sensor that tracks visual landmarks to infer vehicle motion. With the rapid development of computer vision, vision-based navigation has proved to be a primary and promising research direction for autonomous navigation. First, visual sensors provide abundant online information about the surroundings; second, their remarkable anti-interference ability makes them well suited to perceiving dynamic environments; third, most visual sensors are passive, which also prevents the sensing system from being detected.

 

Imagery from the camera can be processed to generate lines of sight (LOS) to 3D landmarks in the world. These lines of sight can be used to triangulate the absolute pose of the camera, and thus of the vehicle. They constrain absolute pose and can augment GPS when it is available, or temporarily replace it when it is denied.
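A minimal sketch of this idea, assuming known (geo-located) landmarks, a made-up pinhole camera and a made-up "true" pose used only to simulate the observations, with OpenCV's Perspective-n-Point solver recovering the absolute camera pose:

```python
import numpy as np
import cv2

# Sketch: estimate absolute camera pose from lines of sight to known 3D
# landmarks with a Perspective-n-Point solver. Landmark positions, camera
# intrinsics and the simulated pose below are illustrative assumptions.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

landmarks_world = np.array([          # geo-located landmarks [m]
    [ 1.0,  0.5,  8.0], [-2.0,  1.0, 10.0], [ 0.5, -1.5,  7.0],
    [ 2.0,  2.0, 12.0], [-1.0, -0.5,  9.0], [ 1.5, -2.0, 11.0],
])

# Simulate what the camera would see from a known pose (example only).
rvec_true = np.array([0.02, -0.03, 0.01])
tvec_true = np.array([0.2, -0.1, 0.5])
pixels, _ = cv2.projectPoints(landmarks_world, rvec_true, tvec_true, K, None)

# Recover the pose from the landmark observations alone.
ok, rvec, tvec = cv2.solvePnP(landmarks_world, pixels, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # world-to-camera rotation
    camera_position = -R.T @ tvec         # camera position in the world frame
    print("Estimated camera position [m]:", camera_position.ravel())
```

In a real system the observations would come from the onboard feature tracker, and the recovered pose would be fused with the inertial solution in the navigation filter rather than used on its own.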

 

Researchers have developed several options for initializing the range to landmarks in the navigation filter, including motion stereo. The system geo-locates a set of landmarks while GPS provides accurate navigation. Once GPS is lost, the system combines the estimated landmark locations with new observations of those landmarks to both navigate and geolocate new landmarks, in a process called bootstrapping.
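The landmark-initialization step can be pictured with a small motion-stereo sketch: while GPS is still accurate, the same landmark is observed from two known vehicle poses and triangulated into world coordinates. The poses, intrinsics and pixel measurements below are illustrative assumptions.

```python
import numpy as np
import cv2

# Sketch: geo-locate a new landmark by motion stereo, i.e. triangulating it
# from two views taken at different (GPS-accurate) vehicle poses.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

# Camera projection matrices P = K [R | t] for two poses along the trajectory.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # first pose at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # second pose, 1 m baseline

# The same landmark observed in both images (pixel coordinates).
pt1 = np.array([[350.0], [250.0]])
pt2 = np.array([[290.0], [250.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
landmark = (X_h[:3] / X_h[3]).ravel()           # convert to 3D world coordinates
print("Triangulated landmark position [m]:", landmark)
```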

 

Vision Sensors

Visual sensors typically include the following: (a) monocular cameras, (b) stereo cameras, (c) RGB-D cameras, and (d) fisheye cameras.

 

Monocular cameras are especially suited to applications where compactness and minimum weight are critical; in addition, their lower price and flexible deployment make them a good option for UAVs. However, a monocular camera cannot obtain a depth map on its own.

 

A stereo camera is essentially a pair of identical monocular cameras mounted on a rig, so it provides everything a single camera can offer plus the additional information that comes from two views. Most importantly, it can estimate a depth map from the parallax principle alone, without the aid of infrared sensors.
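The parallax principle can be summarized in a few lines: for a calibrated rig, depth is focal length times baseline divided by disparity. The focal length and baseline below are illustrative assumptions.

```python
# Sketch: depth from stereo parallax. With a calibrated rig, the depth of a
# point is Z = f * B / d, where f is the focal length [px], B the baseline [m]
# and d the disparity [px]. The numbers below are illustrative assumptions.
focal_length_px = 500.0     # assumed focal length in pixels
baseline_m = 0.12           # assumed distance between the two cameras

def stereo_depth(disparity_px: float) -> float:
    """Return depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive (point visible in both views).")
    return focal_length_px * baseline_m / disparity_px

print(stereo_depth(10.0))   # a 10 px disparity gives 6 m depth with these values
```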

 

RGB-D cameras can simultaneously obtain a depth map and a visible image with the aid of infrared sensors, but they are commonly restricted to indoor environments due to their limited range. Fisheye cameras are a variant of monocular cameras that provide a wide viewing angle and are attractive for obstacle avoidance in complex environments, such as narrow and crowded spaces.

 

Vision Aided Navigation

Considering the environment and the prior information used in navigation, visual localization and mapping systems can be roughly classified into three categories: mapless systems, map-based systems, and map-building systems.

 

A mapless system performs navigation without a known map; the UAV navigates only by extracting distinct features from the environment it has observed. Currently, the most commonly used methods in mapless systems are optical flow and feature-tracking methods.
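A minimal feature-tracking sketch, assuming two consecutive grayscale frames (placeholder file names) and using Shi-Tomasi corners with pyramidal Lucas-Kanade optical flow from OpenCV:

```python
import cv2

# Sketch of the feature-tracking approach used by mapless systems:
# detect corners in one frame and track them into the next with
# pyramidal Lucas-Kanade optical flow. File names are placeholders.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
if prev is None or curr is None:
    raise SystemExit("Replace the placeholder frame paths with real images.")

# Detect distinctive features (Shi-Tomasi corners) in the previous frame.
prev_pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                   minDistance=10)

# Track those features into the current frame.
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, prev_pts, None)

# Keep only successfully tracked points; their displacement approximates
# the image motion induced by the vehicle's movement.
good_prev = prev_pts[status.ravel() == 1]
good_curr = curr_pts[status.ravel() == 1]
flow = (good_curr - good_prev).reshape(-1, 2)
print(f"Tracked {len(flow)} features, mean flow (px): {flow.mean(axis=0)}")
```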

 

A map-based system predefines the spatial layout of the environment in a map, which enables the UAV to plan movements and take detours. Generally, there are two types of maps: octree maps and occupancy grid maps. Different types of maps may contain varying degrees of detail, from a complete 3D model of the environment to the interconnection of environmental elements.
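A toy occupancy-grid sketch, assuming a fixed 2D grid, an arbitrary cell resolution and a simple probability bump per detection; real mapping systems use probabilistic (e.g. log-odds) updates and 3D structures such as octrees:

```python
import numpy as np

# Minimal occupancy grid sketch: a 2D array of cell occupancy probabilities
# that a planner can query when choosing detours. Resolution and update
# rules are illustrative assumptions, not a specific mapping algorithm.
resolution = 0.5                     # metres per cell
grid = np.full((200, 200), 0.5)      # 100 m x 100 m map, 0.5 = unknown

def world_to_cell(x: float, y: float) -> tuple[int, int]:
    """Convert world coordinates [m] to grid indices."""
    return int(y / resolution), int(x / resolution)

def mark_occupied(x: float, y: float) -> None:
    """Raise the occupancy probability of the cell containing (x, y)."""
    r, c = world_to_cell(x, y)
    grid[r, c] = min(1.0, grid[r, c] + 0.3)

def is_free(x: float, y: float, threshold: float = 0.65) -> bool:
    """Planner query: treat cells below the threshold as traversable."""
    r, c = world_to_cell(x, y)
    return grid[r, c] < threshold

mark_occupied(12.3, 40.7)            # e.g. an obstacle detected by the camera
print(is_free(12.3, 40.7), is_free(5.0, 5.0))
```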

 

Sometimes, due to environmental constraints, it is difficult to navigate with a pre-existing accurate map of the environment. Moreover, in some urgent cases (such as disaster relief), it is impractical to obtain a map of the target area in advance. Under such circumstances, building the map during flight is a more attractive and efficient solution.

 

Map-building systems have been widely used in both autonomous and semi-autonomous applications, and are becoming increasingly popular with the rapid development of visual simultaneous localization and mapping (visual SLAM) techniques.

 

Nowadays, UAVs are much smaller than before, which limits their payload capacity. Researchers have therefore shown increasing interest in using simple single- or multi-camera setups rather than traditional, more complex sensors such as laser radar and sonar.

 

Draper, MIT Researchers Equip UAV with Vision for GNSS-Denied Navigation

In an effort to offset problems caused by loss of GNSS signals — a potentially dangerous situation for first responders among others — a team from Draper Laboratory and the Massachusetts Institute of Technology (MIT) has developed advanced vision-aided navigation techniques for unmanned aerial vehicles (UAVs) that do not rely on external infrastructure, such as GPS, detailed maps of the environment or motion capture systems.

 

Working together under a contract with the Defense Advanced Research Projects Agency (DARPA), Draper and MIT created a UAV that can autonomously sense and maneuver through unknown environments without external communications or GNSS under the Fast Lightweight Autonomy (FLA) program. The team developed and implemented unique sensor and algorithm configurations, and has conducted time-trials and performance evaluations in indoor and outdoor venues.

 

Draper’s contribution to the DARPA FLA program—documented in a research paper for the Aerospace Conference, 2017 IEEE — is described as a novel approach to state estimation (the vehicle’s position, orientation and velocity) called SAMWISE (Smoothing And Mapping With Inertial State Estimation). SAMWISE is a fused vision and inertial navigation system that combines the advantages of both sensing approaches and accumulates error more slowly over time than either technique on its own, producing a full position, attitude and velocity state estimate throughout the vehicle trajectory. The result is a navigation solution that enables a UAV to retain all six degrees of freedom and allows it to fly autonomously, without GNSS or any external communication, at vehicle speeds of up to 45 miles per hour, according to Draper.

 

Draper and MIT’s sensor- and camera-loaded UAV was tested in a number of environments, ranging from cluttered warehouses to mixed open and tree-filled outdoor areas, at speeds of up to 10 m/s in cluttered areas and 20 m/s in open areas.

 

 

“A faster, more agile and autonomous UAV means that you’re able to quickly navigate a labyrinth of rooms, stairways and corridors or other obstacle-filled environments without a remote pilot,” said Ted Steiner, Senior Member of Draper’s Technical Staff. “Our sensing and algorithm configurations and unique monocular camera with IMU-centric navigation gives the vehicle agile maneuvering and improved reliability and safety — the capabilities most in demand by first responders, commercial users, military personnel and anyone designing and building UAVs.”

 

 

“The biggest challenge with unmanned aerial vehicles is balancing power, flight time and capability due to the weight of the technology required to power the UAVs,” said Robert Truax, senior member of technical staff at Cambridge, Massachusetts-based Draper. “What makes the Draper and MIT team’s approach so valuable is finding the sweet spot of a small size, weight and power for an air vehicle with limited onboard computing power to perform a complex mission completely autonomously.”

 

 

The team’s focus on the FLA program has been on UAVs, but advances made through the program could potentially be applied to ground, marine and underwater systems, which could be especially useful in GNSS-degraded or denied environments. In developing the UAV, the team leveraged Draper and MIT’s expertise in autonomous path planning, machine vision, GNSS-denied navigation and dynamic flight controls.

 

 

 

 

References and Resources also include:

http://insideunmannedsystems.com/draper-mit-researchers-equip-uav-vision-gnss-denied-navigation/

https://www.researchgate.net/publication/268570442_Vision-Aided_Navigation_for_Small_UAVs_in_GPS-Challenged_Environments

https://www.tandfonline.com/doi/full/10.1080/10095020.2017.1420509
