LIDARs are rapidly maturing into highly capable sensors for a number of applications, such as imaging through clouds, vegetation and camouflage; 3D mapping and terrain visualization; meteorology; navigation and obstacle avoidance for robotics; and weapon guidance. The United Kingdom’s Environment Agency has been using LIDAR for environmental purposes, such as planning flood defenses and tracking eroding coastlines. In the U.S., police already use LIDAR as an alternative to radar to catch speeders. In security applications, LIDAR systems can identify and pinpoint the location of intruders through advanced object identification algorithms.
They have also proved useful for disaster management missions; emergency relief workers could use LIDAR to gauge the damage to remote areas after a storm or other cataclysmic event. After the January 2010 Haiti earthquake, a single pass by a business jet flying at 10,000 feet over Port-au-Prince, using LIDAR with 30-centimeter resolution, was able to map the precise height of rubble strewn in city streets.
LIDARs (Light Detection and Ranging) are similar to radars in that they operate by sending pulses toward targets and calculating distances by measuring the time the reflected pulses take to return. Since they use light pulses, whose wavelengths are roughly 100,000 times shorter than the radio waves used by radar, they achieve much higher resolution.
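The ranging principle above can be sketched in a few lines of Python (a toy illustration, not any vendor's implementation): distance is the round-trip time multiplied by the speed of light, halved because the pulse travels out and back.

```python
# Toy sketch of LIDAR time-of-flight ranging (illustrative only).
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the target; /2 because the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 1 microsecond corresponds to roughly 150 m.
print(range_from_time_of_flight(1e-6))
```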
LIDAR data is both high-resolution and high-accuracy, enabling improved battlefield visualization, mission planning and force protection. LIDAR provides a way to see urban areas in rich 3-D views that give tactical forces unprecedented awareness in urban environments.
Having a detailed urban surface model is beneficial to identify lines of sight, which are critical for a variety of tactical applications. For example, LiDAR has been used to identify suitable observation posts, locations for cover and concealment during operations, and sites for locating communications transmission and interception equipment.
Obstacle detection and navigation for autonomous vehicles
With the explosion in development of self-driving platforms, demand for 3D LIDAR has grown drastically. LIDAR is one of the most important sensors in driverless vehicles, used for obstacle detection and navigation. Google’s self-driving cars use Velodyne’s rotating HDL-64E LIDAR module, consisting of an array of 64 lasers, to identify oncoming traffic such as cars, bicycles and pedestrians, and to detect small hazards close by on the ground.
By steering the transmitted light, a LIDAR can generate a millimeter-accurate 3D representation of its surroundings, called a point cloud. Using well-known algorithms, these accurate point-cloud images are compared with stored 3D maps of the roads, known as prior maps, and aligned with them as closely as possible. That makes it possible to identify with sub-centimeter precision where the car is on the road.
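As a hedged illustration of the alignment idea (production localization uses iterative algorithms such as ICP, not this simplification), the sketch below recovers a vehicle's offset by matching the centroid of a live scan against that of a prior map, assuming both contain the same landmarks:

```python
# Hypothetical toy: translation-only scan-to-map alignment. Real systems use
# iterative closest point (ICP) or similar; here the scan is simply the prior
# map shifted by the unknown vehicle offset, so matching centroids recovers
# the translation exactly.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def estimate_offset(scan, prior_map):
    """Translation (dx, dy) that moves the scan onto the prior map."""
    (sx, sy), (mx, my) = centroid(scan), centroid(prior_map)
    return (mx - sx, my - sy)

prior = [(0.0, 0.0), (2.0, 0.0), (2.0, 3.0), (0.0, 3.0)]  # mapped landmarks
scan = [(x - 1.2, y + 0.4) for (x, y) in prior]           # car is offset
dx, dy = estimate_offset(scan, prior)                     # recovers (1.2, -0.4)
```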
Velodyne LiDAR, a developer of real-time LiDAR sensors, has announced a partnership agreement with Dibotics under which Dibotics will provide consulting services to Velodyne LiDAR customers who require 3D SLAM software. In robotic mapping, SLAM (simultaneous localization and mapping) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a specific point within it.
However, the high cost and low reliability of LiDAR systems have been fundamental barriers to the adoption of self-driving vehicles, according to Steve Beringhause, Executive Vice President and Chief Technology Officer, Sensata Technologies. The high-quality optical components, including lasers, raise the price of LiDAR sensors from several thousand dollars up to $85,000 per sensor.
However, with the arrival of solid-state LiDAR sensors, prices are expected to decrease by orders of magnitude. These new designs will be less expensive, easier to integrate due to their smaller size, and more reliable as a result of fewer moving parts. Quanergy has announced that it is about to begin full-scale manufacturing of its S3, the world’s first affordable solid-state LiDAR, coming in at about $250.
For more information on driverless vehicles: http://idstch.com/home5/international-defence-security-and-technology/military/land-230/driverless-cars-moving-toward-complete-autonomy-key-technologies-developed/
Quanergy Announces Solid-State LIDAR for Cars and Robots
Quanergy, an automotive startup based in Sunnyvale, Calif., has developed the S3, a solid-state LIDAR system designed primarily to bring versatile, comprehensive, and affordable sensing to autonomous cars. The S3 is small and has no moving parts; instead it uses an optical phased array as a transmitter, which can steer pulses of light by shifting the phase of a laser pulse as it is projected through the array.
This allows each pulse to be steered completely independently and refocused almost instantly in a completely different direction. The field of view is 120 degrees both horizontally and vertically. The minimum range is 10 centimeters, and the maximum range is at least 150 meters at 8 percent reflectivity. At 100 meters, the distance accuracy is +/- 5 cm, and the minimum spot size is just 9 cm. The S3 itself measures 9 cm x 6 cm x 6 cm. Produced in volume, an S3 unit will cost $250 or less.
“If you want to achieve a high degree of autonomy, you need a sensor that has a 360 degree field of view, a long range, and high resolution,” says Anand Gopalan, vice president of Velodyne’s R&D.
However, in the current sensor, only the transmitter side is integrated on an ASIC. “As we move forward, I think the thought process is to build an engine that encompasses all the functionality of a lidar: the transmission and reception of light and the calculation of intensity and time of flight, all of that, in such a small form factor that now it can be used in a variety of different configurations.”
Velodyne LiDAR’s solid-state LiDAR sensors are based on monolithic gallium nitride (GaN) integrated-circuit technology developed in partnership with Efficient Power Conversion (EPC). “What you’re seeing is the GaN integrated circuit that forms the heart of the solution that we’re proposing. This is the chip that does the transmit function. It’s not beamforming.”
Military and Security Applications: Airborne 3D imaging
Airborne LIDARs can calculate accurate three-dimensional coordinates of both manmade and naturally occurring targets by combining the laser range, laser scan angle, laser position from GPS, and laser orientation from an inertial navigation system (INS). Additional information about the object, such as its velocity or material composition, can also be determined by measuring certain properties of the reflected signal, such as the induced Doppler shift.
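A much-simplified sketch of that direct-georeferencing computation follows (assuming a level platform so that only heading and a cross-track scan angle matter; a real system applies full roll/pitch/yaw rotation matrices from the INS):

```python
import math

# Simplified direct georeferencing: laser range + scan angle + platform
# position and heading -> ground coordinate. Assumes a level platform; a
# real system uses the full 3D attitude reported by the INS.

def georeference(platform_xyz, heading_rad, scan_angle_rad, range_m):
    x0, y0, z0 = platform_xyz
    horizontal = range_m * math.sin(scan_angle_rad)  # cross-track offset
    vertical = range_m * math.cos(scan_angle_rad)    # height drop to target
    x = x0 + horizontal * math.cos(heading_rad)
    y = y0 + horizontal * math.sin(heading_rad)
    return (x, y, z0 - vertical)

# Aircraft at 1,000 m altitude, beam pointed straight down:
print(georeference((0.0, 0.0, 1000.0), 0.0, 0.0, 1000.0))  # hits (0, 0, 0)
```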
DARPA’s High-Altitude LIDAR Operations Experiment (HALOE) – first deployed to Afghanistan in 2010 – provided unprecedented access to high-resolution 3D geospatial data for tactical missions, such as helicopter crews finding landing zones. The HALOE sensor pod can collect data more than 10 times faster than state-of-the-art systems and 100 times faster than conventional systems. “At full operational capacity, the HALOE system can map 50 percent of Afghanistan in 90 days,” said DARPA head Regina Dugan, “whereas previous systems would have required three years.” The Navy plans to test LIDAR on a Fire Scout, a robotic helicopter, to help spot pirates. DARPA is now using HALOE data to create 3D holographic images of urban environments, which soldiers can use to visualize the areas they’re about to enter.
Rapid development of small UAVs with higher quality propulsion systems, inertial navigation systems, electronics, and algorithms is also ongoing. Concurrently, ladar (i.e., laser detection and ranging) sensors that have high reliability, accuracy, pulse repetition frequency, large data capacities, and advanced data analysis techniques are being produced. This drive for the miniaturization of ladar and UAV systems provides new possibilities for 3D imaging from the air.
The Swedish Defence Research Agency (FOI) has been working on demonstrating the possibilities of airborne sensor systems, especially 3D imaging ladar on different multi-rotor UAVs, for research and development purposes. UAVs can cover larger survey areas, and detect objects or regions of interest (e.g., those obscured by high levels of vegetation), better than systems based on ground vehicles.
BAE Systems to develop airborne mine-hunting LIDAR
BAE Systems has been awarded a £15.5 million contract by the U.S. Department of Defense (DoD) to manufacture and deliver Archerfish mine neutralisers, continuing its support to the U.S. Navy’s minesweeping operation. Archerfish is a remotely-controlled underwater vehicle equipped with an explosive warhead to destroy sea mines. In addition to Archerfish mine neutralisers, manufactured at BAE Systems’ Broad Oak facility in Portsmouth, United Kingdom, the contract also includes the supply of fibre-optic spools. The fibre-optic spools provide a communications link between the Archerfish mine neutraliser and the launch platform, an MH-60S helicopter deployed from the U.S. Navy’s Littoral Combat Ships.
Officials of the U.S. Office of Naval Research in Washington announced an $8.9 million contract to BAE Systems Spectral Solutions LLC in Honolulu for development of a multi-sensor suite and onboard processing to detect, identify, and pinpoint moored and drifting sea mines from manned and unmanned aircraft. The sensor suite will consist of a visible-to-near-infrared (VNIR) multispectral imaging sensor, a broadband longwave infrared sensor (LWIR), and a 2D light detection and ranging (lidar) sensor, Navy researchers say.
LIDAR on unmanned surface vessel
Leidos has completed initial performance trials of the technology demonstration vessel it is developing for the Defense Advanced Research Projects Agency (DARPA)’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program. The at-sea tests took place off the coast of San Diego, California.
DARPA’s Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV) program seeks to develop a new type of unmanned surface vessel that could independently track adversaries’ ultra-quiet diesel-electric submarines over thousands of miles.
The technologies sought by the U.S. Defense Advanced Research Projects Agency are sensor systems and image-processing hardware and software that use electro-optical/infrared or light detection and ranging, or LIDAR, approaches for onboard systems to detect and track nearby surface vessels and potential navigation hazards, and classify those objects’ characteristics.
The DARPA RFI invited responses that explore some or all of the following technical areas:
Maritime Perception Sensors: Any combination of non-radar-based imaging and tracking methods, including, but not limited to, passive and active imagers in the visible and infrared wavelengths and Class 1 Laser Rangefinder (LRF) and Flash LIDAR to image ships during day or night in the widest variety of environmental conditions, including haze, fog and rain, over ranges from 4 km to 15 km
Maritime Perception Software: Algorithms and software for detection, tracking and classification of ships by passive optical or non-radar active imagers
Classification Software for Day Shapes/Navigation Lights: Algorithms and software to support detection, tracking and classification of day shapes and navigation lights—standard tools that vessels use to communicate a ship’s position and status—by using passive optical or non-radar active imagers
The global LiDAR market is expected to reach USD 1.34 billion by 2024, according to a new report by Grand View Research, Inc. The military laser systems market is estimated to be USD 3.03 Billion in 2015, and is projected to reach USD 4.63 Billion by 2020, at a CAGR of 8.86% from 2015 to 2020, according to report by MarketsandMarkets.
Growth in LiDAR applications is predicted in government and commercial areas, including railways, roadway management, forestry, urban planning, and advanced driver assistance systems (ADAS). The ADAS application segment is expected to grow at a remarkable pace owing to surging incorporation into automotive safety and forward-collision avoidance systems; for instance, in Automatic Emergency Braking (AEB) systems designed to reduce the number of car crashes.
Surging acceptance in environmental mapping and automobile safety applications is expected to boost market growth. Increasing adoption in the construction and architectural segment for monitoring and 3D-modeling applications is further anticipated to bolster growth. A study by Grand View Research, Inc. found that technological advancements in automated image- and data-processing capabilities are anticipated to be major factors spurring growth.
Increased adoption of direct visualization using point splatting, façade modeling using LASERMAPs, and automated indoor modeling are some of the key factors expected to drive demand over the forecast period. Increasing speed of acquisition, better handling of sensors, and higher accuracy would have a major impact on the general use of low-cost mobile mapping systems.
LiDAR systems for space-based active remote sensing of the Earth face a number of challenges owing to high power requirements. A limited ability to sense the environment and use perceptual information to control the platform is another major challenge faced by operators. Flash systems based on multi-element 2D sensor arrays yield 3D imaging data, but their high cost restricts their installation on UGV and robotic platforms.
The key industry participants include Velodyne LiDAR, Inc. The leading vendors in the market are 3D Laser Mapping, Aerometric, Airborne Hydrography, Avent Lidar Technology and DigitalWorld Mapping. Other vendors in LiDAR Market are Airborne Imaging, Faro Technologies, Firmatek, Leica Geosystems, Mosaic 3d, Optech, RIEGL Laser Measurement Systems and Trimble Navigation.
Critical Technologies of LIDAR being developed
One critical technology requirement is microchip lasers that are eye-safe at higher pulse powers, can operate across a wide range of wavelengths, and provide high pulse repetition rates. Meanwhile, new advances in chip-based arrays of emitters could make it easier to send out light without spinning parts.
Today’s LIDAR systems use lasers, lenses and external receivers, and require mechanical wear components for beam steering that can limit scan rates, increase complexity and impact long-term reliability. Plus, they can cost between $1,000 and $70,000.
MIT researchers have developed a 64-by-64-element nanophotonic phased array (NPA) of silicon antennas that can take a single laser beam and send it wherever the user wants by tweaking voltages on the chip.
DARPA’s SWEEPER, Lidar-on-a-Chip
DARPA’s Short-range Wide-field-of-view Extremely agile Electronically steered Photonic EmitteR (SWEEPER) program has successfully integrated breakthrough non-mechanical optical scanning technology onto a microchip.
SWEEPER technology has demonstrated that it can sweep a laser back and forth using arrays of many small emitters, each of which puts out a signal at a slightly different phase. The phased array thus forms a synthetic beam that it can sweep from one extreme to another and back again more than 100,000 times per second, 10,000 times faster than current state-of-the-art mechanical systems.
It can also steer a laser precisely across a 51-degree arc, the widest field of view ever achieved by a chip-scale optical scanning system. The SWEEPER technology utilizes a solid-state approach, built on modern semiconductor manufacturing processes, to place array elements only a few microns apart, as required at optical frequencies.
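The steering behavior of such arrays can be illustrated with the standard phased-array relation (a generic textbook formula with illustrative numbers, not DARPA's published design data): a constant phase step between emitters spaced a distance d apart steers the beam to sin(theta) = phase_step x wavelength / (2 x pi x d).

```python
import math

# Generic phased-array steering relation (textbook formula, illustrative
# numbers): a fixed phase increment between adjacent emitters tilts the
# combined wavefront by sin(theta) = phase_step * wavelength / (2*pi*spacing).

def steering_angle_deg(phase_step_rad, wavelength_m, spacing_m):
    return math.degrees(
        math.asin(phase_step_rad * wavelength_m / (2.0 * math.pi * spacing_m)))

# Example: 1550 nm light, emitters 2 microns apart, quarter-cycle phase step.
print(steering_angle_deg(math.pi / 2.0, 1.55e-6, 2.0e-6))  # ~11.2 degrees
```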
DARPA says there is every reason to expect the technology to lend itself to mass production, lowering the cost per chip and enabling wide adoption in cars, robotic helicopters and other platforms.
MIT and DARPA Pack Lidar Sensor onto Single Chip
A team at MIT’s Photonic Microsystems Group has integrated a LIDAR system onto a single microchip that can be mass-produced in commercial CMOS foundries, yielding a potential on-chip LIDAR system cost of about $10 each. Instead of a mechanical rotation system, optical phased arrays with multiple phase-controlled antennas emitting arbitrary beam patterns may make devices more robust.
The group aims to take today’s large, expensive, mechanical lidar systems and integrate them on a microchip. MIT’s lidar-on-a-chip work first began with the development of 300-mm silicon photonics, making the potential production cost on the order of $10 each at production volumes of millions of units per year.
These on-chip devices promise to be orders of magnitude smaller, lighter, and cheaper than lidar systems available on the market today. They also have the potential to be much more robust because of the lack of moving parts. The non-mechanical beam steering in this device is 1,000 times faster than what is currently achieved in mechanical lidar systems, and potentially allows for an even faster image scan rate. This can be useful for accurately tracking small high-speed objects that are only in the lidar’s field of view for a short amount of time, which could be important for obstacle avoidance for high-speed UAVs.
The device is a 0.5 mm x 6 mm silicon photonic chip with steerable transmitting and receiving phased arrays and on-chip germanium photodetectors. The laser itself is not part of these particular chips, but the group and others have demonstrated on-chip lasers that could be integrated in the future. In order to steer the laser beam to detect objects across the LIDAR’s entire field of view, the phase of each antenna must be controlled.
In this device iteration, thermal phase shifters directly heat the waveguides through which the laser propagates. The index of refraction of silicon depends on its temperature, which changes the speed and phase of the light that passes through it. As the laser passes through the waveguide, it encounters a notch fabricated in the silicon, which acts as an antenna, scattering the light out of the waveguide and into free space. Each antenna has its own emission pattern, and where all of the emission patterns constructively interfere, a focused beam is created without a need for lenses.
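The effect of such a heater can be estimated from silicon's thermo-optic coefficient (roughly 1.8e-4 per kelvin near 1550 nm); the heater length and temperatures below are illustrative assumptions, not the MIT device's actual parameters:

```python
import math

# Thermo-optic phase shifter estimate. Silicon's refractive index rises by
# roughly dn/dT = 1.8e-4 per kelvin near 1550 nm; the phase delay accumulated
# over a heated length L is delta_phi = (2*pi/lambda) * dn_dT * dT * L.
DN_DT_SILICON = 1.8e-4  # 1/K, approximate

def thermal_phase_shift_rad(wavelength_m, delta_T_K, heated_length_m):
    return (2.0 * math.pi / wavelength_m) * DN_DT_SILICON * delta_T_K * heated_length_m

# Temperature rise for a full pi phase shift over a (hypothetical) 100-micron
# heater: dT = lambda / (2 * dn_dT * L), about 43 K with these numbers.
dT_for_pi = 1.55e-6 / (2.0 * DN_DT_SILICON * 100e-6)
print(dT_for_pi)
```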
MIT’s current on-chip LIDAR system can detect objects at ranges of up to 2 meters, though the team hopes to achieve a 10-meter range within a year. The minimum range is approximately 5 cm, and the team has demonstrated centimeter-scale longitudinal resolution and expects 3-cm lateral resolution at 2 meters. There is a clear development path toward LIDAR-on-a-chip technology that can reach a 100-meter range, with the possibility of going even farther.
“We believe that commercial lidar-on-a-chip solutions will be available in a few years,” say Christopher V. Poulton and Michael R. Watts. “A low-cost, low-profile lidar system such as this would allow for multiple inexpensive lidar modules to be placed around a car or robot. These on-chip lidar systems could even be placed in the fingers of a robot to see what it is grasping, because of their high resolution, small form factor, and low cost.”
Focal Plane Arrays
Another critical technology for the development of 3D imaging laser radars is focal plane array (FPA) detectors with timing capability in each pixel. A large FPA supporting a wide field of view allows an entire scene, illuminated by a single pulse, to be captured at once, similar to the effect produced by a traditional camera flash. Flash LiDAR applications drive the development of large, highly sensitive detector arrays that integrate detection, timing and signal processing.
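The per-pixel timing idea can be sketched as follows (a toy conversion with invented timestamps): each pixel's recorded round-trip time becomes a depth, so one flood-illuminating pulse yields a whole depth image.

```python
# Toy flash-LIDAR frame: every FPA pixel timestamps the return of a single
# flood-illuminating pulse, giving a full depth image per pulse. The
# timestamps below are invented for illustration.
C = 299_792_458.0  # speed of light, m/s

def depth_image(round_trip_times_s):
    """Convert a 2D grid of per-pixel round-trip times to depths in meters."""
    return [[C * t / 2.0 for t in row] for row in round_trip_times_s]

# 2x2 frame: near surface (~15 m) in the top row, far surface (~30 m) below.
frame = depth_image([[1.0e-7, 1.0e-7],
                     [2.0e-7, 2.0e-7]])
```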
MIT’s Lincoln Laboratory has developed an FPA of more than 4,096 x 4,096 pixels based on indium gallium arsenide semiconductors, which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and thus longer ranges for airborne laser scanning.
“Beam-shaping adaptive optics will optimize laser light to maximize the number of photons received at each focal plane array pixel. Adaptive optics can correct atmosphere-induced phase distortion of optical signals. On a related note, diffractive optical elements represent a complementary technology to optimize beam shape based on FPA configuration,” writes Felton A. Flood in SIGNAL.
LiDAR imaging systems generate huge amounts of data and require advanced data-processing techniques and compression algorithms to support 3D image visualization and feature extraction. Advanced hardware and algorithms are required to manage large amounts of data in real time, at the speeds required to support complex autonomous navigation and decision making.
LGS Innovations Completes Laser Radar Technology Effort with DARPA
LGS Innovations announced today the successful completion of a two-year Laser Radar Technology (LRT) effort in partnership with the Strategic Technology Office (STO) within the Defense Advanced Research Projects Agency (DARPA). The LRT program supports the development of detector arrays and laser transmitter technologies that could improve a LIDAR system’s ability to switch between settings geared to detect objects of interest and settings geared to home in on and provide additional insight into a selected object.
“This breakthrough required developing a laser with the ability to produce a wide range of optical waveforms, and the ability to change waveforms in real-time while operating at full power,” said Stephan Wielandy, Chief Scientist for Photonics Applications for LGS Innovations Advanced Research and Technology division. “To our knowledge, no laser with the ability to meet all of these waveform agility requirements has ever been made before.”
Breakthrough in Self-Sweeping Laser Technology
A team of UC Berkeley engineers, led by Connie Chang-Hasnain, a professor of electrical engineering and computer sciences, used a novel concept to automate the way a light source changes its wavelength as it sweeps the surrounding landscape.
In LIDARs, as the laser moves along, it must continuously change its frequency so that it can calculate the difference between the incoming, reflected light and the outgoing light. Changing the frequency requires the precise movement of at least one of the two mirrors in the laser cavity.
“The mechanisms needed to control the mirrors are a part of what makes current LIDAR and OCT systems bulky, power-hungry, slow and complex,” says study lead author Weijian Yang, “The faster the system must perform — such as in self-driving vehicles that must avoid collisions — the more power it needs.”
The novelty of the new design is the integration of the semiconductor laser with an ultra-thin, high-contrast grating (HCG) mirror. The HCG mirror, consisting of rows of tiny ridges, is supported by mechanical springs connected to layers of semiconductor material. The force of the light, an average force of just a few nanonewtons, causes the top mirror to vibrate at high speed. The vibration allows the laser to automatically change color as it scans.
Each laser can be as small as a few hundred micrometers square, and it can be readily powered by an AA battery. “Our paper describes a fast, self-sweeping laser that can dramatically reduce the power consumption, size, weight and cost of LIDAR devices on the market today,” says Chang-Hasnain, chair of the Nanoscale Science and Engineering Graduate Group at UC Berkeley. “The advance could shrink components that now take up the space of a shoebox down to something compact and lightweight enough for smartphones or small UAVs [unmanned aerial vehicles].”
Multifunctional Coherent Doppler Lidar based on Laser Linear Frequency Modulation
Farzin Amzajerdian, Diego Pierrottet, Larry Petway, Bruce Barnes, and George Lockard of Langley Research Center have developed multifunctional coherent Doppler lidar based on linear frequency modulation of a continuous-wave (CW) laser beam with a triangular waveform.
The motivation for developing this technique was its application in a coherent Doppler lidar system that could enable precision safe landing on the Moon and Mars by providing accurate measurements of the vehicle altitude and velocity. The multifunctional coherent Doppler lidar is capable of providing high-resolution altitude, attitude angles, and ground velocity, and measuring atmospheric wind vector velocity. It can operate in CW or quasi-CW modes.
In the linear frequency modulation technique, the laser beam’s modulation waveform has a triangular shape. Upon reflection from the target, the received waveform is delayed relative to the transmitted one by the light’s round-trip time. Mixing the delayed return waveform with the transmitted waveform at the detector generates an interference signal whose frequency equals the difference between the transmit and receive frequencies; this frequency is directly proportional to the target range. When the target or the lidar platform is not stationary during the beam’s round-trip time, the signal frequency is additionally shifted by the Doppler effect. Therefore, by measuring the frequency during the “up chirp” and “down chirp” periods of the laser waveform, both the target range and velocity can be determined.
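The up-chirp/down-chirp arithmetic can be sketched numerically (illustrative parameters, not the Langley instrument's actual waveform): for a closing target, the Doppler shift lowers the beat frequency during the up chirp and raises it during the down chirp, so the half-sum of the two measured frequencies isolates range and the half-difference isolates velocity.

```python
# Hedged sketch of triangular-FM coherent lidar processing. Sign convention:
# positive Doppler means the target is closing. All numbers are illustrative.
C = 299_792_458.0  # speed of light, m/s

def range_and_velocity(f_up_hz, f_down_hz, chirp_slope_hz_per_s, wavelength_m):
    f_range = (f_up_hz + f_down_hz) / 2.0    # beat frequency due to range alone
    f_doppler = (f_down_hz - f_up_hz) / 2.0  # Doppler shift (positive = closing)
    target_range = C * f_range / (2.0 * chirp_slope_hz_per_s)
    radial_velocity = f_doppler * wavelength_m / 2.0
    return target_range, radial_velocity

# Synthetic check: 1 GHz swept in 1 ms, 1550 nm laser, target at 150 m
# closing at 0.05 m/s.
slope, lam = 1.0e12, 1.55e-6
f_r = 2 * 150.0 * slope / C  # range-induced beat frequency
f_d = 2 * 0.05 / lam         # Doppler shift
R, v = range_and_velocity(f_r - f_d, f_r + f_d, slope, lam)
```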