LIDARs are rapidly maturing into highly capable sensors for applications such as imaging through clouds, vegetation, and camouflage; 3D mapping and terrain visualization; meteorology; navigation and obstacle avoidance for robotics; and weapon guidance. LIDAR data is both high-resolution and high-accuracy, enabling improved battlefield visualization, mission planning, and force protection. LIDAR renders urban areas in rich 3D views that give tactical forces unprecedented awareness in urban environments.
To perform these missions successfully, a system needs both high spatial resolution and high range precision, and it must often acquire high-resolution 3D images in a relatively short time. LIDAR systems typically employ one of two methods to obtain 3D images: scanning and flash.
Early popular lidar systems, like those from Velodyne, use a spinning module that emits and detects infrared laser pulses, ranging the surroundings by measuring the light's time of flight. Later designs have replaced the spinning unit with something less mechanical, such as a DLP-type mirror or even metamaterial-based beam steering.
A scanning LIDAR system uses one or a few detector pixels and a scanner to acquire 3D images. Laser pulses are emitted, and each pulse is directed to a different point on the target by the scanner; the time-of-flight (TOF) is then measured for each target point using a single detector pixel. Because every point must be scanned individually, the system carries a significant time overhead for high-resolution 3D images: measurement time grows in proportion to the number of pixels in the image.
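The linear relationship between pixel count and acquisition time can be sketched as follows. The pulse rate is an illustrative assumption, not a figure from any particular system:

```python
# Illustrative sketch (assumed numbers, not any vendor's spec): frame
# acquisition time for a point-scanning LIDAR grows linearly with pixel
# count, since each pixel needs its own pulse and TOF measurement.

def scan_frame_time(rows, cols, pulse_rate_hz):
    """Time to scan one full frame, one laser pulse per pixel."""
    return rows * cols / pulse_rate_hz

# A hypothetical 100 kHz pulse rate scanning a 256 x 256 image:
t = scan_frame_time(256, 256, 100_000)
print(f"{t:.3f} s per frame -> {1/t:.2f} fps")  # -> 0.655 s per frame -> 1.53 fps
```

Doubling the resolution in each axis quadruples the frame time, which is why scanning systems struggle to deliver both high resolution and high frame rate.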
Scanning LIDAR faces several challenges. The moving parts that sweep the beam across the scene add mechanical complexity, and refresh rates tend to be sluggish. Scanning LiDARs are also sensitive to distortion caused by motion of the vehicle carrying the scanner and of the objects being scanned.
Flash lidar dispenses with the scanning approach altogether and operates more like a camera. A laser beam is diffused so it illuminates an entire scene in a single flash. Then a grid of tiny sensors captures the light as it bounces back from various directions. One big advantage of this approach is that it captures the entire scene in a single instant, avoiding the complexities that occur when an object—or the lidar unit itself—moves while a scan is in progress.
A flash LIDAR system uses a 2D detector array and a single laser pulse that illuminates the entire scene of interest. Because the whole 3D image is captured from one pulse, flash systems can image moving targets in real time, and 3D images can be acquired even when the LIDAR platform or the target is in motion.
3D Flash LIDAR cameras operate and appear very much like 2D digital cameras. 3D focal plane arrays have rows and columns of pixels, also similar to 2D digital cameras but with the additional capability of having the 3D “depth” and intensity. Each pixel records the time the camera’s laser flash pulse takes to travel into the scene and bounce back to the camera’s focal plane (sensor). A short duration, large-area light source (the pulsed laser) illuminates the objects in front of the focal plane as the laser photons are “backscattered” towards the camera receiver by the objects in front of the camera lens. This photonic energy is collected by the array of smart pixels, where each pixel samples the incoming photon stream and “images” depth (3D) and location (2D), as well as reflective intensity. Each pixel has independent triggers and counters to record the time-of-flight of the laser light pulse to the object(s). The physical range of the objects in front of the camera is calculated and a 3D point cloud frame is generated at video rates (currently possible up to 60 frames/second).
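The per-pixel range calculation described above is simple: each pixel's recorded round-trip time-of-flight maps to range via r = c·t/2. A minimal sketch, with an illustrative return time:

```python
# Sketch of the range calculation each smart pixel performs: the
# recorded round-trip time-of-flight is halved and scaled by the
# speed of light. The example return time is illustrative.

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(tof_seconds):
    """Round-trip time-of-flight -> one-way range in metres."""
    return C * tof_seconds / 2.0

# A pulse returning after 667 ns corresponds to roughly 100 m:
print(tof_to_range(667e-9))  # ~99.98 m
```

Running this conversion independently in every pixel of the focal plane array is what turns a single flash into a full depth frame.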
Currently, twenty or forty-four analog samples are captured for each pixel per pulse, allowing accurate pulse profiling. The 16,384 data points that ASC cameras capture per single flash (frame) enable high-rate dynamic scene capture and 3D video that LADAR scanners cannot match. With no moving or other mechanical parts to add weight or wear out, ASC cameras are small, light, and durable, and free of motion distortion.
As an emerging technology, 3D Flash LIDAR has a number of advantages over conventional point (single pixel) scanner cameras and stereoscopic cameras, including:
- Full frame time-of-flight data (3D image) collected with a single laser pulse
- Full frame rates (high) achievable with area array technology
- Unambiguous direct calculation of range
- Blur-free images without motion distortion
- Co-registration of range and intensity for each pixel
- Pixels are perfectly registered within a frame
- Ability to represent the objects in the scene that are oblique to the camera
- No need for precision scanning mechanisms
- Combine 3D Flash LIDAR with 2D cameras (EO and IR) for 2D texture over 3D depth
- Possible to combine multiple 3D Flash LIDAR cameras to make a full volumetric 3D scene
- Smaller and lighter than point scanning systems
- No moving parts
- Low power consumption
- Ability to “see” into obscurants (range-gating)
One significant advantage of 3D Flash LIDAR sensor array technology is that 3D movies can be acquired at the laser pulse repetition frequency, making 3D video a reality. This capability enables real time machine vision. High frame rates mean that topographical mapping can be obtained more rapidly than with point scan technology, decreasing the amount of flight time required to scan and capture an area. The inherent weight savings means a vehicle such as a UAV/UAS can stay aloft that much longer, and the pixel-to-pixel registration greatly reduces the need for precise pointing knowledge when stitching together images to create large maps. Another not-so-intuitive advantage is that single laser pulse Flash 3D images are generally immune to platform motion, platform vibration and object motion due to the speed-of-light capture of the data frame.
Flash LiDAR also has some significant disadvantages.
“The larger the pixel, the more signal you have,” Sanjiv Singh, a robotics expert at Carnegie Mellon, told Ars. Shrinking photodetectors down enough to squeeze thousands of them into a single array will produce a noisier sensor. “You get this precipitous drop in accuracy.” What this means in practice is that flash lidar isn’t well suited for long-range detection. And that’s significant because experts believe that fully self-driving cars will need lidar capable of detecting objects 200 to 300 meters away.
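The trade-off Singh describes can be made concrete with a back-of-envelope model: photon signal scales with photodetector area, and in the shot-noise-limited regime SNR scales with the square root of the collected photon count. The photon counts below are illustrative assumptions:

```python
import math

def shot_limited_snr(photons):
    """Shot-noise-limited SNR for a mean photon count: N / sqrt(N)."""
    return photons / math.sqrt(photons)

# Shrinking a pixel's linear size 4x cuts its area (and photon
# count) 16x, which cuts shot-limited SNR by 4x:
big = shot_limited_snr(1600)   # larger pixel, more photons
small = shot_limited_snr(100)  # smaller pixel, fewer photons
print(big / small)  # 4.0
```

This square-root penalty is why packing thousands of tiny detectors into one array produces the "precipitous drop in accuracy" at long range.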
The range precision of a flash LIDAR system is limited by several factors: the pulse width of the laser, the bandwidth of the detector, the temporal resolution of the timing circuit, shot noise, and the timing jitter generated by the electronics. Typically, the range precision of a commercial flash LIDAR system is several centimeters. Increasing the pixel count of the APD array while also improving detector and timing-circuit performance is technically challenging.
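Treating those error sources as independent, their timing contributions add in quadrature, and the combined timing error maps to range error via σ_r = c·σ_t/2. A sketch with assumed (not measured) contribution values:

```python
# Illustrative error budget: independent timing-error sources add in
# quadrature; the total maps to range error as sigma_r = c * sigma_t / 2.
# All contribution values below are assumptions for illustration.
import math

C = 299_792_458.0  # speed of light, m/s

def range_sigma(*timing_sigmas_s):
    """Combine independent timing errors (seconds) into range error (m)."""
    total_t = math.sqrt(sum(s * s for s in timing_sigmas_s))
    return C * total_t / 2.0

# e.g. 200 ps effective pulse-width contribution, 100 ps timing-circuit
# resolution, 50 ps electronic jitter:
print(f"{range_sigma(200e-12, 100e-12, 50e-12) * 100:.1f} cm")  # -> 3.4 cm
```

With plausible picosecond-scale contributions the budget lands at a few centimeters, consistent with the commercial figures quoted above.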
Another critical issue with solid-state 3D flash LiDARs is their limited field of view (FoV): unlike a mechanical scanning LiDAR, they cannot rotate to sweep the surroundings (Velodyne, for example, mounts its sensors on motorized turntables that shoot lasers in a 360° arc for Waymo). Using multiple solid-state 3D flash LiDARs on a car addresses the limited FoV, but it demands more advanced, faster image processing. That is why most next-generation autonomous-driving controllers, from Intel's Mobileye division, Nvidia, and others, emphasize sensor fusion, computer vision, and advanced wafer process nodes to achieve optimal performance.
Flash LIDAR has many applications, including situational awareness, collision avoidance, adaptive cruise control, surveillance, restricted-area event alerts, object identification, day-night-rain-fog imaging, aviation take-off and landing, lunar and planetary hazard avoidance and landing, automated rendezvous and docking in space, mid-air refueling, terrain mapping, autonomous navigation, smart intersection monitoring and control, unmanned ground vehicles, unmanned air systems and vehicles, machine vision, hazardous-material detection and handling, and underwater 3D imaging.
LIDAR elevation data also supports the military with improved battlefield visualization, line-of-sight analysis, and urban-warfare planning.
Performing smooth take-offs and landings has always been a challenge for commercial UAVs and drones, which often carry expensive and sophisticated payloads onboard. Knowing the exact distance to the ground below is crucial, yet obtaining reliable altitude data remains an issue with the traditional equipment and sensors used in this industry.
The LeddarOne is a very compact (2-inch diameter), single-beam flash Lidar module dedicated to single-point measurement. Its 3-degree diffuse beam provides a measurement range of up to 40 m (130 ft) with an accuracy of < 5 cm at an acquisition rate of up to 140 Hz. When the LeddarOne was compared with other optical sensors, it rapidly became clear that its diffuse infrared light beam (generated by an LED rather than a narrow, collimated laser) was a distinctive advantage. The LED's wide-beam pattern, coupled with proprietary Leddar digital signal accumulation and oversampling, smoothed the terrain measurements and ensured consistent readings, especially when flying over brush, bush, or tall grass. In comparison, sensors using collimated laser beams tended to return altitude variations that could confuse the autopilot. In the end, the LeddarOne, built on patented Leddar digital signal processing technology refined over a decade of R&D and successful implementations in industries from traffic management to collision avoidance, was selected as the best altimeter.
Sense Photonics Sees Flash LiDAR As Best Path To Full Self-Driving Tech
Today, most autonomous vehicles in test fleets have a cumbersome assembly atop their roofs with a revolving coffee can-like device that emits pulsed laser beams that bounce off buildings, traffic signs, pedestrians and other tangible objects surrounding the vehicle. That enables the vehicle to avoid collisions and understand its immediate environment. It’s also aerodynamically awkward and very expensive. Many units are currently priced at as much as $75,000, putting personal ownership out of reach for all but the extremely wealthy.
Sense Photonics offers a “flexible” flash lidar. Based on a unique ‘flash’ architecture, Sense Photonics’ systems have no moving parts and do not scan at all. This approach enables high resolution across wide horizontal and vertical fields of view without compromising frame rate. In addition, the company’s camera-inspired design is highly manufacturable and enables small, customizable form factors critical for seamless integration into vehicles. Moreover, by separating the laser emitter from the sensor that measures the returned pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.
“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.” These tiny laser elements sit on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to cover around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically. “We can go as high as 90 degrees vertical, which I think is unprecedented, and as high as 180 degrees horizontal,” said Burroughs proudly. “And that’s something automakers we’ve talked to have been very excited about.”
Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case for the present, though the company is working on a long-range version of its hardware as well.
Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means the emitter can be integrated into a curved surface like the headlight assembly, while the tiny detectors can be placed where traditional cameras already sit: side mirrors, bumpers, and so on.
The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.
Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
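Once depth and RGB frames are co-registered pixel for pixel, fusing them is a trivial per-pixel zip rather than a point-cloud registration problem. A minimal sketch, assuming (hypothetically) that the two frames share the same resolution and alignment, as Sense's twin-camera design intends:

```python
# Sketch of pixel-for-pixel RGBD fusion: assumes the depth map and the
# RGB frame are the same size and already co-registered, so each output
# pixel is simply (R, G, B, depth). Values below are illustrative.

def fuse(depth, rgb):
    """Zip an HxW depth map with an HxW RGB frame into an RGBD grid."""
    return [
        [(r, g, b, d) for (r, g, b), d in zip(rgb_row, depth_row)]
        for rgb_row, depth_row in zip(rgb, depth)
    ]

depth = [[1.5, 2.0], [2.5, 3.0]]                     # metres
rgb = [[(10, 20, 30), (40, 50, 60)],
       [(70, 80, 90), (100, 110, 120)]]
print(fuse(depth, rgb)[0][0])  # (10, 20, 30, 1.5)
```

Contrast this with a scanned point cloud, where each range sample must first be projected into the camera's image plane before any such association is possible.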
Shauna McIntyre, Sense’s CEO and an alumna of Google’s automotive services program, says much of the same capability can be achieved with two features similar to those found in the latest iPhone and iPad. The first is something called a vertical-cavity surface-emitting laser (VCSEL for short). The second is called a single-photon avalanche detector (SPAD).
Together they form what Sense Photonics calls flash LiDAR. Here’s the primary advantage: unlike edge-emitting semiconductor lasers, which are hard to assemble onto a circuit board, VCSELs emit light perpendicular to the chip surface from their vertical cavity. That makes them easy to gather into an on-chip array, creating a solid-state system with no moving parts such as the rotating coffee can.
“It is more like a camera that can be integrated into the design language of the vehicle,” said McIntyre in a recent telephone interview. “We have an array and emitter that can be in the dash, in a bumper or above the dash. We prefer having them lower such as in the bumper or the grille.”
Another Israeli start-up, Opsys Tech, has been impressing AV development teams with the performance and potentially low cost of its lidar, aimed at driver-assist and autonomous applications. “We’ve created a new category of laser scanning that we call ‘microflash,’” said company executive chairman, Eitan Gertel. Opsys’ technology employs the same amount of light as a flash lidar, but narrows it to one pixel – 1,000 times smaller than what a flash lidar can do. Its patented electronic architecture uses vertical-cavity surface-emitting lasers (VCSELs), a type of semiconductor laser diode, as its pulsed light source, and a single-chip solid-state single-photon avalanche diode (SPAD) receiver.
“We measure light 16 times faster than competitor lidars,” Gertel told SAE International. “We scan the field of view at 1,000 frames per second, but automotive guys have no real use for 1,000 fps, so we average the data down to 30 fps.” He said the Opsys Tech lidar achieves four times the range of flash lidar while surpassing flash lidar’s resolution and scanning rate. Gertel claimed tests have demonstrated range of greater than 200 meters and 0.1° x 0.1° resolution, “while maintaining full scanning rate and range across the full FoV.”
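The averaging Gertel describes — capturing at a high internal frame rate and delivering a lower rate to the vehicle controller — trades surplus frames for noise reduction. A sketch of that downsampling step; the frame values and averaging ratio are illustrative, not Opsys figures:

```python
# Sketch of temporal frame averaging: every `ratio` consecutive frames
# (each a flat list of per-pixel range samples) are averaged into one
# output frame. Averaging N frames reduces uncorrelated noise ~sqrt(N).

def average_down(frames, ratio):
    """Average each group of `ratio` consecutive frames, pixel-wise."""
    out = []
    for i in range(0, len(frames) - ratio + 1, ratio):
        group = frames[i:i + ratio]
        out.append([sum(px) / ratio for px in zip(*group)])
    return out

# Six noisy single-pixel frames averaged 3:1 -> two output frames:
frames = [[10], [12], [8], [20], [22], [18]]
print(average_down(frames, 3))  # [[10.0], [20.0]]
```

At 1,000 fps averaged down to 30 fps, each delivered frame would combine roughly 33 captures, which is one way to buy range and precision without changing the detector.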
Gertel said he expects Apple’s high-volume use of VCSELs in its new iPhone lidar to enable Opsys Tech’s device to price out at $1,000 per vehicle, or $250 per sensor. “They appear to have a strong combination of range, resolution, high scanning rate and prospects for a significant cost reduction,” TTP’s Jellicoe said. “They’re very experienced in scaling products and know what they’re doing.”