
LIDAR technologies for Autonomous vehicles, battlefield visualization, weapon guidance and anti-submarine warfare

Remote sensing means that we aren’t actually physically measuring things with our hands; we use sensors that capture information about a landscape and record measurements we can use to estimate conditions and characteristics. LiDAR, or light detection and ranging (sometimes also referred to as active laser scanning), is one remote sensing method that can be used to map structure, including vegetation height, density and other characteristics, across a region.

 

Lidars (Light Detection and Ranging) are similar to radars in that they operate by sending light pulses toward targets and calculating distance from the time taken for the reflections to return. Since they use light pulses whose wavelength is about 100,000 times shorter than the radio waves used by radar, they offer much higher resolution. When a LIDAR is mounted on an aircraft, these measurements rely on key components of the lidar system: a GPS receiver that identifies the X, Y, Z location of the light energy and an Inertial Measurement Unit (IMU) that provides the orientation of the plane in the sky.
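
As a simple illustration of the ranging principle, the sketch below (an illustrative Python snippet, not taken from any vendor's code) converts a measured round-trip time into a distance using d = c·t/2.

```python
# Illustrative sketch: range from pulse time of flight (d = c * t / 2).
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time of a pulse."""
    return C * round_trip_time_s / 2.0

# A pulse that returns after ~667 nanoseconds corresponds to a target ~100 m away.
print(range_from_time_of_flight(667e-9))  # ~100.0 m
```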

 

LIDARs are rapidly gaining maturity as very capable sensors for a number of applications such as imaging through clouds, vegetation and camouflage, 3D mapping and terrain visualization, meteorology, navigation and obstacle avoidance for robotics, as well as weapon guidance. They have also proved useful for disaster management missions; emergency relief workers could use LIDAR to gauge the damage to remote areas after a storm or other cataclysmic event. After the January 2010 Haiti earthquake, a single pass by a business jet flying at 10,000 feet over Port-au-Prince, carrying a LIDAR with 30-centimeter resolution, was able to map the precise height of rubble strewn in city streets. LIDAR data is both high-resolution and high-accuracy, enabling improved battlefield visualization, mission planning and force protection. LIDAR provides a way to see urban areas in rich 3-D views that give tactical forces unprecedented awareness in urban environments.

 

LiDAR is an optical sensor technology that enables robots to see the world, make decisions and navigate. Robots performing simple tasks can use LiDAR sensors that measure space in one or two dimensions, but three-dimensional (3D) LiDAR is useful for advanced robots designed to emulate humans. One such advanced robot is a self-driving car, where the human driver is replaced by LiDAR and other autonomous vehicle technologies. 3D LiDAR systems scan beams of light in three dimensions to create a virtual model of the environment. Reflected light signals are measured and processed by the vehicle to detect and identify objects and to decide how to interact with or avoid them.

 

LiDAR is commonly used for making high-resolution maps and has applications in geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, atmospheric physics, laser guidance, airborne laser swath mapping, and laser altimetry. It is also being used for control and navigation of some autonomous cars.

 


Operational Requirements

Autonomous systems, such as self-driving cars and some robots, operate without human intervention. Autonomy requires that the system be able to do the following: sense the environment and keep track of the system’s current state and location; perceive and understand disparate data sources; determine what action to take next and make a plan; and act only when it is safe to do so, avoiding situations that pose a risk to human safety, property or the autonomous system itself. Sensors help an autonomous system establish its location, identify informational signs and avoid obstacles and other hazards.
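
The sense-perceive-plan-act cycle described above can be summarized as a simple control loop. The sketch below is a hypothetical Python outline; the objects and method names (lidar.scan(), planner.plan(), and so on) are placeholders for real subsystems, not an actual API.

```python
# Hypothetical outline of an autonomy loop: sense -> perceive -> plan -> act.
# All objects (lidar, gps_imu, perception, planner, vehicle) are placeholder
# stand-ins for real subsystems; this is a structural sketch only.
def autonomy_loop(lidar, gps_imu, perception, planner, vehicle):
    while vehicle.is_running():
        # Sense: gather raw measurements and the vehicle's own state.
        point_cloud = lidar.scan()
        pose = gps_imu.current_pose()

        # Perceive: fuse sensor data into objects, lanes and free space.
        world_model = perception.update(point_cloud, pose)

        # Plan: decide the next maneuver given the perceived world.
        trajectory = planner.plan(world_model, pose)

        # Act: execute only if the planned motion is judged safe.
        if planner.is_safe(trajectory, world_model):
            vehicle.follow(trajectory)
        else:
            vehicle.stop()
```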

 

To successfully perform various missions, both high spatial resolution and high range precision are needed, and the system often also has to obtain high-resolution 3D images in a relatively short period of time. The top performance features to consider when evaluating LiDAR technologies are field of view, range, resolution, and rotation/frame rate. These are the capabilities needed to guide an autonomous vehicle reliably and safely through the complex set of driving circumstances that will be faced on the road.

 

Field of View: It is widely accepted that a 360° horizontal field of view – something not possible for a human driver – is optimal for safe operation of autonomous vehicles. Having a wide horizontal field of view is particularly important in navigating the situations that occur in everyday driving. If an autonomous vehicle employs sensors with a more limited horizontal field of view, then more sensors are required and the vehicle’s computer system must then stitch together the data collected by these various sensors.

 

Vertical field of view is another area where it is important that LiDAR capabilities match real-life driving needs. LiDAR needs to see the road to recognize the drivable area, avoid objects and debris, stay in its lane, and change lanes or turn at intersections when needed. Autonomous vehicles also need LiDAR beams that point high enough to detect tall objects, road signs and overhangs, as well as to navigate up or down slopes.

 

Range. Autonomous vehicles need to see as far ahead as possible to optimize safety, so LIDAR should be long range. At highway speeds, a minimum range of 200 meters gives the vehicle the time it needs to react to changing road conditions and surroundings. Slower, non-highway speeds allow for sensors with shorter range, but vehicles still need to react quickly to unexpected events on the roadway, such as a person focused on a cellphone stepping onto the street from between two cars, an animal crossing the road, an object falling from a truck, or debris ahead in the road. In each of these situations, onboard sensors need sufficient range to give the vehicle adequate time to detect the person or object, classify what it is, determine whether and how it is moving, and then take steps to avoid it while not hitting another car or object. The signal power at the receiver is inversely proportional to the square of the distance between the sensor and the target. Therefore, the maximum operating range of the LiDAR is primarily determined by the sensitivity of the LiDAR receiver, or the lowest permissible SNR that still satisfies the ranging-precision requirement.
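
The inverse-square relationship above implies a simple way to estimate maximum range from receiver sensitivity. The snippet below is an illustrative sketch under an assumed model (the specific numbers are invented, not from a real sensor datasheet): if received power scales as P_rx = k / R², the longest usable range is where P_rx falls to the receiver's minimum detectable power.

```python
import math

# Illustrative sketch of how receiver sensitivity limits maximum range.
# Assumed model: received power P_rx = k / R**2, where k lumps together
# transmit power, target reflectivity, aperture area and optical efficiency.

def max_range(k: float, min_detectable_power_w: float) -> float:
    """Largest range at which the echo still exceeds receiver sensitivity."""
    return math.sqrt(k / min_detectable_power_w)

# Hypothetical numbers: if a target at 50 m returns 4e-9 W, then k = P * R**2.
k = 4e-9 * 50.0**2
print(max_range(k, min_detectable_power_w=1e-10))  # ~316 m under these assumptions
```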

 

Resolution. High-resolution LiDAR is critical for object detection and collision avoidance at all speeds. Finer resolution allows a sensor to more accurately determine the size, shape, and location of objects, with the most advanced LiDAR sensors being able to resolve objects to within 3 centimeters and some moving closer to 2 centimeters. This finer resolution outperforms even high-resolution radar and provides the vehicle with the clearest possible view of the roadway. LiDAR resolution is determined by the pulse width: a shorter, sharper pulse means a wider signal bandwidth, which results in finer ranging resolution.
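
The pulse-width relationship can be made concrete with the standard time-of-flight expressions ΔR ≈ c·τ/2 for a pulse of width τ, or equivalently ΔR ≈ c/(2B) for a signal of bandwidth B. The snippet below is a small illustrative calculation, not tied to any particular product.

```python
# Illustrative sketch: range resolution from pulse width (dR = c * tau / 2)
# or, equivalently, from signal bandwidth (dR = c / (2 * B)).
C = 299_792_458.0  # m/s

def resolution_from_pulse_width(pulse_width_s: float) -> float:
    return C * pulse_width_s / 2.0

def resolution_from_bandwidth(bandwidth_hz: float) -> float:
    return C / (2.0 * bandwidth_hz)

print(resolution_from_pulse_width(200e-12))  # 200 ps pulse -> ~3 cm
print(resolution_from_bandwidth(5e9))        # 5 GHz bandwidth -> ~3 cm
```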

 

Frame rate: The frame rate primarily determines how fast an object can move without inducing significant motion blur. Since a higher frame rate and shorter measurement window result in lower SNR, there is a direct trade-off between the frame rate and the maximum detection range in SNR-limited systems.

 

Eye Safety and Maximum Emission Power: The maximum emission power from a LiDAR is primarily restricted by the IEC laser safety standard, and most consumer products are designed for class 1 eye safety.  The eye safety requirements are related not only to the absolute beam power density but also to other factors including wavelength, exposure time, and pulse duration in the case of pulsed lasers.

 

Size, Weight, and Power-Cost (SWaP-C): Finally, it is crucial to consider whether the typical size, weight, and power consumption of the system are reasonable in the context of the particular application. Moreover, the operation and repair cost over the device lifetime must be taken into account in the total cost calculation.

 

Scanning or spinning type

There are two methods that LIDAR systems typically employ to obtain 3D images: a scanning type and a flash type. Spinning LiDARs are best identified by the telltale dome on the roof of a connected car, and spinning systems have been the prevailing form of the technology.

Early popular lidar systems, like those from Velodyne, use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Google’s self-driving cars used Velodyne’s rotating HDL-64E LIDAR module, which stacked 64 lasers in a vertical column and spun the whole assembly around many times per second. It was used to identify oncoming vehicles such as cars, bicycles, and pedestrians, and also to detect small hazards close by on the ground.

Rotating LiDAR captures a wide field of view and a long range, but its components tend to be heavy and expensive, and the moving parts require regular maintenance. Subsequent designs have replaced the spinning unit with something less mechanical, such as a DLP-type mirror or even metamaterials-based beam steering.

The scanning LIDAR system uses one or a few detector pixels and a scanner to acquire 3D images. Laser pulses are sent out from a laser system, each pulse is directed to a different point on the target by the scanner, and the time of flight (TOF) is obtained for each target point using a single detector pixel. The scanning LIDAR system requires a significant time overhead to acquire a high-resolution 3D image, because the system needs to scan each point; the higher the desired image resolution, the more time the system needs to take its measurements.
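
The point-by-point time budget can be estimated directly: each measurement must at least wait for the pulse to travel to the maximum range and back. The snippet below is an illustrative calculation of how frame time grows with resolution under that assumption; the per-point overhead value is made up for illustration.

```python
# Illustrative sketch: frame acquisition time for a point-by-point scanning lidar.
C = 299_792_458.0  # m/s

def frame_time_s(width_px: int, height_px: int, max_range_m: float,
                 per_point_overhead_s: float = 1e-6) -> float:
    """Time to scan one frame: each point waits for a round trip to max range,
    plus an assumed fixed per-point overhead for beam steering and readout."""
    round_trip = 2.0 * max_range_m / C
    return width_px * height_px * (round_trip + per_point_overhead_s)

# A 512 x 512 image at 200 m max range with 1 microsecond overhead per point:
print(frame_time_s(512, 512, 200.0))  # ~0.6 s per frame under these assumptions
```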

 

Damon Lavrinc of Ouster, another spinning LiDAR company, says that while spinning/mechanical systems are still the gold standard, the trend is from moving parts to silicon. “We’re taking all of the old components used by legacy LiDAR companies and distilling them onto chips,” he says. And Moore’s Law is taking hold in LiDAR: in just a couple of years the company moved from 16 to 32 to 64 beams of light, a doubling of resolution at each increase. “And at CES, we introduced 128, the highest resolution possible.”

 

Quanergy, of Sunnyvale, Calif., also makes a spinning product. According to Chief Marketing Officer Enzo Signore, although the company is innovating with optical arrays for high-end applications, spinning LiDAR, used in its M8 sensor, typically suits security applications. That product, he says, has a 360-degree horizontal field of view and generates 1.3 million pulses per second in the point cloud. Paired with the company’s Qortex machine learning software, Signore says the solution is designed to classify people and vehicles, such as identifying how many people are at an airport security checkpoint. A pharmaceutical client, for example, uses it along a 400-meter fence line at a storage facility, having rejected radar due to interference issues and on-fence motion sensors due to high winds. “If an intruder is, say, 70 meters away, the LiDAR will see that person and assign an ID to that person and we can track him all around,” Signore says. “If he then enters a forbidden zone, such as too near an electric switch, you can point a camera at him.”

 

Flash LIDAR

On the other hand, a flash LIDAR system utilizes a 2D array detector and a single laser pulse, illuminating the entire scene of interest to acquire 3D images. To acquire real-time images of moving targets, it is necessary to obtain the 3D image with a single laser pulse. Flash LIDAR systems can obtain images with just a single laser pulse, which makes it possible for them to image moving targets; 3D images can be acquired even when the LIDAR system or the target is in motion.

 

This method is less prone to image vibration and offers a much faster data capture rate. Texas Instruments notes two downsides, however: reflected light can blind the sensors, and considerable power is needed to illuminate a scene, especially at distance. Most flash LiDAR, made by manufacturers such as Continental Automotive and LeddarTech, is being applied to autonomous vehicles. Sense Photonics, of Durham, N.C., is an exception. According to Erin Bishop, director of business development, Sense Photonics has a solution, Sense One, for industrial applications, which promises up to 40 meters of range outdoors for imaging large workspaces or perimeters, 7.5x higher vertical resolution than other sensors and a 2.5x wider vertical field of view (75 degrees compared to 30 degrees).

 

Critical Technologies of LIDAR

The components of a LiDAR system include laser sources and modulators, LiDAR receivers, beam-steering approaches, and LiDAR processing.

Lasers

Lasers are categorized by their wavelength. Airborne Light Detection and Ranging systems use 1064 nm diode-pumped Nd:YAG lasers, whereas bathymetric systems use 532 nm frequency-doubled, diode-pumped Nd:YAG lasers, which penetrate water with much less attenuation than the 1064 nm airborne wavelength. Better resolution can be attained with shorter pulses, provided the receiver detector and electronics have sufficient bandwidth to handle the increased data flow.

 

One of the critical technology requirements is microchip lasers that remain eye-safe at higher pulse powers, can operate across a wide range of wavelengths and provide high pulse repetition rates. Meanwhile, new advances in chip-based arrays of emitters could make it easier to send out light without spinning parts.

 

 

Scanners and Optics

The speed at which images can be developed is affected by the speed at which they can be scanned into the system. A variety of scanning methods is available for different resolutions, including azimuth and elevation, dual-axis scanners, dual oscillating plane mirrors, and polygonal mirrors. The type of optics determines the range and resolution that can be achieved by a system.

 

https://www.youtube.com/watch?v=h7nHfaY6He0

Solid State Lidar

However, the high cost and low reliability of LiDAR systems have been fundamental barriers to the adoption of self-driving vehicles, according to Steve Beringhause, Executive Vice President and Chief Technology Officer, Sensata Technologies. High-quality optical components, including the lasers, raise the price of LiDAR sensors from around several thousand dollars up to $85,000 per sensor. Sensors have fallen in price by about 70 percent since the autonomous arms race began, but they still cost thousands of dollars, and many self-driving platforms require more than one.

 

However, with the arrival of solid-state LiDAR sensors the price is expected to decrease by orders of magnitude. This new design will be less expensive, easier to integrate due to its smaller size, and more reliable as a result of fewer moving parts. Quanergy has announced that it is about to begin full-scale manufacturing of its S3, the world’s first affordable solid-state LiDAR, coming in at about $250.

 

Photodetector And Receiver Electronics

The photodetector is a device that reads and records the signal backscattered to the system. There are two main types of photodetector technologies: solid-state detectors, such as silicon avalanche photodiodes, and photomultipliers.

 

Enhancing Range through Sensitive detectors

Most lasers on lidar sensors today operate in the near-infrared range—905 nanometers is a popular wavelength. Because they’re close to the wavelength of visible light (red light starts around 780 nanometers), they can cause damage to people’s eyes, so the power level of 905-nanometer lasers is strictly regulated.

 

One alternative approach is to use eye-safe wavelengths. Luminar, for example, is developing a lidar product that uses 1,550nm lasers. Because this is far outside the visible light range, it’s much safer for people’s eyes. As IEEE Spectrum explains it, “the interior of the eye—the lens, the cornea, and the watery fluid inside the eyeball—becomes less transparent at longer wavelengths.” The energy from a 1,550nm laser can’t reach the retina, so the eye safety concerns are much less serious.

 

This allows 1,550nm lidars to use much higher power levels—IEEE Spectrum says 40 times as much—which naturally makes it easier to detect laser pulses when they bounce off distant objects. The downside here is that 1,550nm lasers and detectors aren’t cheap because they require more exotic materials to manufacture.

 

Another way to improve the range of lidar units is to increase the sensitivity of the detectors. Argo AI, Ford’s self-driving car unit, recently acquired Princeton Lightwave, a lidar company that uses highly sensitive detectors known as single-photon avalanche diodes. As the name suggests, these detectors are sensitive enough to be triggered by a single photon of light at the appropriate frequency.

 

Generally, two representative types of avalanche photodiode (APD) detectors are used in flash LIDAR systems. One is a linear-mode APD focal plane array (FPA) with 128 x 128 pixel arrays; the other is a Geiger-mode APD FPA with 256 x 256 pixel arrays. Because of the array size, the spatial resolution or field of view (FOV) is limited when using these commercial sensors. To acquire large scenes or high-spatial-resolution 3D images, commercial systems may need an additional scanner. In addition, each pixel requires its own high-bandwidth timing circuit to measure the TOF.

 

Focal Plane Arrays

Another critical technology for the development of 3D imaging laser radars is focal plane array (FPA) detectors with timing capability in each pixel. A larger focal plane array supports wider fields of view and allows an entire scene illuminated by a single pulse to be captured at once, similar to the effect produced by a traditional flash on a camera. Flash LiDAR applications drive the development of large, highly sensitive detector arrays that integrate detection, timing and signal processing.

 

MIT’s Lincoln Laboratory has developed an FPA of more than 4,096 x 4,096 pixels based on indium gallium arsenide semiconductor material, which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and thus longer ranges for airborne laser scanning.

 

“Beam-shaping adaptive optics will optimize laser light to maximize the number of photons received at each focal plane array pixel. Adaptive optics can correct atmosphere-induced phase distortion of optical signals. On a related note, diffractive optical elements represent a complementary technology to optimize beam shape based on FPA configuration,” writes Felton A. Flood in SIGNAL.

 

LiDAR imaging systems generate huge amounts of data and require advanced data processing techniques and compression algorithms to support 3-D image visualization and feature extraction. Advanced hardware and algorithms are needed to manage these large volumes of data in real time, at the speeds required to support complex autonomous navigation and decision making.

 

New LiDAR Technology Allows for Cheaper Autonomous Vehicles, reported in March 2022

The chip, which is based on a focal plane switch array (FPSA), works much like the sensors found in digital cameras. It comprises a semiconductor-based matrix of micrometer-scale antennas that gathers light.

The FPSA approach is not new, but the technology previously available allowed only a limited resolution of 512 pixels or less, which is inadequate for autonomous driving applications. The chip developed at UC Berkeley has a resolution of 16,384 pixels, and the design can be scaled up to megapixel sizes using the same complementary metal-oxide-semiconductor (CMOS) technology used to produce computer processors. This could lead to a new generation of powerful, low-cost 3D sensors for autonomous vehicles and other devices.

 

The FPSA chip consists of a matrix of tiny optical transmitters, or antennas, and switches that rapidly turn them on and off. In this way, it can channel all available laser power through a single antenna at a time. Existing switches are thermo-optic, which makes them large and power-hungry and explains the earlier 512-pixel resolution limit. The researchers at UC Berkeley replaced them with microelectromechanical system (MEMS) switches that physically move the waveguides from one position to another.

 

MEMS switches are also an old technology, but this is the first time they have been applied to LiDAR. They are much smaller, use far less power, switch faster, and have very low optical losses compared to thermo-optic switches. This is how the new chip achieves a resolution of 16,384 pixels, which covers a 70-degree field of view. Mounting several sensors in a circular configuration would produce a 360-degree view around the vehicle, just as the spinning cylinder sensors do.

 

Air Force Research Lab chooses Princeton Infrared to develop ladar detector arrays for military applications: May 2019

Princeton Infrared Technologies Inc. in Monmouth Junction, N.J., is developing detector arrays for military coherent laser detection and ranging (ladar) sensors. The company is working under terms of a $750,000 phase-two Small Business Innovation Research (SBIR) contract from the U.S. Air Force Research Laboratory at Wright-Patterson Air Force Base, Ohio.

 

Princeton Infrared is focusing on developing detector arrays using multi-quantum-well materials enabling detection from 0.9 to 2.4 microns with low dark current and high quantum efficiency, company officials say. These ladar detector arrays will enable a new generation of high-resolution, high-speed cameras that can image near room temperature at high sensitivity in the shortwave-infrared (SWIR) spectrum, using detector arrays instead of single-element detectors.

 

The SBIR phase-two project concentrates on new materials development. Princeton Infrared will research new multi-quantum well materials and strained-superlattice materials manufactured on indium phosphide (InP) substrates for enabling technologies for military applications.

 

Using multi-quantum well materials “will enable high-sensitivity detectors to image beyond what lattice-matched indium gallium arsenide detectors can detect in the SWIR range,” says Martin Ettenberg, president of Princeton Infrared. “These next-generation detector arrays will benefit long-range ladar used by the Air Force to identify targets,” Ettenberg says. “Current systems require cryogenic cooling while these materials will not, thus vastly lowering costs, size, weight, and power.”

 

Navigation And Positioning Systems/GPS

When a Light Detection and Ranging sensor is mounted on an aircraft, satellite or automobile, it is necessary to determine the absolute position and the orientation of the sensor to obtain usable data. A Global Positioning System (GPS) receiver provides accurate geographical information regarding the position of the sensor, and an Inertial Measurement Unit (IMU) records the accurate orientation of the sensor at that location. Together, these two devices provide the means of translating sensor data into static points for use in a variety of systems.

 

With airborne Light Detection and Ranging, other data must be collected to ensure accuracy. Because the sensor is moving, the height, location and orientation of the instrument must be recorded to determine the position of the laser pulse at the time it is sent and the time it returns. This extra information is crucial to the data’s integrity. With ground-based Light Detection and Ranging, a single GPS location can be recorded at each location where the instrument is set up.

 

LIDAR Data Processing

The Light Detection and Ranging instrument itself collects only ranging data; it is mounted on the aircraft together with an Inertial Measurement Unit and a GPS unit. As the sensor collects data points, the position of each measurement is recorded by the GPS and its orientation by the IMU. The return time of each pulse scattered back to the sensor is then processed to calculate the varying distances from the sensor, or changes in land-cover surfaces.

 

After the survey, the data are downloaded and processed using specially designed computer software (LIDAR point-cloud data processing software). The final output is an accurate, geographically registered longitude (X), latitude (Y), and elevation (Z) for every data point. The LIDAR mapping data are composed of elevation measurements of the surface and are obtained through aerial topographic surveys. The file format used to capture and store LIDAR data is often a simple text file. From these elevation points, detailed topographic maps can be created, and the same data allow the generation of a digital elevation model of the ground surface.
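
As a rough illustration of how a georeferenced point is produced, the sketch below (a simplified Python example; the local ENU frame and the rotation convention are assumptions, and a real workflow would also handle geodetic projections, lever-arm offsets and boresight calibration) combines the sensor position from GPS, the orientation from the IMU and the measured range into an X, Y, Z point.

```python
import numpy as np

# Simplified sketch: turn one lidar range measurement into a georeferenced point.
# Assumes a local East-North-Up (ENU) frame; real pipelines also handle geodetic
# projections, lever-arm offsets and boresight calibration.

def georeference_point(sensor_pos_enu, roll, pitch, yaw, beam_dir_sensor, range_m):
    """sensor_pos_enu: (3,) GPS-derived position; roll/pitch/yaw in radians from
    the IMU; beam_dir_sensor: unit vector of the laser beam in the sensor frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Rotation from sensor frame to local frame (yaw * pitch * roll, Z-Y-X order).
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Rz @ Ry @ Rx
    return np.asarray(sensor_pos_enu) + range_m * (R @ np.asarray(beam_dir_sensor))

# A nadir-pointing beam from 1000 m altitude with level attitude hits the ground origin.
print(georeference_point([0.0, 0.0, 1000.0], 0.0, 0.0, 0.0, [0.0, 0.0, -1.0], 1000.0))
```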

 

Velodyne LiDAR, a developer of real-time LiDAR sensors, has announced a partnership agreement with Dibotics under which Dibotics will provide consulting services to Velodyne LiDAR customers who require 3D SLAM software. In robotic mapping, SLAM (simultaneous localization and mapping) involves the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a specific point within it.

 

Optical vortex for improved object detection and light transmission in turbid media

Scientists at the Naval Air Warfare Center’s Aircraft Division have developed a novel method and device for improving light transmission and object detection in light-scattering (turbid) media such as hazy water and fog, as reported in September 2020. The Navy’s inventions address both of these problems, and it has applied for U.S. patents on the technology. In this innovation, Navy scientists use the spatial structure of the beam itself to aid in object and clutter discrimination by exploiting the optical properties of orbital angular momentum (OAM). By deploying a spiral phase plate in front of a CCD camera, simple diffractive optics can impart OAM onto a beam of light.

 

Optical beams with such a helical phase result in an intensity vortex that serves as a coherence (object) filter, to spatially separate object returns from clutter. The results of this work show high agreement with theoretical models of attenuation, the ability to detect object returns 100-1000x below clutter levels, and a simple method of detecting coherent light without complicated optical heterodyning. Potential commercial applications of the technology include improved devices and techniques for optical communications, transmissometry, and LiDAR in air, water, and space. There are a wide range of commercial and military applications including runway safety, chemical and biosensing, and tissue imaging.

 

Military applications may include helicopter guidance in a brownout, non-acoustic mine and explosive ordnance detection, non-acoustic anti-submarine warfare, and covert broadband communications.

 

Beam Steering

Beam steering is often the defining component of a lidar system, determining performance, size, and reliability. First-generation lidar systems use rotating assemblies of lasers, optics, and detectors, while second-generation lidar systems are beginning to use MEMS-based mirrors—each approach has substantial limitations.

 

The ideal beam-steering component of a lidar system has challenging and, at times, contradictory requirements. The two most-critical requirements are the optical aperture size and the scanning field of view. A large optical aperture is critical to the performance of the lidar system for two reasons. On receive, the aperture size determines the light-gathering capability of the system and thus drives the range, resolution, and frame rate. On transmit, a large aperture enables a larger-diameter laser beam in the near field of the system so that more laser power can be transmitted without exceeding the eye-safe power density.

 

Automotive lidar presents a unique challenge because these large-etendue systems require a large product of aperture size and field of view. In mechanical-scanning systems, the aperture size and field of view are often coupled: as aperture size increases, the field of view decreases in resonant scanning systems such as MEMS mirrors and galvanic mirrors.

 

In addition to large aperture size and field of view, it is highly desirable to have a beam-steering component that is truly solid-state and semiconductor-based. In combination with low-cost semiconductor lasers and detectors, the use of a solid-state beam-steering device can endow lidar systems with high reliability, low cost, and high manufacturability. Optical phased arrays implemented with silicon photonics represent one of the few existing solid-state beam steering technologies, but this approach exhibits limited aperture size and optical efficiency. Other approaches include liquid-crystal waveguides and traditional liquid-crystal-on-silicon (LCOS) spatial light modulators, which have limited switching speed and field of view.

 

Phased arrays or Self-Sweeping Laser Technology

Today’s LIDAR systems use lasers, lenses and external receivers, and they require mechanical wear components for beam steering that can limit scan rates, increase complexity and impact long-term reliability. Plus, they can cost between $1,000 and $70,000. A phased array is a row of transmitters that can change the direction of an electromagnetic beam by adjusting the relative phase of the signal from one transmitter to the next. Phased arrays allow non-mechanical beam steering in one dimension. To steer the beam in the second dimension, these systems typically use a grating that works like a prism, changing the direction of light based on its frequency. MIT researchers have developed a 64-by-64-element nanophotonic phased array (NPA) of silicon antennas that can take a single laser beam and send it wherever the user wants by tweaking voltages on the chip.
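
For a uniform linear array, the phase-to-angle relationship can be written as sin(θ) = λ·Δφ/(2π·d), where Δφ is the phase increment between adjacent emitters and d their spacing. The snippet below is an illustrative sketch of that relationship; the wavelength and element pitch are example values, not those of any specific chip.

```python
import math

# Illustrative sketch: steering angle of a uniform linear optical phased array.
# sin(theta) = wavelength * delta_phi / (2 * pi * spacing)

def steering_angle_deg(wavelength_m: float, spacing_m: float, delta_phi_rad: float) -> float:
    """Beam steering angle produced by a fixed phase increment between elements."""
    s = wavelength_m * delta_phi_rad / (2.0 * math.pi * spacing_m)
    return math.degrees(math.asin(s))

def phase_increment_for_angle(wavelength_m: float, spacing_m: float, angle_deg: float) -> float:
    """Inverse problem: phase step needed between adjacent elements for a target angle."""
    return 2.0 * math.pi * spacing_m * math.sin(math.radians(angle_deg)) / wavelength_m

# Example values: 1550 nm light, 2 micron element pitch, 90 degree phase step.
print(steering_angle_deg(1.55e-6, 2.0e-6, math.pi / 2))   # ~11.2 degrees
print(phase_increment_for_angle(1.55e-6, 2.0e-6, 11.2))   # ~pi/2 radians
```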

 

A team of UC Berkeley engineers led by Connie Chang-Hasnain, a professor of electrical engineering and computer sciences, used a novel concept to automate the way a light source changes its wavelength as it sweeps the surrounding landscape. In such LIDARs, as the laser sweeps, it must continuously change its frequency so that the system can calculate the difference between the incoming, reflected light and the outgoing light. This requires precise movement of mirrors within the laser cavity: to change the frequency, at least one of the two mirrors in the laser cavity must move precisely.

 

“The mechanisms needed to control the mirrors are a part of what makes current LIDAR and OCT systems bulky, power-hungry, slow and complex,” says study lead author Weijian Yang. “The faster the system must perform — such as in self-driving vehicles that must avoid collisions — the more power it needs.”

 

The novelty of the new design is the integration of the semiconductor laser with an ultra-thin, high-contrast grating (HCG) mirror. The HCG mirror, consisting of rows of tiny ridges, is supported by mechanical springs connected to layers of semiconductor material. The force of the light, an average force of just a few nanonewtons, causes the top mirror to vibrate at high speed. The vibration allows the laser to automatically change color as it scans.

 

Each laser can be as small as a few hundred micrometers square, and it can be readily powered by an AA battery. “Our paper describes a fast, self-sweeping laser that can dramatically reduce the power consumption, size, weight and cost of LIDAR devices on the market today,” says Chang-Hasnain, chair of the Nanoscale Science and Engineering Graduate Group at UC Berkeley. “The advance could shrink components that now take up the space of a shoebox down to something compact and lightweight enough for smartphones or small UAVs [unmanned aerial vehicles].”

 

Quanergy is one of the few companies to use this approach, having just introduced its S3-2 sensor for facility access control and people counting.

 

SiLC Technologies launches new Silicon Photonics LiDAR chip

Though pulsed LiDAR is the most widely used approach in the industry, its maximum range is dictated by its peak laser power, resulting in bright flashes of laser light that have been shown to be more detrimental to sensors than the newer frequency-modulated continuous-wave (FMCW) techniques. FMCW solutions transmit at more than three orders of magnitude lower peak laser power than current pulsed LiDAR solutions while providing improved range, instantaneous velocity measurement and interference-free multi-user operation.
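
In an FMCW lidar, the transmitted frequency is swept linearly (a chirp), and range is recovered from the beat frequency between the outgoing and returning light: R = c·f_beat·T_chirp/(2·B). The snippet below is an illustrative calculation of that relationship; the chirp parameters are example values, not SiLC's.

```python
# Illustrative sketch: range from the beat frequency of a linear FMCW chirp.
# R = c * f_beat * T_chirp / (2 * B), where B is the chirp bandwidth.
C = 299_792_458.0  # m/s

def range_from_beat(f_beat_hz: float, chirp_bandwidth_hz: float, chirp_duration_s: float) -> float:
    return C * f_beat_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

def beat_from_range(range_m: float, chirp_bandwidth_hz: float, chirp_duration_s: float) -> float:
    return 2.0 * range_m * chirp_bandwidth_hz / (C * chirp_duration_s)

# Example chirp: 1 GHz bandwidth over 10 microseconds; a 112 m target gives a ~75 MHz beat.
print(beat_from_range(112.0, 1e9, 10e-6))   # ~7.5e7 Hz
print(range_from_beat(7.47e7, 1e9, 10e-6))  # ~112 m
```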

 

SiLC Technologies, a silicon-photonics-based provider of integrated 4D vision LiDAR solutions, announced in January 2019 initial performance results for its integrated FMCW LiDAR test chip, highlighting its low-energy-consumption design. SiLC’s initial data demonstrate a 112 m detection range with just 4.7 mW of optical peak power. This is very promising, given that traditional pulsed LiDAR systems operate at peak laser power levels in the hundreds, and even thousands, of watts.

 

“Because our FMCW chip design requires a fraction of the power typically required for LiDAR, it is inherently more camera- and eye safe,” said Mehdi Asghari, CEO of SiLC Technologies. “Our integrated chip is a lidar for everything – a solution that can be applied, safely, within any application, and wherever you need it.”

 

A long-range FMCW LiDAR demands very high performance from its optical components, and the cost of those components has in turn limited the commercial deployment of such systems. SiLC’s technology platform is able to offer the performance levels required across all key specifications, including very low loss, ultra-low phase noise, polarization-independent operation, low back-reflection and high optical power handling. In addition, SiLC’s silicon photonics technology enables the integration of all key optical functions into a small, low-cost silicon chip. SiLC utilizes the silicon wafer fabrication processes that allow complex electronics chips to be cost-effectively mass manufactured for consumer applications, applying them to manufacture low-cost, high-volume complex optical devices such as LiDARs.

 

“We’re seeing significant market interest from automotive to augmented reality and biometrics. We believe this interest level will continue to rise as these industries increasingly depend on more sophisticated 3D vision systems. It’s our goal to become the LiDAR industry’s economic bridge to achieving best in class accuracy and range at a cost that will enable mass manufacturing – a true lidar for everything. It’s what our team does and a vision we’ve successfully executed on in telecom and data center optics for more than 20 years,” concluded Asghari.

 

MIT and DARPA Pack Lidar Sensor onto Single Chip

A team at MIT’s Photonic Microsystems Group has integrated a LIDAR system onto a single microchip that can be mass produced in commercial CMOS foundries, yielding a potential on-chip LIDAR system cost of about $10 each. Instead of a mechanical rotation system, optical phased arrays with multiple phase-controlled antennas emitting arbitrary beam patterns may make devices more robust.

 

MIT’s Photonic Microsystems Group is trying to take these large, expensive, mechanical lidar systems and integrate them on a microchip that can be mass produced in commercial CMOS foundries. The group’s lidar-on-a-chip work began with the development of 300-mm silicon photonics, putting the potential production cost on the order of $10 each at production volumes of millions of units per year.

 

These on-chip devices promise to be orders of magnitude smaller, lighter, and cheaper than lidar systems available on the market today. They also have the potential to be much more robust because of the lack of moving parts. The non-mechanical beam steering in this device is 1,000 times faster than what is currently achieved in mechanical lidar systems, and potentially allows for an even faster image scan rate. This can be useful for accurately tracking small high-speed objects that are only in the lidar’s field of view for a short amount of time, which could be important for obstacle avoidance for high-speed UAVs.

 

The researchers describe their device as a 0.5 mm x 6 mm silicon photonic chip with steerable transmitting and receiving phased arrays and on-chip germanium photodetectors. The laser itself is not part of these particular chips, but the group and others have demonstrated on-chip lasers that can be integrated in the future. In order to steer the laser beam to detect objects across the LIDAR’s entire field of view, the phase of each antenna must be controlled.

 

In this device iteration, thermal phase shifters directly heat the waveguides through which the laser propagates. The index of refraction of silicon depends on its temperature, which changes the speed and phase of the light that passes through it. As the laser passes through the waveguide, it encounters a notch fabricated in the silicon, which acts as an antenna, scattering the light out of the waveguide and into free space. Each antenna has its own emission pattern, and where all of the emission patterns constructively interfere, a focused beam is created without a need for lenses.

 

MIT’s current on-chip LIDAR system can detect objects at ranges of up to 2 meters, though the team hopes to achieve a 10-meter range within a year. The minimum range is approximately 5 cm, and the team has demonstrated centimeter-level longitudinal resolution and expects 3-cm lateral resolution at 2 meters. There is a clear development path towards LIDAR-on-a-chip technology that can reach a 100-meter range, with the possibility of going even farther.

 

“We believe that commercial lidar-on-a-chip solutions will be available in a few years,” say Christopher V. Poulton and Michael R. Watts. “A low-cost, low-profile lidar system such as this would allow multiple inexpensive lidar modules to be placed around a car or robot. These on-chip lidar systems could even be placed in the fingers of a robot to see what it is grasping because of their high resolution, small form factor, and low cost.”

 

DARPA’s SWEEPER, Lidar-on-a-Chip

DARPA’s Short-range Wide-field-of-view Extremely agile Electronically steered Photonic EmitteR (SWEEPER) program has successfully integrated breakthrough non-mechanical optical scanning technology onto a microchip. SWEEPER technology has demonstrated that it can sweep a laser back and forth using arrays of many small emitters that each put out a signal at a slightly different phase. The new phased array thus forms a synthetic beam that it can sweep from one extreme to another and back again more than 100,000 times per second, 10,000 times faster than current state-of-the-art mechanical systems.

 

It can also steer a laser precisely across a 51-degree arc, the widest field of view ever achieved by a chip-scale optical scanning system. The SWEEPER technology utilized a solid-state approach built on modern semiconductor manufacturing processes to place array elements only a few microns apart from one another, as required at optical frequencies. DARPA says there is every reason to expect the technology to lend itself to mass production, lowering the cost per chip and encouraging wide adoption in platforms such as cars and robocopters.

 

Beam steering using a liquid crystal metasurface

Metasurfaces are a new class of optical devices based on two-dimensional arrays of subwavelength optical elements, which show promise in applications such as flat optics and optical beam steering. The metasurface elements act as subwavelength phase or amplitude modulators, which can be static or dynamic. Arrays of these elements act in transmission or reflection to encode arbitrary optical functionality such as focusing, steering, and other kinds of wavefront manipulation.

 

Part of the promise of these devices lies in their ability to perform these complex optical functions using metasurfaces that are manufactured using standard lithographic techniques common in the semiconductor industry. For example, a lens can be created by spatially varying the width of dielectric Mie resonators composed of silicon (Si) pillars, thus encoding a parabolic phase profile.

 

Beam steering is arguably the most critical and enabling part of the lidar system. Traditional lidar systems are based on mechanical scanning, which creates issues with reliability, cost and form factor, and most critically limits the performance of existing systems. Lumotive has developed a revolutionary beam steering technology called a liquid crystal metasurface, which enables a truly solid-state lidar system with unprecedented resolution, range, and frame rate.

 

 

References and Resources also include:

http://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/mit-lidar-on-a-chip

https://www.elprocus.com/lidar-light-detection-and-ranging-working-application/

https://velodynelidar.com/newsroom/how-lidar-technology-enables-autonomous-cars-to-operate-safely/

https://www.militaryaerospace.com/defense-executive/article/14033507/air-force-research-lab-chooses-princeton-infrared-to-develop-ladar-detector-arrays-for-military-applications

https://www.autoevolution.com/news/breakthrough-in-lidar-technology-allows-for-cheaper-autonomous-vehicles-183977.html

 
