DARPA developing software-reconfigurable Passive and Active (LIDAR) imaging sensors from ultraviolet (UV) through very long-wave infrared (VLWIR)

Digital cameras use a focal plane array (FPA) to convert light energy into electrical energy that can be processed and stored. The FPA is a two-dimensional (2-D) array of photodetectors (or pixels) fabricated on an electro-optical material. Modern digital cameras contain FPAs that have pixel counts on the order of megapixels.


Today’s imaging systems primarily perform only a single or limited set of measurements due in part to the underlying readout integrated circuits (ROICs), which sample the signal of interest and transfer these values off of the chip. ROICs are typically designed for a very specific mode of operation, and in essence are application specific integrated circuits (ASICs).


Compared to single-band sensors, the capability to detect scene radiance in both the MWIR and LWIR spectral bands offers important advantages across a wide range of weather conditions, in the presence of battlefield obscurants, and against active infrared countermeasures.


Now U.S. military researchers are working with four defense contractors to develop concepts and demonstrate architectures for software-reconfigurable multi-function imaging sensors. The resulting camera technology will incorporate functions that are normally not accessible within a single focal plane array (FPA) by configuring regions of interest (ROIs) that operate independently of other regions of the array, and by reconfiguring the measurements made in the imaging array in response to the scene.


An imaging system that autonomously extracts the most relevant information, using a single sensor, and based only on the context in the scene would revolutionize a wide variety of military and commercial applications.


Reconfigurable capability also could enable users to optimize this imaging sensor for any spectral band, such as ultraviolet (UV) through very long-wave infrared (VLWIR). Separate regions of the focal plane array could run separately at high resolution, or at a high frame rate. In this way, the sensor could perform real-time analysis on much more complex scenes than traditional systems to produce more actionable information to the warfighter than ever has been possible from a single imaging sensor.


Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., have awarded contracts to DRS Network & Imaging Systems LLC in Melbourne, Fla.; Voxtel Inc. in Beaverton, Ore.; The BAE Systems Electronic Systems segment in Merrimack, N.H.; and the Lockheed Martin Corp. Missiles and Fire Control segment in Orlando, Fla., for the Reconfigurable Imaging (ReImagine) program.


This requires the development of a software-configurable array that enables simultaneous and distinct imaging modes in different ROIs, providing capabilities that previously required multiple sensors. It also requires algorithms that adapt the sensor configuration in real time based on context, and that create a consistent marketplace for information, seeking to maximize the value of making one measurement relative to the cost of missing others.
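To make the "simultaneous and distinct imaging modes in different ROIs" idea concrete, the sketch below models a per-region configuration table for a software-definable sensor. The class and field names (`ROIConfig`, `SensorConfig`, `mode`, `frame_rate_hz`) are illustrative assumptions, not the ReImagine program's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ROIConfig:
    x0: int              # left pixel bound of the region
    y0: int              # top pixel bound
    x1: int              # right pixel bound (exclusive)
    y1: int              # bottom pixel bound (exclusive)
    mode: str            # e.g. "passive", "lidar", "event"
    frame_rate_hz: float # readout rate for this region

@dataclass
class SensorConfig:
    width: int
    height: int
    rois: list = field(default_factory=list)

    def add_roi(self, roi: ROIConfig) -> None:
        # Regions must lie inside the array; overlap policy is left open here.
        assert 0 <= roi.x0 < roi.x1 <= self.width
        assert 0 <= roi.y0 < roi.y1 <= self.height
        self.rois.append(roi)

# A full-frame passive region at video rate, plus a small fast LIDAR window.
cfg = SensorConfig(width=1024, height=1024)
cfg.add_roi(ROIConfig(0, 0, 1024, 1024, mode="passive", frame_rate_hz=30.0))
cfg.add_roi(ROIConfig(400, 400, 528, 528, mode="lidar", frame_rate_hz=1000.0))
print([(r.mode, r.frame_rate_hz) for r in cfg.rois])
```

Because the configuration is just data, a controller could rewrite it frame to frame in response to the scene, which is the reconfigurability the program describes.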


The project seeks to develop a software-reconfigurable multimodal imaging system with functions that are not normally accessible within a single focal plane array: reconfigurable regions can operate independently of the rest of the array, and the measurements made within the array can themselves be reconfigured. The idea is to develop an imaging focal plane array that can adapt to different conditions and modes of operation, collecting the most valuable information in the scene. Much as a field-programmable gate array (FPGA) can be programmed after fabrication, a reconfigurable imaging sensor could be configured with imaging modes defined after the array is designed.


A software-configurable array that enables simultaneous and distinct imaging modes in different regions of interest could provide this capability, experts say.




On May 30, Lockheed Martin received a potential $10.2 million contract for the Reconfigurable Imaging project; on June 5, DRS received a potential $10.1 million contract; on June 1, BAE Systems received a potential $7.5 million contract; and on May 30, Voxtel received a potential $5.2 million contract.

Software Reconfigurable Multifunction Imaging Sensors Program

Over the last decade, the emergence of imaging arrays with in-pixel analog-to-digital conversion (ADC) has enabled innovative concepts for FPAs with wide dynamic range and in-pixel processing. Similar pixel architectures have been used for high performance light detection and ranging (LIDAR) measurements with both framed and asynchronous operation. However, pixel pitches for arrays that both digitize and accumulate signals in the pixel remain at 20 μm or larger, and these designs are typically fixed-logic ASICs.
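The wide dynamic range enabled by in-pixel ADC can be illustrated with a toy model: an analog pixel saturates once at its well capacity, while a pixel that digitizes each sub-frame and accumulates counts digitally can keep counting. All numbers below are illustrative, not measured device parameters.

```python
WELL_CAPACITY = 1000  # electrons per analog integration (illustrative)
SUBFRAMES = 16        # sub-frames accumulated per output frame

def analog_frame(flux_per_subframe: int) -> int:
    # One long analog integration, clipped once at the well capacity.
    return min(flux_per_subframe * SUBFRAMES, WELL_CAPACITY)

def digital_frame(flux_per_subframe: int) -> int:
    # Digitize each sub-frame (each clipped at the well), then sum digitally;
    # the digital accumulator has no well limit, extending dynamic range.
    return sum(min(flux_per_subframe, WELL_CAPACITY) for _ in range(SUBFRAMES))

bright = 800  # electrons arriving per sub-frame
print(analog_frame(bright), digital_frame(bright))  # analog clips, digital does not
```

The catch noted in the text is that fitting the ADC and accumulator into each pixel has so far kept pixel pitches at 20 μm or larger, motivating the move to advanced CMOS nodes.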


Using an advanced node complementary metal-oxide semiconductor (CMOS) process provides an opportunity to both reduce pixel pitch and also insert sufficient programmable logic to enable a software definable platform. In addition, separating the analog components that interface with the detector into a separate layer with per-pixel interconnects introduces the ability to customize an application agnostic all-digital layer for a wide range of applications.


The ReImagine program aims to demonstrate that a single ROIC architecture can be configured to accommodate multiple modes of imaging operations that may be defined after the chip has been designed. With the use of 3-D integration, it will be possible to customize the sensor to interface with virtually any type of imaging sensor (e.g. photodiode, photoconductor, avalanche photodiode, or bolometer) and to optimize it for any spectral band (e.g. ultraviolet (UV) through very long-wave infrared (VLWIR)).


In addition to multiple passive imaging functions, the ability to incorporate range detection into a high resolution, low noise imaging system offers a potentially revolutionary capability. LIDAR systems today are predominantly scanning devices that contain large moving components and do not provide high quality context imagery. 2-D imaging LIDAR systems have been demonstrated and are able to acquire 3-D imagery in framing or asynchronous modes. Both direct detect and coherent receiver arrays have been demonstrated, each with distinct advantages for different applications. However, in all cases, high data rates limit the spatial resolution of the sensor, and the combination of passive imaging and active LIDAR modes in a large (> 1 megapixel) array has not yet been demonstrated. A ReImagine dual-mode sensor would provide the ability to collect high data rate LIDAR measurements within a configurable ROI, while continuing to measure passive context imagery.


  1. Technical Area 1 (TA1): Single or multi-color passive imager architecture and algorithms

TA1 aims to design and develop a single or multi-color passive camera architecture and supporting algorithms, which can support a variety of technical objectives that are not currently possible from a single FPA. Spectral bands of interest span from UV to the very long-wave infrared (VLWIR), or wavelengths approximately 0.25 μm – 14 μm, and should be driven by the proposed application. Multi-color imagers may be designed to integrate the signal from different spectral bands either simultaneously or consecutively.
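The 0.25–14 μm span covers several conventionally named bands. The helper below maps a wavelength to its band; the exact band edges vary by community, so the boundaries here are approximate conventions, not a program specification.

```python
# Approximate conventional spectral-band boundaries, in micrometers.
BANDS = [
    ("UV",    0.01, 0.4),
    ("VIS",   0.4,  0.7),
    ("NIR",   0.7,  1.4),
    ("SWIR",  1.4,  3.0),
    ("MWIR",  3.0,  5.0),
    ("LWIR",  8.0,  14.0),
    ("VLWIR", 14.0, 30.0),
]

def band_of(wavelength_um: float) -> str:
    for name, lo, hi in BANDS:
        if lo <= wavelength_um < hi:
            return name
    return "other"  # e.g. the 5-8 um atmospheric absorption region

print(band_of(0.25), band_of(4.2), band_of(10.0))  # UV MWIR LWIR
```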


  2. Technical Area 2 (TA2): Hybrid active/passive imager architecture and algorithms

TA2 aims to design and develop a hybrid active/passive imager architecture, where passive mode operation is based on traditional intensity measurements across an image array, and active mode is based on time-of-flight (TOF) measurements for 3-D range information (e.g. LIDAR mode).

Moreover, the array can be configured to perform active mode measurements in specific ROIs, while simultaneously operating in passive mode in the remainder of the array. While TA2 efforts should demonstrate 3-D mode operation with integrated laser sub-systems, TA2 proposals should leverage existing laser sources and pointing systems. Proposals for other types of active mode imaging are of interest and should apply to TA2.
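The time-of-flight measurement underlying the active mode reduces to range = c·t/2, since the laser pulse travels to the target and back. A minimal sketch, with an illustrative timing value:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    # Direct-detect TOF: half the round-trip distance at the speed of light.
    return C * round_trip_s / 2.0

# A return detected about 6.671 microseconds after the pulse fires
# corresponds to a target roughly 1 km away.
print(round(tof_range_m(6.671e-6)))  # 1000
```

The same arithmetic shows why timing precision matters: each nanosecond of timing uncertainty corresponds to about 15 cm of range uncertainty.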


  3. Technical Area 3 (TA3): Innovative concepts for imaging systems with internal feedback

TA3 will explore adaptive algorithms for reconfigurable imaging systems. The flow of information in today’s imaging systems is exclusively from the sensor to image processing and/or the user, and object, gesture, or activity recognition algorithms use data with parameters that do not change over time. The ReImagine architecture endeavors to provide an imaging system that can change the nature of data being measured, either spatially, temporally, or spectrally; either as intensity or time; and either frame-, change-, or event-driven. TA3 proposals should explore new concepts in active learning that can determine the type of data that should be collected, both as a function of location and time. The algorithms should maximize information content and enable decisions, based on the context of the scene and the predicted value of various types of data.
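One simple way to picture the "predicted value of various types of data" idea is a controller that assigns each candidate measurement an estimated value (e.g. expected information gain) and a cost (e.g. readout bandwidth), then greedily selects measurements by value density under a budget. This is a toy sketch of the concept; the names, numbers, and greedy policy are illustrative assumptions, not TA3's actual algorithms.

```python
def select_measurements(candidates, budget):
    """Greedy value-density selection under a total-cost budget.

    candidates: list of (name, estimated_value, cost) tuples.
    """
    chosen, spent = [], 0.0
    # Prefer measurements with the highest value per unit cost.
    for name, value, cost in sorted(candidates, key=lambda c: c[1] / c[2],
                                    reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

candidates = [
    ("lidar_roi_target", 9.0, 5.0),  # high value, high bandwidth cost
    ("passive_context",  4.0, 1.0),  # cheap wide-area imagery
    ("spectral_sample",  2.0, 2.0),
]
print(select_measurements(candidates, budget=6.0))
```

A real closed-loop system would re-estimate the values each frame from scene context, which is exactly the feedback path from processing back to the sensor that today's fixed imaging pipelines lack.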



About Rajesh Uppal
