
CMOS image sensor (CIS) advancements enabling big growth in Smartphone camera, machine vision, medical imaging and security applications

Innovations in CMOS image sensor (CIS) technology continue to enhance the digital imaging landscape. While demand has been driven by smartphone makers, which leverage enhanced photo-taking capabilities to differentiate their devices from the competition, there is also a growing market for applications in the automotive, security, medical, and manufacturing sectors. Smartphone camera technology has advanced in leaps and bounds over the past couple of years and remains a major point of product differentiation. Image sensors sit at the heart of every camera system, converting incoming light into electronic signals.


At its most fundamental level, CIS technology is tasked with converting light from the camera lens into digital data to create a picture of what’s in view. When light energy in the visible wavelength range of 400 to 700 nm is focused onto the photodiode (PD) in the silicon substrate, the silicon absorbs the light energy and generates electron-hole pairs. The electrons generated in this process are converted into a voltage at the floating diffusion (FD) node and then into digital data by an analog-to-digital converter (ADC). The data is sent to a processor to create a digital description, usually an image, of what’s in view.
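This chain, from photodiode to floating diffusion to ADC, can be sketched numerically. The quantum efficiency, conversion gain and ADC parameters below are hypothetical values chosen only to make the arithmetic concrete:

```python
# Illustrative model of the CIS pixel signal chain:
# photons -> photoelectrons (QE) -> voltage at the floating diffusion
# (conversion gain) -> digital number out of the ADC.
# All parameter values are hypothetical, for illustration only.

def pixel_signal_chain(photons, qe=0.8, conv_gain_uV_per_e=60.0,
                       adc_bits=10, full_scale_mV=600.0):
    """Return the ADC code for a given number of incident photons."""
    electrons = photons * qe                              # photoelectrons in the PD
    voltage_mV = electrons * conv_gain_uV_per_e / 1000.0  # FD voltage swing
    lsb_mV = full_scale_mV / (2 ** adc_bits)              # ADC step size
    code = min(int(voltage_mV / lsb_mV), 2 ** adc_bits - 1)  # clip at saturation
    return code

print(pixel_signal_chain(1000))   # a mid-scale signal
print(pixel_signal_chain(20000))  # saturates at the 10-bit ceiling, 1023
```

Note how the model also captures saturation: once the floating-diffusion voltage exceeds the ADC full scale, the output clips at the maximum code.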


Depending on the image sensor used, cameras are termed charge-coupled devices (CCDs), electron-multiplying CCDs (EMCCDs), or complementary metal-oxide-semiconductor (CMOS) devices, including scientific CMOS (sCMOS). Both CMOS and CCD sensors are semiconductors; each type of integrated circuit has integrated pixels for light, and therefore image, capture. CCD-based cameras, however, often require external analog-to-digital conversion (ADC) circuitry. This addition increases the camera’s overall footprint, which may be critical when housings lack the needed real estate.


As the name suggests, CMOS image sensors are fabricated in standard CMOS technology and can be made at very low cost. Most integrated circuits, including computing and communication chips, are made in CMOS processes. This is a major advantage because it allows the sensor to be integrated with the other analog and digital circuits required for an imaging system, including digital signal processing functions such as image stabilization, image compression, multi-resolution imaging, wireless control, and color encoding. An integrated solution reduces power consumption and improves readout speed. More importantly, with a large number of integrated circuits being manufactured in these processes, the average cost of individual chips made in them is significantly lower than that of CCDs.


The CMOS image sensor has been a revolutionary technology for smartphones: through smaller pixels, higher sensitivity and lower noise at decreased cost, it has produced cameras that deliver fast, high-quality images. CMOS cameras have therefore been rapidly gaining in popularity because of their improved performance in speed, field of view, dynamic range, and image quality.


Enhanced image sensors have also benefitted machine vision, medical imaging and security applications. Instead of merely capturing an image for viewing by the human eye, CIS technology is now capturing data to power a host of new use cases, from autonomous vehicles and virtual reality (VR) to next-generation medical imaging and high-tech surveillance systems.

CIS Technology

Early CMOS sensors suffered from higher noise levels, which limited their ability to produce good-quality images. Nevertheless, the feature size of CMOS processes has consistently shrunk over the years, following the empirical Moore’s law. This means these processes offer the potential to make smaller pixels, and thereby more pixels per chip. In addition, processes with smaller transistor feature sizes also lower the power consumption of a typical transistor made in them.
Another way to improve image quality is to increase the spatial resolution of the imager. The effective resolution of an image sensor is calculated from test charts and Modulation Transfer Function (MTF) test results, although it is common practice to define it simply as the number of available pixels in the sensor. The number of pixels per chip has constantly increased over the years, from a few thousand to several million.

Active-pixel sensor (APS) architectures usually consume significantly less power, roughly a hundred times less than CCD sensors. In this type of sensor, an amplifier is incorporated into each pixel to increase pixel performance. This low power consumption makes CMOS image sensors well suited to compact battery-powered applications such as cell phones and laptops.


Thanks to their faster frame rates, CMOS APS architectures have become the imager of choice in machine vision and motion estimation applications, compared to passive-pixel (PPS) CMOS sensors and CCDs. In a digital-pixel sensor (DPS) device, each pixel has its own analog-to-digital converter and memory block; hence, the pixels in a DPS architecture output digital values proportional to light intensity.


A drawback of a CMOS image sensor is that several active devices in the readout path can produce time-varying noise. In addition, fabrication inconsistencies can lead to mismatch between the charge-to-voltage amplifiers of different pixels. This results in fixed-pattern noise, where different pixels produce different values even when exposed to uniform illumination.
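Because fixed-pattern noise is static from frame to frame, it can largely be calibrated out. A minimal sketch of the standard two-point (dark-frame and flat-field) correction, with made-up per-pixel mismatch values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel mismatch (the fixed-pattern noise):
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))   # gain mismatch (PRNU)
offset = 2.0 * rng.standard_normal((4, 4))        # offset mismatch (DSNU)

def capture(scene):
    """Simulate a raw frame distorted by fixed-pattern noise."""
    return gain * scene + offset

# Calibration frames: a dark frame (no light) and a flat field (uniform light).
dark = capture(np.zeros((4, 4)))
flat = capture(np.full((4, 4), 100.0))

def correct(raw):
    """Two-point correction: subtract offsets, normalise gains."""
    return (raw - dark) / (flat - dark) * 100.0

scene = np.full((4, 4), 42.0)            # uniform illumination
raw = capture(scene)
print(raw.std() > 1e-6)                  # True: raw frame shows pixel spread
print(np.allclose(correct(raw), 42.0))   # True: FPN removed after correction
```

In a real camera the dark and flat frames are averaged over many exposures so that temporal noise does not contaminate the calibration.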


The leading market differentiator in digital imaging to date has been the number of pixels in a camera. Despite some saturation in the market, pixel count is still one of the fundamental sales pitches for any image sensor. To further increase CIS resolution without excessively compromising the pixels’ charge recording capacity (or Full Well Capacity, FWC), pixel sharing is becoming an increasingly popular design. Pixel sharing allows parts of a pixel to be used by neighbouring pixels, increasing the detector area and, therefore, the pixel’s charge capturing capacity. This is particularly suitable for 4-T APS pixels, where the reset, source follower and row selection transistors can be shared while the TX gate isolates the photodiode of each pixel. Existing sensors present 2.5T, 1.75T and 1.5T pixel sharing configurations, where 2, 4 and 6 adjacent pixels, respectively, share the common pixel transistors.
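The arithmetic behind these sharing figures is straightforward: with three transistors (reset, source follower, row select) shared among n pixels and one TX gate per pixel, each pixel effectively costs (3 + n)/n transistors. A quick check reproduces the quoted configurations:

```python
# Effective transistor count per pixel when n adjacent 4-T APS pixels
# share the reset, source-follower and row-select transistors (3 shared)
# while each pixel keeps its own transfer (TX) gate.

def transistors_per_pixel(n_shared):
    return (3 + n_shared) / n_shared

for n in (1, 2, 4, 6):
    print(n, transistors_per_pixel(n))
# 1 -> 4.0 (plain 4-T), 2 -> 2.5T, 4 -> 1.75T, 6 -> 1.5T
```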


Rolling Shutter Artifacts

With many CMOS image sensors, the exposure cycles of different pixel rows start at slightly different times. Typically, the rows are reset in sequence from top to bottom. Once the integration time of a given row has elapsed, its readout starts; hence light integration proceeds in sequence from top to bottom, just like the reset process. This can cause a kind of distortion called rolling shutter artifact when capturing a fast-moving object, because the scene can change by the time all of the pixels have been captured. The rolling shutter artifact manifests itself as non-rigidity or bending of moving objects in the captured scene.
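The magnitude of the skew is easy to estimate: the sideways drift equals the object’s speed multiplied by the time the readout takes to sweep the frame. The sensor and motion parameters below are purely illustrative:

```python
# Back-of-envelope estimate of rolling-shutter skew: while the readout
# sweeps from the top row to the bottom row, a horizontally moving object
# drifts sideways, so vertical edges appear slanted.

def skew_pixels(rows, row_time_us, speed_px_per_ms):
    """Horizontal displacement (pixels) between the first and last row."""
    frame_scan_ms = rows * row_time_us / 1000.0
    return speed_px_per_ms * frame_scan_ms

# 1080 rows read at 10 us/row, object moving at 2 px/ms:
print(skew_pixels(1080, 10.0, 2.0))  # -> 21.6 pixels of slant
```

A global shutter eliminates this term entirely because all rows integrate over the same interval.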


Different types of CMOS image sensors have evolved to overcome these drawbacks and meet application requirements. They include backside-illuminated CIS, logarithmic CIS, high-speed CIS, global shutter CIS, smart CIS, high full-well-capacity CIS, ion image sensors, neural network CIS, pH CIS, and low-noise CIS.


Researchers are integrating ever more technologies into image sensors, including gigapixel sensors, 3D imaging and the Quanta Image Sensor, which also improve low-light performance and reduce noise. One new imaging technology can capture and count single photons with resolution as high as one megapixel and at speeds of thousands of frames per second. Called the Quanta Image Sensor, or QIS, this technology enables highly sensitive, high-quality, easy-to-manipulate digital imaging as well as computer vision and 3D sensing, even in low-light situations.


New Camera sensor technologies

The focal-plane array (FPA) is a two-dimensional (2-D) array of photodetectors (or pixels) fabricated on an electro-optical material. Modern digital cameras contain FPAs with pixel counts on the order of megapixels. For example, cameras with 2 megapixels are becoming obsolete, cameras with 5 megapixels are in decline but still a good value, and cameras with 10 megapixels are in the mainstream.


“Global Shutter” (GS) is a technical term referring to sensors that scan the entire area of an image simultaneously, in contrast to the earlier sequentially scanning “Rolling Shutter” (RS). In a GS sensor, the image is captured simultaneously using all pixels. The GS architecture includes an in-pixel memory structure and additional MOS transistors to provide this functionality. Today, many CIS devices have adopted GS mode to avoid motion distortion, although GS designs must in turn manage artifacts such as parasitic light sensitivity. CMOS image sensors with GS functionality are used in a variety of areas, including broadcasting, automotive, drone and surveillance applications.


BSI, 3D Stacked BSI, 3D Hybrid and 3D Sequential Integration are all key technologies that will affect future CIS technology adoption. BackSide Illumination (BSI) technology is a promising alternative to the commonly used FrontSide Illumination (FSI) technology.


BSI technology involves turning the image sensor upside down and applying color filters and micro lenses to the backside of the pixels, so that the sensor can collect light through the backside. BSI has a deep photodiode and a short optical path, leading to higher Quantum Efficiency (QE, the percentage of photons that are converted into electrons) and lower crosstalk (the diffusion of electrical charge, electrons or holes depending upon the pixel type, between adjacent pixels).


Advanced chip-stacking features a BSI CIS wafer joined with an image signal processor (ISP) wafer. The motivation to invest in stacked-chip CIS development varies somewhat by manufacturer, but can be summarized as: adding functionality, decreasing form factor, enabling flexible manufacturing options and facilitating optimization of each die in a 3D stack.


The development of curved image sensors may be the biggest advance in camera technology in decades, allowing for simpler, flatter lenses with larger apertures as well as dramatically better image quality. Nikon, Sony and Canon are reported to be in a race to develop and market curved-sensor cameras that operate with lens designs having fewer elements, less weight, less light loss, less internal reflection, less distortion and less aberration, all at lower cost.

Image sensors in the automotive industry

As more and more safety features are built into new vehicles, the automotive industry has become one of the fastest-growing new applications for CMOS image sensors. Sensors have already appeared on cars, predominantly in rear-view visibility systems and in advanced driver assistance systems (ADAS) for collision detection, blind spot monitoring and lane change assistance. It is predicted that within 10 years it will be common for vehicles to be equipped with 20 or more image sensors.


The breadth of applications is large—forward-looking vision for detection of obstacles such as pedestrians or animals on the road, range finding for navigation, in-cockpit monitoring of an operator’s vital signs to ensure that they are not drowsy or distracted, etc. Infrared imaging and 3D vision are each receiving significant attention and investment in anticipation of demand from the automotive sector. Another fast-growing application of 3D imaging is gesture control for gaming and user interfaces.

3D Imaging technologies

Automotive applications require 3D capability for imaging on short (sub-meter), medium (a few meters), and long (up to few hundred meters) length scales. Many three-dimensional technologies are being used to deliver 3D vision applications across various industries.


The three prominent ones are time-of-flight (ToF), which is essentially radar with light, where a pulsed or time-modulated light source is used to measure distance based on transit time and speed of light; binocular vision, which enables 3D vision via two sensors at a set distance apart on the camera; and structured light, which is a 3D technology that projects a pattern onto an object and then detects spatial shifts to determine how far away the object is.
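The two ToF variants reduce to simple formulas: pulsed (direct) ToF converts the round-trip time at the speed of light, while phase-based (indirect) ToF converts the phase shift of a modulated source. A sketch with illustrative inputs:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_s):
    """Pulsed (direct) ToF: the light travels to the target and back."""
    return C * round_trip_s / 2.0

def itof_distance(phase_rad, mod_freq_hz):
    """Indirect ToF: distance from the phase shift of a modulated source."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

print(dtof_distance(66.7e-9))        # ~10 m target (round trip of ~67 ns)
print(itof_distance(math.pi, 20e6))  # half the ambiguity range at 20 MHz
```

The indirect form also shows the classic iToF trade-off: raising the modulation frequency improves precision but shrinks the unambiguous range, which is why long-range automotive lidar favours pulsed or enhanced schemes.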


Advanced CMOS Image Sensor Chips for Lidar

Specialty foundry TowerJazz and Newsight Imaging have announced production of Newsight’s advanced CMOS image sensor (CIS) chips and camera modules, customized for very high-volume lidar and machine vision markets, combining sensors, digital algorithms and pixel array on the same chip. Newsight’s CIS chips are used in ADAS (advanced driver assistance systems) and autonomous vehicles as well as in drones and robotics.


Newsight’s image sensor chips are designed for high-volume, competitive applications requiring cost effectiveness, low power consumption, high performance, and analog and digital integration. The NSI3000 sensor family, currently in mass production at TowerJazz’s Migdal Haemek, Israel, facility, offers extremely high-sensitivity pixels, enabling the replacement of expensive CCD (charge-coupled device) sensors in many applications, and is designed for programmable high frame rates, allowing better analysis of and reaction to events.


In addition, Newsight’s NSI5000, currently in development with TowerJazz at its fab in Israel, is an integrated lidar solution for long-range applications and includes a DSP (digital signal processor) controller that enables complex calculations for depth and machine vision. NSI5000 is used in 3D-pulsed lidars for automotive applications and is based on Newsight’s eTOF (enhanced time-of-flight), which bridges the gap between short-distance iTOF (indirect time-of-flight) and the long-distance automotive requirement by extending the dynamic range while retaining high accuracy.



In spectroscopy, a sample is illuminated with a broadband light source, and the light coming out of the sample is collected and analyzed for its wavelength content. By comparing the source light to the collected light, the molecular content of the sample can be analyzed. Each material has a spectral signature formed by the peaks and valleys in its absorption and reflection spectra. Spectroscopy has traditionally required bulky and expensive sensing equipment, which has restricted its use to scientific applications. For this technology to be adopted more widely, spectrometers must be built with smaller, less costly CMOS sensors.
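A toy illustration of spectral-signature matching as described above; the reference spectra and material names are invented purely for demonstration:

```python
import numpy as np

# Classify an unknown sample by finding the reference spectrum it is
# closest to. Each spectrum is reflectance sampled at a few wavelengths.
references = {
    "water":   np.array([0.9, 0.7, 0.3, 0.1, 0.05]),
    "plastic": np.array([0.2, 0.4, 0.8, 0.6, 0.3]),
    "metal":   np.array([0.5, 0.5, 0.5, 0.5, 0.5]),
}

def identify(spectrum):
    """Return the reference material with the smallest spectral distance."""
    return min(references, key=lambda m: np.linalg.norm(references[m] - spectrum))

sample = np.array([0.85, 0.72, 0.28, 0.12, 0.06])  # noisy water-like spectrum
print(identify(sample))  # -> water
```

Real hyperspectral classifiers use many more bands and more robust metrics (spectral angle, learned models), but the principle of matching measured spectra against a signature library is the same.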


Researchers at ICFO have now overcome this obstacle, showing for the first time the monolithic integration of a CMOS integrated circuit with graphene, resulting in a high-resolution image sensor consisting of hundreds of thousands of photodetectors based on graphene and quantum dots (QDs). They incorporated it into a digital camera that is highly sensitive to UV, visible and infrared light simultaneously.


One of the more commonly cited high-volume uses is the detection of foreign or dangerous materials—for example, a hyperspectral imager mounted in a cell phone could alert diners to the presence of harmful bacteria or allergens, and such sensors mounted in rooms or automobiles could alert occupants to the presence of dangerous gases. Spectral imaging could also enable the detection of counterfeit products by purposely embedding markers with specific spectral signatures in products—otherwise invisible to the human eye, these markers would be instantly recognizable to a spectral imager.



New photography approach gives traditional cameras ultra-high imaging speeds

Researchers from the Institut National de la Recherche Scientifique (INRS) in Canada describe their new method, called compressed optical-streaking ultra-high-speed photography (COSUP), in The Optical Society (OSA) journal Optics Letters. They show the power of COSUP by using it to capture the transmission of a single laser pulse with a width of just 10 microseconds.


“COSUP has a wide range of potential applications because it can be integrated into many imaging instruments from microscopes to telescopes,” explained Jinyang Liang, an assistant professor at INRS and the corresponding author for the paper. “Using different CCD and CMOS cameras with COSUP also allows the method to be used for a wide range of wavelengths and for acquiring various optical characteristics such as polarization.”


The researchers say that the COSUP system might also be useful to the movie industry and sports videography, where high-speed cameras are used to capture detailed, quick movements for playback in slow motion. They are also working to miniaturize the system to allow high-quality slow-motion video capture with a smartphone.


Although today’s cameras are very sensitive and can be used with a wide range of wavelengths, their speed is typically limited because of the imaging sensor. Specialty high-speed cameras come with limiting trade-offs such as recording only a few frames at high speeds, one-dimensional imaging, low resolution, or a bulky and expensive setup. The researchers developed COSUP to work around these challenges by combining a computational approach called compressed sensing with an imaging method called optical streak imaging.


“COSUP has specifications similar to existing high-speed cameras with an imaging speed that is tunable from tens of thousands of frames per second to 1.5 million frames per second,” said Liang. “We used off-the-shelf components to create a very economical system.” To perform COSUP, compressed sensing is used to spatially encode each temporal frame of a scene using a digital micromirror device, or DMD. This process labels the capture time of each frame much like a unique barcode. Then a scanner is used to perform temporal shearing, creating an optical streak image — a linear image from which the temporal properties of light can be inferred — that is captured with a traditional camera in a single shot.
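The encode-shear-sum measurement described above can be sketched with a toy forward model. This builds only the streak image; the actual compressed-sensing reconstruction requires an iterative solver and is omitted. All sizes and patterns are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model of COSUP: each temporal frame is spatially encoded by
# a pseudo-random binary DMD pattern, sheared (shifted) by its frame index,
# and all frames accumulate into a single streak image captured by a
# conventional camera in one shot.
T, H, W = 8, 16, 16                    # frames, height, width (toy sizes)
scene = rng.random((T, H, W))          # the fast event being imaged
masks = rng.integers(0, 2, (T, H, W))  # one binary DMD code per frame

def streak_image(scene, masks):
    out = np.zeros((H, W + T))         # widened to hold the temporal shear
    for t in range(T):
        out[:, t:t + W] += masks[t] * scene[t]  # encode, shift, accumulate
    return out

streak = streak_image(scene, masks)
print(streak.shape)  # (16, 24): one 2-D snapshot encodes all 8 frames
```

Because each frame carries its own mask (the “barcode” in the quote above), the mixed streak remains invertible: a sparsity-regularised solver can separate the frames again.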


“Even though the streak image contains a mixture of 2D space and time information, we can separate the data using reconstruction because of the unique labels attached to each temporal frame,” said Xianglei Liu, a doctoral student at INRS and the lead author of the paper. “This gives COSUP a 2D imaging field of view that can record hundreds of frames in each movie at 1.5 million frames per second and a resolution of 500 × 1000 pixels.”


The researchers demonstrated COSUP by imaging a short-lived event with a CMOS camera. In the experiment, they fired four laser pulses, each with a pulse width of 300 microseconds, through a mask with the letters USAF. Using COSUP at an imaging speed of 60,000 frames per second, they were able to record this event in 240 frames. By increasing the imaging speed to 1.5 million frames per second, they recorded a single 10-microsecond laser pulse transmitting through the USAF mask.


The researchers are also working to make the bench-top system compact enough for use outside the lab and, eventually, for incorporation into smartphones. They have initiated an industrial collaboration with Axis Photonique to further develop COSUP toward a commercial product.


Quanta Image Sensor (QIS)

Gigajot Technology, inventors of the Quanta Image Sensor (QIS), announced the first QIS products in May 2021, marking the dawn of a new era in solid-state imaging. The CMOS-based QIS devices utilize Gigajot’s patented sensor architecture and pixel design to achieve record low noise that enables accurate detection of individual photons of light.


The new QIS products are capable of photon counting at room temperature while operating at full speed, and achieving high dynamic range – all in small pixel, high resolution formats. With 5-10x read noise improvement over conventional small pixel image sensors, QIS enables imaging at ultra-low light levels not previously possible.


Gigajot’s pioneering QIS products target high performance imaging applications such as scientific, medical, defense, industrial, and space. The 16-megapixel GJ01611 utilizes a 1.1-micron pixel to achieve room temperature 0.19 electron read noise and less than 0.09 electron/second/pixel dark current, while the 4-megapixel GJ00422 employs a 2.2-micron pixel and provides 0.27 electron read noise with single-exposure high dynamic range of 100 dB. The device was implemented in a commercial stacked (3D) backside-illuminated CMOS image sensor process.
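With read noise this far below one electron, thresholding the output at half an electron distinguishes zero photons from one with very few errors. A standard Gaussian-detection estimate (not a Gigajot-published figure) makes the point:

```python
import math

# For Gaussian read noise of sigma electrons and a decision threshold at
# 0.5 e-, the probability of misclassifying "0 photons" vs "1 photon" is
# the Gaussian tail beyond half an electron:
#   P(err) = 0.5 * erfc(0.5 / (sigma * sqrt(2)))

def bit_error_rate(sigma_e):
    return 0.5 * math.erfc(0.5 / (sigma_e * math.sqrt(2.0)))

print(bit_error_rate(0.19))  # well under 1% at 0.19 e- read noise
print(bit_error_rate(0.50))  # ~16%: photon counting breaks down at 0.5 e-
```

This is why sub-0.3 e- read noise is usually quoted as the practical threshold for reliable photon-number resolving.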


In the QIS, the goal is to count every photon that strikes the image sensor, to provide resolution of 1 billion or more specialized photoelements (called jots) per sensor, and to read out jot bit planes hundreds or thousands of times per second, resulting in terabits per second of data. The work involves the design of deep-submicron jot devices, low-noise high-speed readout electronics, and novel ways of forming images from sequential jot bit planes, at both the modeling and simulation level, together with the characterization of actual devices and circuits.
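The terabits-per-second figure follows directly from the jot count and the bit-plane rate:

```python
# Order-of-magnitude check on the readout claim: a billion jots, each
# producing one bit per bit plane, read out a thousand times per second.

def readout_terabits_per_s(jots, bit_planes_per_s):
    return jots * bit_planes_per_s / 1e12

print(readout_terabits_per_s(1e9, 1000))  # -> 1.0 Tb/s
```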


Their new sensor can significantly enhance low-light sensitivity. This is particularly important in applications such as “security cameras, astronomy, or life science imaging (like seeing how cells react under a microscope), where there’s only just a few photons,” says Fossum. His group is also investigating the use of image sensors in medicine and the life sciences: photon-counting X-ray image sensors are being explored with a major medical equipment company, and the application of photon-counting QIS technology to low-light fluorescence lifetime imaging microscopy (FLIM) is also being studied. Finally, the group is pursuing innovative designs and applications of CMOS image sensors to improve photography and scientific and industrial imaging, including low-light and high-speed applications.


Photon counting and reliable photon-number resolving, until now only partially achievable using esoteric EMCCD technology in highly controlled laboratory environments, are now possible with a compact form-factor camera operating at room temperature, with the additional benefits of higher resolution and speed. “The ability to do photon counting at room temperature is a game-changer for our research efforts in Astrophysics and Quantum Information Science,” said Dr. Don Figer, Director of the Center for Detectors and the Future Photon Initiative in the College of Science, Rochester Institute of Technology.


ISS (Intelligent Surveillance Systems) CIS Applications

Surveillance systems have become part of human life, deployed for safety and security to deter theft and attacks and to help police departments catch culprits and burglars. However, cameras cannot be placed in restrooms due to privacy concerns, so falls by elderly people there cannot be monitored. Many people are also uncomfortable with video recording in open public places, so privacy-preservation policies have recently been demanded; yet it is hard to tell regular cameras and privacy-preserving cameras apart. Nakashima et al. developed a privacy-preservation sensor for person detection that identifies a person’s state and position without capturing images.


A privacy-preservation sensor detects a person’s position by differentiating background brightness from object brightness in a one-dimensional manner. Mounted vertically, the sensor can detect whether a person has fallen or is standing; mounted horizontally, it can identify the person’s position.


Yan et al. developed an uncovered CMOS camera that can detect nuclear radiation while working in surveillance mode, as shown in Figure 13a. Videos were recorded with this camera while a volunteer wandered around the room in the camera’s view. Whenever radiation particles excite electrons in the CMOS sensor, in addition to those excited by visible light, bright blotches appear in the captured frames, as seen in Figure 13b,c.


Military Applications

Recent technology advances in CMOS image sensors (CIS) enable their use in the most demanding surveillance fields: visual surveillance and intrusion detection in intelligent surveillance systems, aerial surveillance in war zones, Earth environmental monitoring by satellites, agricultural monitoring using wireless sensor networks and the Internet of Things, and driver assistance in automotive applications.


SiOnyx, LLC, an innovator in advanced CMOS image sensors and high-performance camera systems, announced in January 2019 a $19.9M award for the delivery of digital night vision cameras for the IVAS (Integrated Visual Augmentation System) program. Under the award, SiOnyx will deliver low-light camera modules within two years for prototyping of low-light and night vision capabilities in the IVAS system.


The Integrated Visual Augmentation System (IVAS) is designed to incorporate head, body, and weapon technologies on individual Soldiers. It is a single platform on which Soldiers and Marines can fight, rehearse, and train, providing the increased lethality, mobility and situational awareness necessary to achieve overmatch against current and future adversaries. The system includes a squad-level combat training capability, vital for the repeated training iterations and rehearsals needed to ensure future battlefield success.


CMOS image sensors (CIS) Industry

Yole Développement (Yole) envisions a growing future for the CIS industry, at a 5.7% CAGR. This growth shows increasingly tight links to the growth of the semiconductor industry in general. Mobile remains the most important CIS market, while security and automotive have emerged in parallel.


The “Global CMOS Image Sensors Market – by Technology, Image Processing Type, Application, and Region – Market Size, Demand Forecasts, Industry Trends and Updates (2018-2025)” report has been added to ResearchAndMarkets.com’s offering. The market was valued at USD 10.17 billion in 2018 and is estimated to grow at a CAGR of 9.15% over the forecast period, reaching USD 18.77 billion by 2025.
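The report’s endpoints are consistent with its stated CAGR, as a one-line compound-growth check confirms:

```python
# Sanity check of the report's numbers: USD 10.17 B in 2018 growing at a
# 9.15% CAGR for the 7 years to 2025.

def project(value, cagr, years):
    return value * (1 + cagr) ** years

print(round(project(10.17, 0.0915, 2025 - 2018), 2))  # ~18.77 (USD billion)
```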


The major driver for this market is the increasing use of electronics in the consumer market; consumer electronics is expected to lead the CMOS image sensor market. These sensors provide better picture quality than CCDs, making them more desirable to consumers. Most major suppliers are responding to this shift in what drives sales growth, with several companies aiming to become the largest supplier by the end of 2020.


Yole notes that the mobile market is key for the CMOS image sensor (CIS) industry, as the introduction of dual and 3D cameras is changing the industry’s drivers from form factor and image quality to interactivity. Penetration into higher value-added markets such as automotive, security, and medical has also contributed to overall growth, as have new applications including drone photography, biometric identification, and augmented reality.


In 2019, Sony was the largest CIS player, with a growing market share of 42%; Yole puts Samsung and OmniVision next with 21% and 10% market share, respectively. In fourth place, STMicroelectronics has emerged as a competitor in the NIR sensing space. Other vision companies include Basler, CMOSIS, Cognex, Fairchild Imaging, FLIR, New Imaging Technologies, odos imaging, Omron, Point Grey, Teledyne e2v, Teledyne DALSA, and Toshiba Teli, among others.


The automotive industry has the greatest potential, by far, with forecasts estimating that automotive applications will drive an increase of $1.83 billion in CMOS sensor revenue from 2015 to 2020, according to a presentation by Mentor Graphics with data from IC Insights. Smaller, adjacent markets like machine vision and medical imaging have taken advantage of this massive technology investment by re-using the same technologies.

