
Digital Camera design and manufacturing

A digital camera is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film.

 

Digital still cameras and digital movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit a controlled amount of light to the image, just as with film, but the image pickup device is electronic rather than chemical.

 

Unlike film cameras, digital cameras do not use chemical agents (film) and sometimes lack an optical viewfinder, which is typically replaced by a liquid crystal display (LCD). At the core of a digital camera is a semiconductor device, such as a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor, which measures the intensity and color (using different filters) of the light transmitted through the camera’s lenses. When light strikes the individual light receptors, or pixels, on the semiconductor, an electric current is induced and is translated into binary digits for storage in a digital medium such as flash memory (semiconductor devices that do not need power to retain data). Unlike film cameras, digital cameras can display images on a screen immediately after they are recorded.

 

While dedicated digital cameras are still produced, many more are now incorporated into mobile devices such as smartphones, with capabilities and features comparable to dedicated cameras. High-end, high-definition dedicated cameras remain common among professionals and those who want higher-quality photographs. Digital cameras are now present everywhere and have become the principal image-capturing tool.

 

Digital cameras come in a wide range of sizes, prices and capabilities. In addition to general-purpose digital cameras, specialized cameras, including multispectral imaging equipment and astrographs, are used for scientific, military, medical, and other special purposes. Compact digital cameras typically contain a small sensor, which trades off picture quality for compactness and simplicity; images can usually only be stored using lossy compression (JPEG). Most have a built-in flash, usually of low power, sufficient for nearby subjects.

 

The final quality of an image depends on all the optical transformations in the chain that produces it. Carl Zeiss, the German optician, pointed out that the weakest link in an optical chain determines the final image quality. In the case of a digital camera, a simple way to describe this concept is that the lens determines the maximum sharpness of the image, while the image sensor determines the maximum resolution.

 

The resolution of a digital camera is often limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value that is read for that pixel. The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power.

 

The number of pixels in the sensor determines the camera’s “pixel count”. In a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000 pixel sensor would have 1,000,000 pixels, or 1 megapixel. A full-color image in the RGB color model requires three intensity values for each pixel: one each for red, green, and blue (other color models, when used, also require three or more values per pixel). A single sensor element cannot record these three intensities simultaneously, so a color filter array (CFA) must be used to selectively filter a particular color for each pixel.
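The pixel-count arithmetic above can be sketched in a few lines of Python (the 1,000 by 1,000 sensor is the example from the text; the RGB888 storage figure assumes one byte per channel):

```python
# Sketch: pixel count and raw storage size for the example sensor above.
rows, cols = 1000, 1000
pixel_count = rows * cols            # product of rows and columns
megapixels = pixel_count / 1_000_000

# A full-color RGB image needs three intensity values per pixel;
# at one byte per channel (RGB888) that is three bytes per pixel.
bytes_per_pixel = 3
raw_bytes = pixel_count * bytes_per_pixel

print(megapixels)   # 1.0
print(raw_bytes)    # 3000000 bytes of uncompressed RGB data
```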

 

Some digital cameras can crop and stitch pictures and perform other elementary image editing. Many cameras with a small sensor use a back-side-illuminated CMOS (BSI-CMOS) sensor. The image processing capabilities of the camera determine the final image quality much more than the sensor type.

 

Pixel Formats

There are many pixel formats. Some of the simplest include monochrome and RGB. In monochrome images, each pixel is stored as 8 bits, representing grayscale levels from 0 to 255, where 0 is black, 255 is white and the intermediate values are shades of gray. In the RGB color model, any color can be decomposed into red, green and blue light at different intensities; because colors are made by mixing these three, it is also called an “additive” color system: it starts at black, and color is then added. Using this model, each pixel must be stored as three intensities, one for each of the red, green and blue channels. The most common format is RGB888.

In this format each pixel is stored using 24 bits – the red, green and blue channels are stored in 8 bits each:

RRRRRRRRGGGGGGGGBBBBBBBB

For instance, the color red would be stored in binary, as 24 bits: 111111110000000000000000, or, as commonly shown in hexadecimal: FF0000
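The RGB888 layout above can be sketched as bit-shifting in Python (a minimal illustration; real image libraries handle this packing internally):

```python
# Sketch: packing an RGB888 pixel into 24 bits, as described above.
def pack_rgb888(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channel intensities into one 24-bit value:
    RRRRRRRRGGGGGGGGBBBBBBBB."""
    return (r << 16) | (g << 8) | b

red = pack_rgb888(255, 0, 0)
print(f"{red:06X}")   # FF0000
print(f"{red:024b}")  # 111111110000000000000000
```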

Image Compression

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.

 

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, and comics. Some lossless compression methods are:

Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA and TIFF
Area image compression
Predictive coding – used in DPCM
Entropy encoding – the two most common entropy encoding techniques are arithmetic coding and Huffman coding
Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
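The first method in the list, run-length encoding, is simple enough to sketch directly (a minimal illustration of the idea, not the actual PCX byte format):

```python
# Sketch of run-length encoding (RLE): runs of identical bytes are
# stored as (count, value) pairs, which compresses well when an image
# row contains long runs of the same color.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)   # extend the current run
        else:
            runs.append((1, b))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([v]) * n for n, v in runs)

row = bytes([255] * 6 + [0] * 4)      # six white pixels, four black
encoded = rle_encode(row)
print(encoded)                        # [(6, 255), (4, 0)]
assert rle_decode(encoded) == row     # lossless round trip
```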

 

Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.

Methods for lossy compression:

Transform coding – This is the most commonly used method.
Discrete Cosine Transform (DCT) – The most widely used form of lossy compression. It is a type of Fourier-related transform, and was originally developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. The DCT is sometimes referred to as “DCT-II” in the context of a family of discrete cosine transforms (see discrete cosine transform). It is generally the most efficient form of image compression. DCT is used in JPEG, the most popular lossy format, and the more recent HEIF.

JPEG (Joint Photographic Experts Group)
• Popular standard format for representing compressed digital images
• Provides a number of different modes of operation
• The DCT (discrete cosine transform) mode provides high compression ratios:
 Image data is divided into blocks of 8 x 8 pixels
 Three steps are performed on each block: DCT, quantization and Huffman encoding
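The first two per-block steps can be sketched in Python; below is a naive 2-D DCT-II followed by quantization on one 8x8 block (the flat test block and the step size of 16 are illustrative assumptions, and the Huffman step is omitted):

```python
import math

# Sketch of the first two JPEG steps on one 8x8 block.
N = 8

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (O(N^4); real codecs use fast DCTs)."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat block: all energy ends up in the DC coefficient.
block = [[128] * N for _ in range(N)]
coeffs = dct2(block)
print(round(coeffs[0][0]))   # 1024 (DC term = 8 x the average value)

# Quantization: divide by a step size and round, discarding fine detail.
quantized = [[round(coeffs[u][v] / 16) for v in range(N)] for u in range(N)]
print(quantized[0][0])       # 64
```

Quantization is where the loss happens: small high-frequency coefficients round to zero, and the long zero runs are what Huffman encoding then compresses so well.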

 

Requirements

• Initial specifications may be very general and come from the marketing department, e.g. a short document detailing the market need for a low-end digital camera:
 Captures and stores at least 50 low-resolution images and uploads them to a PC

 Costs around $100, with a single medium-size IC costing less than $25

 Has as long a battery life as possible
 Has an expected sales volume of 200,000 units if market entry is within 6 months, 100,000 if between 6 and 12 months, and insignificant sales beyond 12 months

 

A real digital camera has more features: variable-size images, image deletion, digital stretching, zooming in/out, etc.

System requirements – what the system should do

Functional requirements – the system’s behavior

Captures images:

  • Digital recording and display of pictures
  • Stores images in digital format: multiple images are stored in the camera, and the number depends on the amount of memory and the bits used per image. Pictures are saved permanently as files in a standard format on a flash-memory stick or card
  • Processing to obtain pictures of the required brightness, contrast and color
  • Downloads images to a computer system (PC)
  • Transfers files to a computer and printer through a USB port

Functions of the system
 A color dot-matrix LCD displays the picture before shooting, enabling manual adjustment of the view.
 For shooting, the shutter button is pressed; a charge-coupled device (CCD) array placed at the focus generates a byte stream after an ADC operates on the analog output of each CCD cell.

Inputs:
 Intensity and color values for each pixel in the horizontal and vertical rows and columns of a picture frame
 Intensity and color values for the unexposed (dark) area in each row and column of pixels
 User control inputs
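The per-cell analog-to-digital conversion mentioned above can be sketched as an ideal 8-bit ADC (the 3.3 V reference and 8-bit depth are illustrative assumptions, not figures from the specification):

```python
# Sketch: an ideal 8-bit ADC mapping a CCD cell's analog voltage to a
# digital code, as performed on each cell's output after the shutter fires.
def adc_8bit(voltage: float, v_ref: float = 3.3) -> int:
    """Quantize 0..v_ref volts to an 8-bit code, clamping out-of-range input."""
    code = int(voltage / v_ref * 255)
    return max(0, min(255, code))

print(adc_8bit(0.0))    # 0   (dark cell)
print(adc_8bit(3.3))    # 255 (saturated cell)
print(adc_8bit(1.65))   # 127 (mid-scale)
```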

• Nonfunctional Requirements

Design metrics of importance based on initial specification
• Performance: time required to process an image; must process the image fast enough to be useful
 1 second is a reasonable constraint: slower would be annoying, and faster is not necessary for the low end of the market. Therefore, it is a constrained metric
• Size: number of logic gates (2-input NAND gates) in the IC

 Must use an IC that fits in a reasonably sized camera
 Both a constrained and an optimization metric: at most 200K gates, but fewer is cheaper

• Power: a measure of average power consumed while processing

 Must operate below a certain temperature (no cooling fan), so a constrained metric

• Energy: battery lifetime (power x time)

 Reducing power or time reduces energy
 Optimization metric: want the battery to last as long as possible

Constrained metrics
• Values must be below (sometimes above) a certain threshold (e.g. “should use 0.001 W or less”)

Optimization metrics
• Improved as much as possible to improve the product. A metric can be both constrained and optimization
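The constrained metrics above can be checked programmatically; this is a minimal sketch using the 1 s and 200K-gate figures from the text (the 0.5 W power threshold and the sample values are illustrative assumptions):

```python
# Sketch: checking a candidate design against the constrained metrics
# above. The power threshold and sample values are illustrative.
def meets_constraints(process_time_s: float, gate_count: int,
                      power_w: float, max_power_w: float = 0.5) -> bool:
    return (process_time_s <= 1.0          # performance constraint
            and gate_count <= 200_000      # size constraint
            and power_w <= max_power_w)    # power (temperature) constraint

# Energy per image = power x time, the optimization metric for battery life.
energy_j = 0.4 * 0.8
print(round(energy_j, 2))                     # 0.32 J per image
print(meets_constraints(0.8, 180_000, 0.4))   # True
print(meets_constraints(1.5, 180_000, 0.4))   # False: too slow
```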

 

Outputs
 An encoded file for a picture
 Permanent storage of the picture as a file on a flash memory stick
 Screen display of the picture from the file after decoding
 File output to an interfaced computer and printer

 

 

Design Challenges

Optimizing Design Metrics
 Construct an implementation with the desired functionality

Key Design Challenge: Simultaneously optimize numerous design metrics. Improving one may worsen the others

• Design Metric
 A measurable feature of a system’s implementation
 Optimizing design metrics is a key challenge

Common Design Metrics

• Unit cost: The monetary cost of manufacturing each copy of the system, excluding NRE cost
• NRE cost (non-recurring engineering cost): the one-time monetary cost of designing the system
• Size: the physical space required by the system

• Performance: the execution time or throughput of the system

• Power: the amount of power consumed by the system

• Flexibility: the ability to change the functionality of the system without incurring heavy NRE cost

• Time-to-prototype: the time needed to build a working version of the system
• Time-to-market: the time required to develop a system to the point that it can be released and sold to customers. The average time-to-market constraint is about 8 months. The market window is the period during which the product would have its highest sales; delays can be costly.

• Maintainability: the ability to modify the system after its initial release
• Correctness, safety, many more
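The cost of missing the market window mentioned above is often estimated with a simplified triangular revenue model common in embedded-systems textbooks; this sketch uses that model (the 10-month window half-life and 4-month delay are illustrative assumptions):

```python
# Sketch: the simplified triangular market-window model. Revenue rises
# linearly to a peak at W months and falls back to zero at 2W; entering
# D months late loses the fraction D*(3W - D) / (2*W^2) of total revenue.
def revenue_loss_fraction(delay: float, window_half: float) -> float:
    """Fraction of total possible revenue lost by entering `delay` months late."""
    d, w = delay, window_half
    return d * (3 * w - d) / (2 * w * w)

# e.g. a 10-month rise-to-peak and a 4-month delay:
loss = revenue_loss_fraction(4, 10)
print(f"{loss:.0%}")   # 52% of potential revenue lost
```

This is why delays can be so costly: even a modest slip relative to the window wipes out a large share of lifetime sales.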

 

Two Key Tasks
• Processing images and storing them in memory

When the shutter is pressed:
o The image is captured
o Converted to digital form by a charge-coupled device (CCD)
o Compressed and archived in internal memory
• Uploading images to a PC
 The digital camera is attached to the PC
 Special software commands the camera to transmit the archived images serially

 

 

Digital Camera Design

The block diagram of a common imaging system includes the optical lenses, the color filter array (CFA) and the image sensor (CCD or CMOS). It also includes the main control systems, such as automatic gain control (AGC), the analog-to-digital converter (ADC), and auto-focus and auto-exposure circuitry. The digital signal path is completed with color and digital image processing, and the result is finally sent to the baseband for storage, or to the interface for visualization. The bilateral filter is a non-linear filter well suited for denoising applications; its demonstrated effectiveness and the simplicity of its formulation contribute to its popularity.
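The bilateral filter mentioned above can be sketched directly (a minimal pure-Python version for grayscale data; the radius and sigma values are illustrative assumptions, and real pipelines use optimized implementations):

```python
import math

# Sketch of a bilateral filter on a grayscale image (list of lists).
# Each output pixel is a weighted average of its neighbours, where the
# weight falls off with both spatial distance and intensity difference,
# so edges are preserved while flat regions are smoothed.
def bilateral(img, radius=1, sigma_s=1.0, sigma_r=30.0):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ny][nx] - img[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out

# A sharp edge between 0 and 200 survives filtering: pixels across the
# edge differ too much in intensity to contribute significant weight.
img = [[0, 0, 200, 200]] * 4
smoothed = bilateral(img)
print(round(smoothed[1][1]))   # stays near 0: the edge is preserved
```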

[Figure: General block diagram of a digital camera]

Most consumer digital cameras use a Bayer filter mosaic in combination with an optical anti-aliasing filter to reduce the aliasing due to the reduced sampling of the different primary-color images. A demosaicing algorithm is used to interpolate color information to create a full array of RGB image data. The Bayer filter pattern is a repeating 2×2 mosaic pattern of light filters, with green ones at opposite corners and red and blue in the other two positions. The high proportion of green takes advantage of properties of the human visual system, which determines brightness mostly from green and is far more sensitive to brightness than to hue or saturation.
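The 2x2 Bayer mosaic described above can be sketched as a tiling function (this uses the common RGGB ordering as an illustrative assumption; other orderings such as GRBG also exist):

```python
# Sketch: the repeating 2x2 Bayer mosaic tiled over a sensor, with
# green at opposite corners of each 2x2 cell (RGGB ordering).
def bayer_color(row: int, col: int) -> str:
    """Which primary the filter passes at a given photosite."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Tile a 4x4 patch and count each color: half the sites are green,
# matching the human visual system's greater sensitivity to green.
patch = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
for line in patch:
    print("".join(line))
# RGRG
# GBGB
# RGRG
# GBGB
counts = {c: sum(line.count(c) for line in patch) for c in "RGB"}
print(counts)   # {'R': 4, 'G': 8, 'B': 4}
```

Demosaicing then interpolates the two missing primaries at each photosite from its neighbours to produce the full RGB array.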

 

Signals, Events and Notifications
 User commands are given as signals from switches/buttons

 

 

 

References and Resources also include:

https://www.ee.ryerson.ca/~courses/ee8205/lectures/digital-camera-casestudy.pdf

 

 
