Remote sensing is the science of acquiring information about the Earth’s surface without actually being in contact with it. This is done by sensing and recording reflected or emitted energy and processing, analyzing, and applying that information. Some examples of remote sensing are special cameras on satellites and airplanes taking images of large areas on the Earth’s surface or making images of temperature changes in the oceans.
Remotely sensed data may contain noise and other deficiencies introduced by the onboard sensors or by radiative transfer processes. Therefore, we often apply further preprocessing techniques to deal with such flaws. These processing techniques are generally referred to collectively as Image Processing.
Image processing is an umbrella term for the many functions that perform operations on an image, either to obtain an enhanced image or to analyze an image and extract useful information from it. It is a type of signal processing in which the input is an image and the output may be either an image or characteristics/features associated with that image.
In the past, the development of digital image processing was limited because the cost of processing was very high: imaging sensors and computational equipment were very expensive. As optics, imaging sensors, and computational technology advanced, image processing became commonly used in many different areas.
Although certain kinds of analog processing were performed in the past, today image processing is done in the digital domain. Its main components are importing, in which an image is captured through scanning or digital photography; analysis and manipulation of the image, accomplished using various specialized software applications; and output (e.g., to a printer or monitor). A digital image processing system is the combination of computer hardware and image processing software.
Some areas of application of digital image processing include image enhancement for better human perception, image compression and transmission, as well as image representation for automatic machine perception.
Some of the important applications of image processing in the field of science and technology include medical imaging, machine vision, robotics, computer-generated imagery (CGI), face detection, optical character recognition, finger-print detection, surveillance, videoconferencing and satellite data analysis.
Drone aircraft monitoring environmental and traffic conditions can use image processing to capture high-resolution, real-time videos and photographs. In the case of natural or other disasters such as floods, earthquakes, or fires, knowing which disaster-struck areas the authorities need to focus on can help save lives, allowing rescuers to reach those trapped quickly and bring them out safely. Even monitoring the progress of such rescue operations and ensuring coordination during them can be made easier with real-time image processing techniques.
Military Applications and Requirements
For Defense and Security, digital image processing has been widely deployed for applications such as small target detection and tracking, missile guidance, vehicle navigation, wide area surveillance, and automatic/aided target recognition.
One goal for an image processing approach in defense and security applications is to reduce the workload of human analysts in order to cope with the ever increasing volume of image data that is being collected. A second, more challenging goal for image processing researchers is to develop algorithms and approaches that will significantly aid the development of fully autonomous systems capable of decisions and actions based on all sensor inputs.
Image processing is a technology with a wide range of applications, such as target tracking by object recognition from a high-definition camera on a surveillance vehicle. The products that provide image processing are just as diverse as the applications that require it.
Image processing is playing an increasingly important role in defence systems for many reasons, such as the need for autonomous operation and the need to make greater use of the outputs from a diverse range of sophisticated sensors. Image data are becoming a powerful tool in meeting the world's needs in resource exploitation and management, and new analysis methods are being developed to take advantage of the new types of data.
At the tactical level, even enemy minefields may be sensed by satellites. At the strategic level, verification of arms control agreements depends strongly on image processing to identify and count missile silos in reconnaissance images.
The autonomous vehicle is one of the most important applications of image processing. Such a vehicle contains a small, modular computer control system, including vision modules that provide basic scene processing and object recognition capabilities. Developing an autonomous land vehicle with the capabilities envisaged requires an expert system for navigation as well as a sophisticated vision system.
The expert navigation system must plan routes using digital terrain and environmental data, devise strategies for avoiding unanticipated obstacles, estimate the vehicle's position from other data, update the on-board digital terrain database, generate moment-to-moment steering and speed commands, and monitor vehicle performance and on-board systems. All these functions must be accomplished in real time to near-real time while the vehicle is moving at speeds of up to 60 km/h.
The vision system must take in data from imaging sensors and interpret these data in real time to produce a symbolic description of the vehicle's environment. It must recognise roads and road boundaries; select, locate, and dimension fixed or moving obstacles in the roadway; detect, locate, and classify objects in open or forested terrain; locate and identify man-made and natural landmarks; and produce thematic maps of the local environment, all while moving at speeds of up to 60 km/h.
Targeting, surveillance, and command-and-control activities all need to make sense rapidly of large amounts of disparate and possibly unreliable information. This is bound to require the application of advanced Artificial Intelligence (AI) methods, especially those relating to image understanding. Subsequent control may be passed to an autonomous system, which will attempt to select an appropriate target from the captured image data set and initiate an appropriate response.
With more and more data available to the drivers of our military vehicles, the need to display these data concisely and quickly grows every day. As display resolution increases, so does the ability to present more data clearly on a single screen. These data might be external camera imagery, vehicle information, targeting information, or guidance information.
Of equal importance to these required computing capabilities are the weight, space, and power required. For a land reconnaissance vehicle, for example, the computers should occupy no more than about a cubic meter, should weigh less than 250 kg, and should consume less than 1 kW of power, including environmental support. For aerospace and undersea autonomous vehicles, the constraints and requirements will be tighter and need to include the capability to operate in high-radiation environments.
Image processing techniques
There are several Image processing techniques used in Earth observation, and we tend to categorize them into four broad categories: Preprocessing, transformation, correction, and classification.
Some distortions need to be corrected before analysis and post-processing can be carried out. The image preparation and processing operations carried out before analysis, to correct or minimize image distortions from, e.g., imaging systems, sensors, and observing conditions, are often referred to as pre-processing techniques.
Some typical pre-processing operations include, but are not limited to, the following types:
- Radiometric Correction
- Atmospheric Corrections
- Geometric Correction
An ideal Earth observation system should be equipped with a perfect spectro-radiometer that can measure accurately and uniformly the amount of energy that is reflected by objects located on the Earth’s surface. Unfortunately, the sunlight that illuminates objects is perturbed by its passage through the atmosphere and does not hit all objects at the same angle. What is more, the light that is reflected by the objects must also cross the atmosphere before being analysed by the satellite’s sensors, and this journey also perturbs the signal.
These perturbations are due to the presence of gases and dust that can absorb and/or reflect specific wavelengths, thereby changing the radiation’s spectral properties. What is more, the electronic processing of the rays that reach the sensors is also accompanied by some perturbations. Consequently, it is actually rather difficult to get accurate radiometric values from the data recorded by Earth-observing satellites.
Radiometric corrections are classified into two broad categories: Sun Angle/Topography radiometric corrections and Sensor Irregularities radiometric corrections. Sun Angle/Topography radiometric corrections correct for the effects of diffusion of sunlight, especially on water surfaces and in mountainous terrain, by estimating the shading curve.
On the other hand, sensor irregularity corrections involve removing radiometric noise caused by changes in sensor sensitivity or by degradation of the sensor. The correction process in this category calculates a new relationship between calibrated irradiance measurements and the sensor output signal; the process is therefore also called Calibration.
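As a minimal sketch of this calibration step, raw digital numbers (DN) can be converted to at-sensor radiance with a per-band linear gain and offset. The coefficient values below are invented for illustration; real coefficients come from a sensor's calibration metadata.

```python
# Sketch of sensor radiometric calibration: converting raw digital
# numbers (DN) to at-sensor radiance using per-band gain and offset.
# The gain/offset values below are hypothetical, for illustration only.

def calibrate(dn_values, gain, offset):
    """Apply a linear calibration L = gain * DN + offset to each pixel."""
    return [gain * dn + offset for dn in dn_values]

raw_band = [42, 87, 130, 201]                 # raw DN for four pixels
radiance = calibrate(raw_band, gain=0.037, offset=-1.5)
print(radiance)
```

In practice the relationship may also be band-dependent and time-dependent, which is why calibration coefficients are re-estimated as the sensor degrades.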
Now, it is sometimes very useful to be able to calibrate these data precisely, for example to compare data recorded by different satellites or by the same satellite at different times, and several solutions do exist to overcome these flaws. Some of them are based on complex mathematical models that describe the main interactions involved. These models are effective; however, applying them requires knowing the values of certain parameters (e.g. the atmospheric composition) when and where the pictures are taken, and this is seldom possible.
Other radiometric correction methods are based on the observation of reference targets whose radiometry is known. The surfaces of bodies of water, glacial ice caps, and expanses of desert sand are often used, but here too you can understand that actually making the corrections often is not that easy. In fact, the overwhelming majority of remote sensing research uses radiometrically uncorrected data.
Atmospheric correction methods also fall into two broad categories: absolute correction methods and relative correction methods.
Absolute correction methods consider several time-dependent parameters, including the solar zenith angle, the total optical depth of the aerosol, the irradiance at the top of the atmosphere, and the sensor viewing geometry, to correct atmospheric distortions.
However, absolute correction methods are complex, and exact measurements of atmospheric conditions are challenging to obtain. We therefore often use relative correction methods, which involve normalizing multiple images of a given scene, collected on different dates, against a reference scene.
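A minimal sketch of this relative normalization, under the common assumption that some pixels (pseudo-invariant features) do not change between dates: fit a linear mapping from the subject image to the reference image over those pixels, then apply it to the whole subject image. All pixel values below are invented for illustration.

```python
# Sketch of relative atmospheric correction: normalise a subject image
# to a reference image by fitting a linear mapping over pixels assumed
# invariant between the two dates (pseudo-invariant features).
# Pixel values are hypothetical, for illustration only.

def fit_linear(x, y):
    """Ordinary least-squares fit y ~ a*x + b, returning (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

subject_piv = [30, 60, 90, 120]     # pseudo-invariant pixels, subject date
reference_piv = [35, 65, 95, 125]   # same pixels, reference date
a, b = fit_linear(subject_piv, reference_piv)

subject_image = [10, 50, 100, 200]
normalised = [a * v + b for v in subject_image]
print(normalised)                    # → [15.0, 55.0, 105.0, 205.0]
```

Here the invariant pixels differ by a constant offset of 5, so the fit recovers slope 1 and intercept 5, and the whole image is shifted accordingly.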
In Sentinel 2 data products, for example, Level-2A processing includes an atmospheric correction applied to Top-Of-Atmosphere (TOA) Level-1C orthoimage products.
The images acquired by Earth observation systems cannot be transferred to maps as is, because they are geometrically distorted. These distortions are due to errors in the satellite’s positioning on its orbit, the fact that the Earth is turning on its axis as the image is being recorded, the effects of relief, etc. They are amplified even more by the fact that some satellites take oblique images.
Some distortions, such as the effects of the Earth’s rotation and camera angles, are predictable. They thus can be calculated and correction values applied systematically. Satellites also have sophisticated on-board systems to record very slight movements affecting the satellite. This information is used mainly to correct the satellite’s position (when this is necessary), but can also be used to correct the images geometrically.
To improve the precision of the corrections, reference points, or Ground Control Points (GCPs), identified on a topographical map or in the field by GPS, must be available. SPOT images that have been corrected in this way are accurate to approximately 50 m, and the data they contain may be presented on a given map projection, meaning that the images are superimposable on a map.
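As a hedged sketch of first-order geometric correction with GCPs: fit an affine transform from image coordinates (column, row) to map coordinates (x, y) using three control points, then apply it to any pixel. The GCP coordinates below are invented for illustration; real GCPs come from maps or GPS surveys, and operational systems use more points with a least-squares fit.

```python
# Sketch of first-order geometric correction: fit an affine transform
# (col, row) -> (x, y) from three ground control points (GCPs), solved
# exactly with Cramer's rule. GCP values are hypothetical.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(gcps):
    """gcps: list of ((col, row), (x, y)). Returns two coefficient
    triples so that x = a0 + a1*col + a2*row and likewise for y."""
    A = [[1.0, c, r] for (c, r), _ in gcps]
    d = det3(A)
    coeffs = []
    for axis in (0, 1):                      # solve for x, then for y
        t = [xy[axis] for _, xy in gcps]
        sol = []
        for col in range(3):                 # Cramer's rule, column by column
            M = [row[:] for row in A]
            for i in range(3):
                M[i][col] = t[i]
            sol.append(det3(M) / d)
        coeffs.append(sol)
    return coeffs

gcps = [((0, 0), (500000.0, 4000000.0)),
        ((100, 0), (501000.0, 4000000.0)),
        ((0, 100), (500000.0, 3999000.0))]
ax, ay = fit_affine(gcps)
col, row = 50, 50
x = ax[0] + ax[1] * col + ax[2] * row        # → 500500.0
y = ay[0] + ay[1] * col + ay[2] * row        # → 3999500.0
```

With these hypothetical GCPs the fit recovers a 10 m pixel size, so the image centre maps halfway between the control points, as expected.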
Image processing algorithms
The formation of an image, its conversion from one form into another, or its transmission from one place to another often involves some degradation of image quality, with the result that the image then requires subsequent improvement, enhancement, or restoration. Image restoration is commonly defined as the reconstruction or estimation of an image to correct for degradation and approximate an ideal, degradation-free image as closely as possible. Image enhancement involves operations that improve the appearance of an image to a human viewer, or convert an image to a format better suited to machine processing.
The basic distinction between enhancement and restoration is that with the former no attempt is made to establish the underlying degradation, while with the latter specific known or estimated degradation processes are assumed. The former embraces such techniques as contrast modification, deblurring, smoothing, and noise removal, while restoration tends to revolve around the formalism of filter theory.
Most digital image display systems allow each 'primitive' colour to be broken into 256 degrees of intensity. An image that uses this entire intensity scale, that is, one containing values coded from 0 to 255, has excellent contrast, for the colour range will extend from black to white and include fully saturated colours. In contrast, an image that uses only a narrow range of numerical values will lack contrast (it will look 'greyish'). A contrast stretch is almost always applied before remote sensing images are analysed. This is because the Earth-observing satellites' sensors are set to be able to record very different lighting conditions, ranging from deserts and ice floes (highly reflective areas) to equatorial rainforests and oceans (very dark areas).
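A minimal sketch of such a linear contrast stretch: the observed minimum is mapped to 0, the maximum to 255, and everything in between is scaled linearly. The pixel values are invented for illustration.

```python
# Sketch of a linear contrast stretch: remap an image's narrow range
# of values onto the full 0-255 display scale. Pixel values are
# hypothetical, for illustration only.

def contrast_stretch(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                         # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

dull = [60, 70, 80, 90, 100]             # low-contrast 'greyish' image
print(contrast_stretch(dull))            # → [0, 64, 128, 191, 255]
```

After the stretch the same relative differences between pixels span the whole display range, which is exactly why the image gains contrast without gaining information.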
Image processing algorithms involve the repetition of some computations over large amounts of data. The process of moving a filter mask over the image and computing the sum of products at each location is generally called correlation.
Filtering is one of the main techniques used in the field of image processing. The principle of the various filters is to modify the numerical value of each pixel as a function of the neighbouring pixels’ values. For example, if the value of each pixel is replaced by the average of its value and those of its eight neighbours the image is smoothed, that is to say, the finer details disappear and the image appears fuzzier.
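The smoothing example just described can be sketched as a 3x3 mean filter applied by correlation. The tiny image below is invented for illustration, and border pixels are left unchanged for simplicity.

```python
# Sketch of the smoothing filter described above: each interior pixel
# is replaced by the average of itself and its eight neighbours
# (a 3x3 mean filter applied by correlation over the image).

def mean_filter3x3(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]      # borders kept as-is
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(window) / 9
    return out

noisy = [[10, 10, 10],
         [10, 100, 10],                  # one noisy spike in the centre
         [10, 10, 10]]
print(mean_filter3x3(noisy))             # centre becomes (8*10 + 100) / 9 = 20.0
```

Note how the spike is pulled down towards its neighbours: this is the sense in which finer details disappear and the image appears fuzzier.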
Filtering is used for enhancing and modifying the input image. With the help of different filters, you can emphasize or remove certain features in an image, reduce image noise, and so on. Popular filtering techniques include linear filtering, median filtering, and Wiener filtering.
Edge detection uses filters for image segmentation and data extraction. By detecting discontinuities in brightness, this method helps to find meaningful edges of objects in processed images. Canny edge detection, Sobel edge detection, and Roberts edge detection are among the most popular edge detection techniques.
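As a minimal sketch of Sobel edge detection: correlate the image with the horizontal and vertical Sobel kernels and combine the two gradients into an edge magnitude. The image is invented for illustration, and border pixels are skipped.

```python
# Sketch of Sobel edge detection: correlate with horizontal and
# vertical Sobel kernels, then combine the gradients into an edge
# magnitude at each interior pixel.

import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[di][dj] * image[i + di - 1][j + dj - 1]
                     for di in range(3) for dj in range(3))
            gy = sum(SOBEL_Y[di][dj] * image[i + di - 1][j + dj - 1]
                     for di in range(3) for dj in range(3))
            out[i][j] = math.hypot(gx, gy)
    return out

# A vertical step edge: dark on the left, bright on the right.
step = [[0, 0, 255, 255]] * 4
edges = sobel_magnitude(step)
```

The magnitude peaks along the brightness discontinuity between the dark and bright columns, which is exactly the 'meaningful edge' the method is meant to find.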
Image processing Hardware
The three most common choices for image processing platforms in machine vision applications are the central processing unit (CPU), graphics processing unit (GPU), and field programmable gate array (FPGA). CPUs are the heart of traditional desktop computers and laptops. In phones or tablets, an ARM processor that draws less power serves the CPU function. CPUs have larger instruction sets and a large library of native computer languages like C, C++, Java, C#, and Python. Some of these languages have packages that can transfer functions to and run on a GPU.
With the rapid evolution of digital imaging technology, the computational load of image processing algorithms is growing due to increasing algorithm complexity and increasing image size. This scenario typically leads to choosing a high-end microprocessor for image processing tasks. However, most image processing systems have quite strict requirements in other aspects, such as size, cost, power consumption, and time-to-market, that cannot be easily satisfied just by selecting a more powerful microprocessor. Meeting all these requirements is becoming increasingly challenging.
In consumer products, image processing functions are usually implemented by specialized processors, such as Digital Signal Processors (DSPs) or Application Specific Standard Products (ASSPs). However, as image processing complexity increases, DSPs with a large number of parallel units are needed. Such powerful DSPs become expensive and their performance tends to lag behind image processing requirements. On the other hand, ASSPs are inflexible, expensive and time-consuming to develop.
Hardware acceleration is a suitable way to increase performance by using dedicated hardware architectures that perform parallel processing. GPUs have traditionally been used to render the pixels, i.e. the graphics, in video games on PCs; laptop computers also usually have GPUs. The better the GPU, the better the graphics quality and the higher the frame rates. For image processing applications, a GPU performs the same function, but in reverse. Instead of starting with the conditions within a video game and rendering them onto a screen with millions of pixels, in machine vision millions of pixels are processed down to help software interpret and understand the images. Because they have an architecture composed of many parallel cores and optimized pixel math, GPUs process images and draw graphics very effectively.
With the advent of Field Programmable Gate Array (FPGA) technology, dedicated hardware architectures can be implemented with lower costs. Cost and time-to-market are also greatly reduced as manufacturing is avoided and substituted by field programming. Finally, power consumption can be reduced as the circuitry is optimized for the application.
The programmable circuits of FPGAs run custom programs downloaded to the card, configuring them to accomplish the desired task in lower-level logic that requires less power than a CPU or GPU. An FPGA also does not require the overhead of an operating system. Modern FPGA devices include a large amount of general-purpose logic resources, such as Look-Up Tables (LUTs) and registers. In addition, they commonly include an increasing number of specialized components, such as embedded multipliers or embedded memories, which are very useful for implementing digital signal processing functions.
Depending on the complexity of the application, image processors can require significant processing power and this can prove to be a challenge to deploy in the rugged environment of military vehicles. Curtiss-Wright has taken on this challenge and provided solutions ranging from 1,000W to 3,000W for several different deployed applications. Through the use of air-flow through technology, Curtiss-Wright has been able to cool as much as 200W in a single card slot, while not requiring any exotic cooling infrastructure from the vehicle. So regardless of the processing needs, Curtiss-Wright has a solution.
With the development of information technology and the rapid development of image data collection technology, various industries generate large amounts of multimedia data every day, and most of these data are digital image data. Faced with the explosive growth of digital image data, traditional stand-alone image processing faces many problems, such as low processing speeds and poor concurrency. The traditional image processing mode cannot evolve to meet the needs of users, and it is therefore necessary to find a new, effective image processing mode.
Cloud computing is an Internet-based computing model with wide participation, in which computing resources (computing, storage, and interaction) are provided as dynamic, scalable, virtualized services. Cloud computing is a form of distributed computing that involves breaking huge data-processing programs down into countless small programs through the network "cloud", processing and analysing these small programs on a system containing multiple servers, and returning the result to the user. In short, early cloud computing performed simple distributed computing, solving the distribution problem and combining the calculation results, which is why cloud computing is also called grid computing. Thanks to this technology, thousands of data points can be processed in a short time (a few seconds) to provide powerful network services.
In cloud computing, models for optimizing massive image data can be made accurate and reasonable: cloud computing can extract effective, valuable information and make that information reliable and convincing. Using intelligent optimization algorithms to solve image data model problems with big data and cloud computing provides new theory and assistance tools for processing massive image data. A main strength of cloud computing is its scalability, which can provide users with a whole new experience. The core of cloud computing is the coordination of many computer resources: users can receive effectively unlimited resources through the network, unconstrained by geography or time. This simplifies the modification of image processing software code and facilitates various image processing methods.
Currently, cloud services are not only a form of distributed computing but also include hybrid computing and computing technologies such as distributed computing, service computing, load balancing, parallel computing, network storage, hot backup, and virtualization.
Shadow brightening image enhancement patent issued to Navy
Greg Fleizach, an electrical engineer at the Naval Information Warfare Center in San Diego, earned another patent for image correction technologies in November 2020. The Navy's application for 20-year protection on Fleizach's shadow brightening image enhancement was granted by the patent office, which numbered it U.S. Patent 10,846,834.
Fleizach’s invention helps correct images taken of natural scenes (in the patent he used a photograph of a backlit International Space Station orbiting the Earth) by brightening dark pixels in shadow with a gamma transformation map while leaving the rest of the image alone. The gamma transformation is scaled to prevent blooming and washout in brighter areas.
“Instead of a single gamma value being applied to every pixel in the original input image, the present system and method let the user divide the image into two regions: shadowed and bright regions. Thereafter, the shadowed and bright regions may be handled separately so that the bright regions are not further brightened, but the shadowed regions are enhanced,” according to the new patent. Earlier in 2020, Fleizach’s work on bidirectional edge highlighting with strong edge attenuation earned the Navy U.S. Patent 10,636,127.
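A minimal sketch of the general shadow-brightening idea described above (this is not the patented method itself, which additionally scales the transformation to prevent blooming): pixels below a shadow threshold are brightened with a gamma transform while brighter pixels are left alone. The threshold and gamma value are invented for illustration.

```python
# Sketch of region-selective gamma brightening: pixels below a shadow
# threshold are brightened with gamma < 1; bright pixels are untouched.
# Threshold and gamma are hypothetical, for illustration only; this is
# a simplification, not the patented method.

def brighten_shadows(pixels, threshold=80, gamma=0.5):
    out = []
    for p in pixels:
        if p < threshold:
            # Normalise to [0, 1], apply gamma < 1 to brighten, rescale.
            out.append(round(255 * (p / 255) ** gamma))
        else:
            out.append(p)                # bright region: unchanged
    return out

scene = [10, 40, 79, 120, 250]           # mixed shadow and bright pixels
print(brighten_shadows(scene))
```

This simplified version can introduce a visible discontinuity at the threshold; the patent's scaling of the gamma map addresses exactly that kind of artefact in the bright regions.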