Remote sensing of the environment, and especially Earth observation, is witnessing an explosion in the volume of available observations, which offers unprecedented capabilities for global-scale monitoring of natural and artificial processes.
Satellite imaging payloads mostly operate a store-and-forward mechanism, whereby captured images are stored on board and transmitted to the ground later. As spatial resolution and swath increase, space missions must handle an extensive amount of imaging data. Moreover, the growth in volume, variety and complexity of measurements has made data analysis a bottleneck in the observation-to-knowledge pipeline.
Satellite imagery comes in several forms. Panchromatic imagery is single-band, usually rendered in black and white; the images taken by the CORONA satellites, launched by the United States National Reconnaissance Office in the 1960s, are an example. Multispectral images record colours beyond the visible RGB spectrum: GeoEye-1 provides four-band (RGBN) multispectral imagery, while Landsat offers up to 8 bands. Hyperspectral images have hundreds of very narrow bands that cover the continuous spectrum of light rather than discrete bands.
Space-borne imaging sensors generate tremendous volumes of data at very high rates, which demand more bandwidth for transmission and more memory for storage; yet storage capacity and communication bandwidth are expensive satellite resources. By compressing images as they are acquired, better use is made of the available storage and bandwidth.
Image compression is the process of creating an encoded, compact representation of an image without loss of significant information. Compression reduces the size of the image by using the minimum number of bits to encode redundant information, thereby enabling faster and more efficient transmission of data.
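As a toy illustration of encoding redundant information with fewer symbols, the sketch below implements simple run-length encoding (the function names and data are hypothetical, chosen only for this example):

```python
# Minimal run-length encoding (RLE) sketch: repeated pixel values are stored
# as (value, run_length) pairs instead of one symbol per pixel.

def rle_encode(pixels):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Reconstruct the original pixel sequence from the run list."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 255, 0, 0, 17, 17, 17]
encoded = rle_encode(row)          # 3 pairs instead of 9 values
assert rle_decode(encoded) == row  # lossless round trip
```

The nine-pixel row above collapses to three pairs; real codecs use far more sophisticated models, but the principle of exploiting repetition is the same.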
Image compression compensates for the limited on-board resources in terms of mass memory and downlink bandwidth, and thus provides a solution to the “bandwidth vs. data volume” dilemma of modern spacecraft. Compression is therefore becoming a very important feature in the payload image processing units of many satellites.
Digital images generally contain a significant amount of redundancy, and image compression techniques take advantage of it to reduce the number of bits required to represent the image. There are several types of redundancy in an image: spatial redundancy, statistical redundancy, and human vision redundancy. Removing these types of redundancy is, in essence, how compression is achieved.
(a) Spatial redundancy means that, due to interpixel correlations within the image, the value of any pixel can be partially predicted from the values of its neighbours. Spatial decorrelation methods, such as prediction or transformation, are usually employed to remove this redundancy. Prediction estimates the current pixel value from neighbouring pixels; differential pulse code modulation (DPCM) is a typical prediction-based technique. Transformation maps the image from the spatial domain into another domain, applying, for example, the discrete cosine transform (DCT) or the discrete wavelet transform (DWT).
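A minimal DPCM-style sketch of prediction-based decorrelation (left-neighbour prediction on one row; an illustration, not a flight implementation):

```python
import numpy as np

# Each pixel is predicted from its left neighbour; only the prediction
# error (residual) is stored. The decoder reverses the process exactly.

def dpcm_encode(row):
    row = np.asarray(row, dtype=np.int32)
    residuals = np.empty_like(row)
    residuals[0] = row[0]               # first pixel has no left neighbour
    residuals[1:] = row[1:] - row[:-1]  # error vs. left-neighbour prediction
    return residuals

def dpcm_decode(residuals):
    return np.cumsum(residuals)         # running sum undoes the differencing

row = [100, 102, 101, 105, 110, 110]
res = dpcm_encode(row)                  # residuals cluster near zero
assert list(dpcm_decode(res)) == row    # lossless reconstruction
```

Because neighbouring pixels are correlated, the residuals concentrate around zero and are cheaper to entropy-code than the raw intensities.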
(b) Statistical redundancy exploits the probabilities of symbols. The basic idea is to assign short codewords to high-probability symbols and long codewords to low-probability symbols. Huffman coding and arithmetic coding are two popular methods for removing statistical redundancy; they are usually called entropy coding.
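A compact sketch of Huffman code construction (a simplified illustration of entropy coding; the dictionary-merging representation is just one convenient way to build the codebook):

```python
import heapq
from collections import Counter

# Frequent symbols receive short codewords, rare symbols long ones.

def huffman_codes(symbols):
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:                   # degenerate single-symbol input
        return {s: "0" for s in freq}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# The most frequent symbol gets the shortest codeword:
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```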
(c) Human vision redundancy, relevant to lossy compression, exploits the fact that the eye is not very sensitive to high frequencies. It is normally removed by quantization, with high-frequency elements being coarsely quantized or even discarded.
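Quantization itself reduces to a division and a rounding; the step size below is a hypothetical value for illustration:

```python
# Uniform scalar quantization sketch: a coarser step for a coefficient
# discards detail, bounding the reconstruction error by step / 2.

def quantize(coeff, step):
    return round(coeff / step)      # irreversible: information is lost here

def dequantize(index, step):
    return index * step             # reconstruct at the interval midpoint

c = 37.4
q = quantize(c, step=10)            # index 4
assert abs(dequantize(q, step=10) - c) <= 5   # error bounded by step / 2
```

Larger steps for high-frequency coefficients mean more of them round to zero, which is exactly where most of the bit savings in lossy coders come from.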
Lossy and Lossless Compression
Compression techniques fall into two types: lossy and lossless. In lossy compression, only an approximation of the original image can be reconstructed, so some data is lost. In lossless compression, the original image can be reconstructed exactly, without any loss of data.
Lossless Compression Techniques
In lossless compression, the reconstructed image is identical to the original input image, with no loss of information. The image is first divided into smaller components, down to individual pixels, and the compression process is applied to each pixel.
For remote sensing and scientific satellite missions, lossless data compression is often preferred, as any loss of data significantly reduces the usability. There are many lossless compression algorithms, such as Huffman coding, arithmetic coding and Lempel-Ziv coding. However, for space applications, algorithms that reduce the on-board memory requirements and ground-station contact time are preferred.
This process is completed in two stages: in the first stage, the intensity value of each pixel is predicted from neighbouring intensity values; in the second stage, the difference between the predicted value and the actual value of the pixel is coded using an encoding method.
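The point of the first stage is to make the second stage cheaper. The sketch below, on synthetic data, compares the first-order entropy (bits per symbol) of raw pixels with that of their left-neighbour prediction residuals:

```python
import math
from collections import Counter

# First-order (Shannon) entropy of a symbol sequence, in bits per symbol.
def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

row = [100, 101, 102, 102, 103, 104, 104, 105]          # synthetic pixels
residuals = [row[0]] + [b - a for a, b in zip(row, row[1:])]

# Residuals concentrate on a few small values, so an entropy coder
# needs fewer bits per symbol for them than for the raw pixels:
assert entropy(residuals) < entropy(row)
```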
Lossy Compression Techniques
In lossy compression, the reconstructed image is not the same as the input image; some amount of loss is present in the new image. In exchange, it provides a higher compression ratio than lossless compression.
In spatial domain methods, only the spatial features of the image are taken into account and processed. These include vector quantization and block truncation coding (BTC). Fractal coding is another lossy compression method, which exploits the self-similar (fractal) structure of the image. Frequency domain techniques instead transform the image completely into the frequency domain, because computation is much easier there.
The transformation can be performed using various transforms, such as the Fourier transform, singular value decomposition (SVD) based methods, the Karhunen–Loève (KL) transform, the discrete cosine transform (DCT) and the wavelet transform.
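An 8-point DCT-II expressed as an orthonormal matrix gives a feel for how such transforms work (a sketch of the building block; real coders apply it to 2-D blocks and follow it with quantization):

```python
import numpy as np

# Orthonormal 8-point DCT-II basis matrix. The transform itself is
# invertible; compression comes later, from quantizing the coefficients.
N = 8
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] /= np.sqrt(2.0)            # DC row normalization

block = np.array([100, 102, 104, 106, 108, 110, 112, 114], dtype=float)
coeffs = C @ block                 # forward DCT: energy piles into low frequencies
restored = C.T @ coeffs            # inverse DCT (C is orthogonal)

assert np.allclose(restored, block)
assert abs(coeffs[0]) > abs(coeffs[7])   # DC term dominates for smooth data
```

For smooth data like this ramp, almost all of the energy lands in the first few coefficients; the rest can be coarsely quantized or dropped with little visible effect.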
Several lossy compression techniques are in common use. The DCT, based on transform coding, is effective because transform coefficients are obtained for each small block of the image. Block truncation coding finds the mean of non-overlapping blocks of an image, followed by thresholding. Vector quantisation (VQ) extends scalar quantisation to multiple dimensions, forming code vectors from non-overlapping image vectors.
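A sketch of classic BTC on a single block (the 4×4 block is synthetic; the reconstruction levels are the standard moment-preserving choice):

```python
import numpy as np

# Block truncation coding: keep only the block mean, standard deviation,
# and a 1-bit-per-pixel threshold map.

def btc_encode(block):
    mean, std = block.mean(), block.std()
    bitmap = block >= mean               # 1 bit per pixel
    return mean, std, bitmap

def btc_decode(mean, std, bitmap):
    m = bitmap.sum()                     # number of pixels above the mean
    n = bitmap.size
    if m in (0, n):                      # flat block: just the mean
        return np.full(bitmap.shape, mean)
    lo = mean - std * np.sqrt(m / (n - m))       # moment-preserving levels
    hi = mean + std * np.sqrt((n - m) / m)
    return np.where(bitmap, hi, lo)

block = np.array([[10, 12, 200, 202],
                  [11, 13, 201, 203],
                  [10, 12, 200, 202],
                  [11, 13, 201, 203]], dtype=float)
mean, std, bitmap = btc_encode(block)
approx = btc_decode(mean, std, bitmap)
# Lossy: the reconstruction preserves the block mean, not the exact values
assert np.isclose(approx.mean(), block.mean())
```

Each 4×4 block shrinks from 16 intensities to two statistics plus 16 bits, which is where BTC's compression comes from.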
Image Compression Systems
Generally, a compression system model consists of two distinct structural blocks: an encoder and a decoder. The encoder creates a codestream from the original input data. After transmission over the channel, the decoder generates reconstructed output data.
A typical encoder consists of three functional modules: a prediction module (for prediction-based compression systems) or a forward transform module (for transform-based compression systems) that performs the spatial decorrelation; a quantization module that reduces the dynamic range of the errors; and an entropy encoding module that reduces the coding redundancy. When lossless compression is desired, the quantization step is omitted, because it is an irreversible operation.
Correspondingly, the decoder consists of two functional modules: an entropy decoder and an inverse prediction or inverse transform. The quantization step causes irreversible information loss, and reconstruction of quantized data is based on the midpoints of each quantization interval.
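The prediction and quantization stages of this encoder/decoder model can be sketched end to end (the step size is hypothetical, and the entropy-coding stage is only noted in a comment). The quantizer here is "closed-loop": the encoder predicts from the value the decoder will actually reconstruct, so the error stays bounded by step/2:

```python
def encode(row, step=1):
    indices, prev = [], 0
    for x in row:
        q = round((x - prev) / step)   # quantize the prediction error
        indices.append(q)
        prev += q * step               # mirror the decoder's reconstruction
    return indices                     # (an entropy coder would follow here)

def decode(indices, step=1):
    out, prev = [], 0
    for q in indices:
        prev += q * step               # dequantize + inverse prediction
        out.append(prev)
    return out

row = [100, 103, 107, 110]
assert decode(encode(row)) == row                        # step=1: lossless
lossy = decode(encode(row, step=4), step=4)
assert max(abs(a - b) for a, b in zip(lossy, row)) <= 2  # error <= step/2
```

With step=1 the quantizer is the identity and the pipeline is lossless, matching the note above that lossless systems simply omit quantization.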
ESA to test Dotphoton’s technology for use in space
The Zug-based startup Dotphoton, developer of a raw image data compression technology, has signed a commercial agreement under the European Space Agency's General Support Technology Programme (GSTP). ESA will test Dotphoton's technology for use in space applications.
Dotphoton has developed a software solution for raw image data compression in critical applications, with guaranteed preservation of raw image quality and fidelity, thanks to insights from quantum physics. The company's core technology has been validated by major Swiss research centres and is already being used by biomedical camera manufacturers and laboratories.
Expanding its scope into the space field, Dotphoton has signed a commercial agreement under ESA's GSTP, with the coordination of the Swiss Space Office, to validate the integration of Dotphoton's compression technology for space applications. The output data generated by Dotphoton's software is suitable for the latest generation of machine vision and AI image processing algorithms, which is particularly important given the permanently growing volume of space image data. The products save a factor of 5–10 in storage space and network bandwidth, as well as the associated time, power and costs.
The GSTP supports technology developments for ESA's future missions, ensuring that the right technology is available at the right maturity at the right time. This project will therefore help to bring Dotphoton's software solution to a higher maturity level and to efficiently explore two possible paths towards a performant hardware implementation, based on either Vision Processing Unit (VPU) or Field Programmable Gate Array (FPGA) technology.
The final product, which will be co-developed by Dotphoton and ESA, could help ESA’s optical missions to reduce costs in the processing of raw image data and in satellite-to-ground image transmission, one of the important cost drivers for all modern missions.