Digital Image Processing
Image Rectification and Registration
Geometric distortions manifest themselves as errors in the position of a pixel relative to other
pixels in the scene and with respect to their absolute position within some defined map
projection. If left uncorrected, these geometric distortions render any data extracted from the
image useless. This is particularly so if the information is to be compared to other data sets,
be it from another image or a GIS data set. Distortions occur for many reasons.
Distortions arise, for instance, from changes in platform attitude (roll, pitch and yaw),
altitude, earth rotation, earth curvature, panoramic distortion and detector delay. Most of
these distortions can be modelled mathematically and are removed before the image is
supplied to the user. Changes in attitude, however, can be difficult to account for
mathematically, and so a procedure called image rectification is performed. Satellite systems
are nevertheless geometrically quite stable, and geometric rectification is a simple procedure
based on a mapping transformation relating real ground coordinates, say in easting and
northing, to image line and pixel coordinates. Rectification is the process of geometrically
correcting an image so that it can be represented on a planar surface, conform to other images
or conform to a map (Fig. 3). That is, it is the process by which the geometry of an image is
made planimetric. It is necessary whenever accurate area, distance and direction
measurements must be made from the imagery. It is achieved by transforming the data from
one grid system into another using a geometric transformation.
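As a rough sketch of such a mapping transformation, a first-order (affine) polynomial is commonly fitted to ground control points by least squares. The function names below are illustrative assumptions, not a prescribed procedure:

```python
import numpy as np

# Illustrative first-order (affine) mapping between ground coordinates
# (easting E, northing N) and image (pixel x, line y) coordinates:
#   x = a0 + a1*E + a2*N
#   y = b0 + b1*E + b2*N
def affine_map(E, N, a, b):
    """Apply affine coefficients a, b to ground coordinates."""
    x = a[0] + a[1] * E + a[2] * N
    y = b[0] + b[1] * E + b[2] * N
    return x, y

def fit_affine(ground, image):
    """Estimate the coefficients by least squares from ground control points.

    ground: (n, 2) array of (E, N); image: (n, 2) array of (x, y).
    """
    A = np.column_stack([np.ones(len(ground)), ground])
    a, *_ = np.linalg.lstsq(A, image[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, image[:, 1], rcond=None)
    return a, b
```

With the fitted coefficients, each cell of the output grid can be mapped back into the original image and its value resampled (for example by nearest neighbour).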
Image Enhancement
Image enhancement techniques improve the quality of an image as perceived by a human.
These techniques are useful because many satellite images, when examined on a colour
display, give inadequate information for image interpretation. There is no conscious effort to
improve the fidelity of the image with regard to some ideal form of the image. There exists a
wide variety of techniques for improving image quality. The contrast stretch, density slicing,
edge enhancement, and spatial filtering are the more commonly used techniques. Image
enhancement is attempted after the image is corrected for geometric and radiometric
distortions. Image enhancement methods are applied separately to each band of a
multispectral image. Digital techniques have been found to be more satisfactory than
photographic techniques for image enhancement, because of the precision and wide variety of
digital processes. Contrast generally refers to the difference in luminance or grey level values
in an image and is an important characteristic. It can be defined as the ratio of the maximum
intensity to the minimum intensity over an image. The contrast ratio has a strong bearing on
the resolving power and detectability of an image: the larger this ratio, the easier it is to
interpret the image. Satellite images often lack adequate contrast and require contrast
improvement.
Contrast Enhancement Contrast enhancement techniques expand the range of brightness
values in an image so that the image can be efficiently displayed in a manner desired by the
analyst. The density values in a scene are literally pulled farther apart, that is, expanded over
a greater range. The effect is to increase the visual contrast between two areas of different
uniform densities. This enables the analyst to discriminate easily between areas that initially
have only a small difference in density.
Linear Contrast Stretch This is the simplest contrast stretch algorithm. The grey values in
the original image and the modified image follow a linear relation in this algorithm. A density
number in the low range of the original histogram is assigned to extreme black and a value at
the high end to extreme white. The remaining pixel values are distributed linearly between
these extremes. Features or details that were obscure in the original image become clear in
the contrast-stretched image. The linear contrast stretch operation can be
represented graphically as shown in Fig. 4. To provide optimal contrast and colour variation
in colour composites the small range of grey values in each band is stretched to the full
brightness range of the output or display unit.
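The operation described above can be sketched in a few lines of NumPy. This is a minimal illustration (function name and the 0–255 output range are assumptions); it maps the band minimum to black and the band maximum to white:

```python
import numpy as np

# Minimal linear contrast stretch: the minimum grey value in the band is
# mapped to out_min (black) and the maximum to out_max (white); all
# values in between are scaled linearly. Assumes the band is not constant
# (hi > lo), otherwise the division below is undefined.
def linear_stretch(band, out_min=0, out_max=255):
    band = band.astype(np.float64)
    lo, hi = band.min(), band.max()
    stretched = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.rint(stretched).astype(np.uint8)
```

Applied per band of a colour composite, this expands each band's small range of grey values to the full brightness range of the display.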
Non-Linear Contrast Enhancement In these methods, the input and output data values
follow a non-linear transformation. The general form of the non-linear contrast enhancement
is defined by y = f(x), where x is the input data value and y is the output data value.
Non-linear contrast enhancement techniques have been found useful for enhancing the colour
contrast between closely related classes and the subclasses of a main class. One type of
non-linear contrast stretch involves scaling the input data logarithmically. This enhancement
has its greatest impact on the brightness values in the darker part of the histogram. It can be
reversed to enhance values in the brighter part of the histogram by scaling the input data
using an inverse log function.
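Both directions of this transformation can be sketched as follows; the exact scaling constants vary between implementations, so treat these as one plausible formulation, not a standard one:

```python
import numpy as np

# Logarithmic stretch: expands grey values in the darker part of the
# histogram. The constant c scales the curve so the brightest input
# value maps to out_max. Assumes non-negative input values.
def log_stretch(band, out_max=255):
    band = band.astype(np.float64)
    c = out_max / np.log1p(band.max())
    return np.rint(c * np.log1p(band)).astype(np.uint8)

# Inverse-log (exponential) stretch: expands the brighter part instead.
def exp_stretch(band, out_max=255):
    x = band.astype(np.float64) / band.max()  # normalise to [0, 1]
    y = np.expm1(x) / np.expm1(1.0)           # exponential curve through (0,0) and (1,1)
    return np.rint(out_max * y).astype(np.uint8)
```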
Histogram equalization is another non-linear contrast enhancement technique. In this
technique, the histogram of the original image is redistributed to produce a uniform
population density. This is obtained by grouping certain adjacent grey values; thus the
number of grey levels in the enhanced image is less than in the original image.
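A common way to realise this (one of several; the function name here is an assumption) is a look-up table built from the image's cumulative histogram:

```python
import numpy as np

# Histogram equalisation sketch: a look-up table (LUT) built from the
# cumulative histogram (CDF) remaps grey levels so the output histogram
# is approximately uniform. Sparsely populated adjacent grey values are
# merged, so the enhanced image has fewer distinct grey levels.
def hist_equalize(band, levels=256):
    hist, _ = np.histogram(band.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[band]
```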
SPATIAL FILTERING A characteristic of remotely sensed images is a parameter called
spatial frequency, defined as the number of changes in Brightness Value per unit distance for
any particular part of an image. If there are very few changes in Brightness Value over a
given area in an image, this is referred to as a low-frequency area. Conversely, if the
Brightness Value changes dramatically over short distances, this is an area of high frequency.
Spatial filtering is the process of dividing the image into its constituent spatial frequencies
and selectively altering certain spatial frequencies to emphasize some image features. This
technique increases the analyst's ability to discriminate detail. The three types of spatial
filters used in remote sensor data processing are: low-pass filters, band-pass filters and
high-pass filters.
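A 3x3 mean filter is the simplest low-pass example; the sketch below (function name assumed) averages each pixel with its eight neighbours:

```python
import numpy as np

# 3x3 low-pass (mean) filter: each output pixel is the average of its
# 3x3 neighbourhood, which suppresses high spatial frequencies and
# smooths the image. Image borders are handled by replicating edge pixels.
def low_pass_3x3(image):
    h, w = image.shape
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0
```

A simple high-pass result can then be obtained by subtracting this low-pass output from the original image.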
Edge Enhancement in the Spatial Domain For many remote sensing earth science
applications, the most valuable information that may be derived from an image is contained
in the edges surrounding various objects of interest. Edge enhancement delineates these edges
and makes the shapes and details comprising the image more conspicuous and perhaps easier
to analyze. Generally, what the eyes see as pictorial edges are simply sharp changes in
brightness value between two adjacent pixels. The edges may be enhanced using either linear
or nonlinear edge enhancement techniques.
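One common linear technique (shown here as a sketch; the Laplacian kernel and the weighting scheme are a standard choice, not the only one) detects those sharp changes and adds them back to the image:

```python
import numpy as np

# Linear edge enhancement with a 3x3 Laplacian: the Laplacian responds
# to sharp changes in brightness value between adjacent pixels; adding a
# weighted response back to the original image makes edges more
# conspicuous. Borders are handled by replicating edge pixels.
def laplacian_sharpen(image, weight=1.0):
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    lap = (4 * img
           - padded[:-2, 1:-1]   # pixel above
           - padded[2:, 1:-1]    # pixel below
           - padded[1:-1, :-2]   # pixel to the left
           - padded[1:-1, 2:])   # pixel to the right
    return img + weight * lap
```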
Band ratioing Sometimes differences in brightness values from identical surface materials
are caused by topographic slope and aspect, shadows, or seasonal changes in sunlight
illumination angle and intensity. These conditions may hamper the ability of an interpreter or
classification algorithm to identify correctly surface materials or land use in a remotely
sensed image. Fortunately, ratio transformations of the remotely sensed data can, in certain
instances, be applied to reduce the effects of such environmental conditions. In addition to
minimizing the effects of environmental factors, ratios may also provide unique information
not available in any single band that is useful for discriminating between soils and vegetation.
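The cancellation of illumination effects can be seen in a minimal sketch (the NIR/red pairing is a common vegetation example, used here as an assumption):

```python
import numpy as np

# Band ratio sketch: slope, aspect and shadow scale the brightness of
# both bands by roughly the same multiplicative factor, so that factor
# largely cancels in the pixel-by-pixel ratio. The small eps guards
# against division by zero.
def band_ratio(band_a, band_b, eps=1e-6):
    return band_a.astype(np.float64) / (band_b.astype(np.float64) + eps)
```

A shaded pixel whose brightness is halved in both bands yields (0.5a)/(0.5b) = a/b, the same ratio as its sunlit neighbour.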
Principal Component Analysis The multispectral image data is usually strongly correlated
from one band to the other. The level of a given picture element on one band can to some
extent be predicted from the level of that same pixel in another band. Principal component
analysis is a pre-processing transformation that creates new, mutually uncorrelated images
(the principal components) from the correlated values of the original bands. This is
accomplished by a linear transformation of variables that corresponds to a rotation and
translation of the original coordinate system.
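The rotation in question is defined by the eigenvectors of the band-to-band covariance matrix; the following sketch (function name assumed) treats bands as variables and pixels as observations:

```python
import numpy as np

# Principal component analysis of a multispectral image: subtracting the
# band means is the translation; projecting onto the eigenvectors of the
# band covariance matrix is the rotation that yields new, mutually
# uncorrelated component images, ordered by decreasing variance.
def principal_components(image_stack):
    """image_stack: (bands, rows, cols) array -> components, same shape."""
    bands, rows, cols = image_stack.shape
    X = image_stack.reshape(bands, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)   # translate to band means
    cov = np.cov(X)                      # band-to-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]    # largest variance first
    pcs = eigvecs[:, order].T @ X        # rotate into components
    return pcs.reshape(bands, rows, cols)
```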